Arrant Pedantry

Relative Pronoun Redux

A couple of weeks ago, Geoff Pullum wrote on Lingua Franca about the that/which rule, which he calls “a rule which will live in infamy”. (For my own previous posts on the subject, see here, here, and here.) He runs through the whole gamut of objections to the rule—that the rule is an invention, that it started as a suggestion and became canonized as grammatical law, that it has “an ugly clutch of exceptions”, that great writers (including E. B. White himself) have long used restrictive which, and that it’s really the commas that distinguish between restrictive and nonrestrictive clauses, as they do with other relative pronouns like who.

It’s a pretty thorough deconstruction of the rule, but in a subsequent Language Log post, he despairs of converting anyone, saying, “You can’t talk people out of their positions on this; they do not want to be confused with facts.” And sure enough, the commenters on his Lingua Franca post proved him right. Perhaps most maddening was this one from someone posting as losemygrip:

Just what the hell is wrong with trying to regularize English and make it a little more consistent? Sounds like a good thing to me. Just because there are inconsistent precedents doesn’t mean we can’t at least try to regularize things. I get so tired of people smugly proclaiming that others are being officious because they want things to make sense.

The desire to fix a problem with the language may seem noble, but in this case the desire stems from a fundamental misunderstanding of the grammar of relative pronouns, and the that/which rule, rather than regularizing the language and making it a little more consistent, actually introduces a rather significant irregularity and inconsistency. The real problem is that few if any grammarians realize that English has two separate systems of relativization: the wh words and that, and they work differently.

If we ignore the various prescriptions about relative pronouns, we find that the wh words (the pronouns who/whom/whose and which, the adverbs where, when, why, whither, and whence, and the where + preposition compounds) form a complete system on their own. The pronouns who and which mark a distinction of personhood or animacy—people and sometimes animals or other personified things get who, while everything else gets which. But both pronouns function restrictively and nonrestrictively, and so do most of the other wh relatives. (Why occurs almost exclusively as a restrictive relative adverb after reason.)

With all of these relative pronouns and adverbs, restrictiveness is indicated with commas in writing or a small pause in speech. There’s no need for a lexical or morphological distinction to show restrictiveness with who or where or any of the others—intonation or punctuation does it all. There are a few irregularities in the system—for instance, which has no genitive form and must use whose or of which, and who declines for cases while which does not—but on the whole it’s rather orderly.

That, on the other hand, is a system all by itself, and it’s rather restricted in its range. It only forms restrictive relative clauses, and then only in a narrow range of syntactic constructions. It can’t follow a preposition (the book of which I spoke rather than *the book of that I spoke) or the demonstrative that (they want that which they can’t have rather than *they want that that they can’t have), and it usually doesn’t occur after coordinating conjunctions. But it doesn’t make the same personhood distinction that who and which do, and it functions as a relative adverb sometimes. In short, the distribution of that is a subset of the distribution of the wh words. They are simply two different ways to make relative clauses, one of which is more constrained.
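To make the subset relationship concrete, here is a minimal sketch in Python (a toy model of my own, with invented feature names like restrictive and after_preposition, not a real grammar) that encodes the two systems as functions and checks that that is never licensed in a slot where no wh word is:

```python
# Toy model of the two relativization systems described above. The feature
# names and the simplifications are mine; the point is only to illustrate
# that the slots where "that" can appear are a subset of the slots the
# wh words already cover.
from itertools import product

def wh_relativizers(restrictive, human, after_preposition):
    """The wh system: who for people, which for everything else,
    restrictively and nonrestrictively, including after a preposition."""
    return {"who(m)"} if human else {"which"}

def that_relativizers(restrictive, human, after_preposition):
    """The that system: restrictive clauses only, never directly after a
    preposition, and indifferent to the personhood distinction."""
    return {"that"} if restrictive and not after_preposition else set()

for features in product([True, False], repeat=3):
    wh = wh_relativizers(*features)
    th = that_relativizers(*features)
    # Wherever "that" is available, some wh word is available too.
    assert not th or wh, features
    print(dict(zip(["restrictive", "human", "after_preposition"], features)), wh | th)
```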

Proscribing which in its role as a restrictive relative where it overlaps with that doesn’t make the system more regular—it creates a rather strange hole in the middle of the wh relative paradigm and forces speakers to use a word from a completely different paradigm instead. It actually makes the system irregular. It’s a case of missing the forest for the trees. Grammarians have looked at the distribution of which and that, misunderstood it, and tried to fix it based on their misunderstanding. But if they’d step back and look at the system as a whole, they’d see that the problem is an imagined one. If you think the system doesn’t make sense, the solution isn’t to try to hammer it into something that does make sense; the solution is to figure out what kind of sense it makes. And it makes perfect sense as it is.

I’m sure, as Professor Pullum was, that I’m not going to make a lot of converts. I can practically hear copy editors’ responses: But following the rule doesn’t hurt anything! Some readers will write us angry letters if we don’t follow it! It decreases ambiguity! To the first I say, of course it hurts, in that it has a cost that we blithely ignore: every change a copy editor makes takes time, and that time costs money. Are we adding enough value to the works we edit to recoup that cost? I once saw a proof of a book wherein the proofreader had marked every single restrictive which—and there were four or five per page—to be changed to that. How much time did it take to mark all those whiches for two hundred or more pages? How much more time would it have taken for the typesetter to enter those corrections and then deal with all the reflowed text? I didn’t want to find out the answer—I stetted every last one of those changes. Furthermore, the rule hurts all those who don’t follow it and are therefore judged as being sub-par writers at best or idiots at worst, as Pullum discussed in his Lingua Franca post.

To the second response, I’ve said before that I don’t believe we should give so much power to the cranks. Why should they hold veto power over everyone else’s usage? If their displeasure is such a problem, give me some evidence that we should spend so much time and money pleasing them. Show me that the economic cost of not following the rule in print is greater than the cost of following it. But stop saying that we as a society need to cater to this group and assuming that this ends the discussion.

To the last response: No, it really doesn’t. Commas do all the work of disambiguation, as Stan Carey explains. The car which I drive is no more ambiguous than The man who came to dinner. They’re only ambiguous if you have no faith in the writer’s or editor’s ability to punctuate and thus assume that there should be a comma where there isn’t one. But requiring that in place of which doesn’t really solve this problem, because the same ambiguity exists for every other relative clause that doesn’t use that. Note that Bryan Garner allows either who or that with people; why not allow either which or that with things? Stop and ask yourself how you’re able to understand phrases like The house in which I live or The woman whose hair is brown without using a different word to mark that it’s a restrictive clause. And if the that/which rule really is an aid to understanding, give me some evidence. Show me the results of an eye-tracking study or fMRI or at least a well-designed reading comprehension test geared to show the understanding of relative clauses. But don’t insist on enforcing a language-wide change without some compelling evidence.

The problem with all the justifications for the rule is that they’re post hoc. Someone made a bad analysis of the English system of relative pronouns and proposed a rule to tidy up an imagined problem. Everything since then has been a rationalization to continue to support a flawed rule. Mark Liberman said it well on Language Log yesterday:

This is a canonical case of a self-appointed authority inventing a grammatical theory, observing that elite writers routinely violate the theory, and concluding not that the theory is wrong or incomplete, but that the writers are in error.

Unfortunately, this is often par for the course with prescriptive rules. The rule is taken a priori as correct and authoritative, and all evidence refuting the rule is ignored or waved away so as not to undermine it. Prescriptivism has come a long way in the last century, especially in the last decade or so as corpus tools have made research easy and data more accessible. But there’s still a long way to go.

Update: Mark Liberman has a new post on the that/which rule which includes links to many of the previous Language Log posts on the subject.

Funner Grammar

As I said in the addendum to my last post, maybe I’m not so ready to abandon the technical definition of grammar. In a recent post on Copyediting, Andrea Altenburg criticized the word funner in an ad for Chuck E. Cheese as “improper grammar”, and my first reaction was “That’s not grammar!”

That’s not entirely accurate, of course, as Matt Gordon pointed out to me on Twitter. The objection to funner was originally grammatical, and the Copyediting post does make an appeal to grammar. The argument goes like this: fun is properly a noun, not an adjective, and as a noun, it can’t take comparative or superlative degrees—no funner or funnest.

This seems like a fairly reasonable argument—if a word isn’t an adjective, it can’t inflect like one—but it isn’t the real argument. First of all, it’s not really true that fun was originally a noun. As Ben Zimmer explains in “Dear Apple: Stop the Funnification”, the noun fun arose in the late seventeenth century and was labeled by Samuel Johnson in the mid-1700s “as ‘a low cant word’ of the criminal underworld.” But the earliest citation for fun is as a verb, fourteen years earlier.

As Merriam-Webster’s Dictionary of English Usage notes, “A couple [of usage commentators] who dislike it themselves still note how nouns have a way of turning into adjectives in English.” Indeed, this sort of functional shift—also called zero derivation or conversion by linguists because it changes the part of speech without the addition of a prefix or suffix—is quite common in English. English lacks case endings and has little in the way of verbal endings, so it’s quite easy to change a word from one part of speech to another. The transformation of fun from a verb to a noun to an inflected adjective came slowly but surely.

As this great article explains, shifts in function or meaning usually happen in small steps. Once fun was established as a noun, you could say things like We had fun. This is unambiguously a noun—fun is the object of the verb have. But then you get constructions like The party was fun. This is structurally ambiguous—both nouns and adjectives can go in the slot after was.

This paves the way for analyzing fun as an adjective. It then moved into attributive use, directly modifying a following noun, as in fun fair. Nouns can do this too, so once again the structure was ambiguous, but it was evidence that fun was moving further in the direction of becoming an adjective. In the twentieth century it started to be used in more unambiguously adjectival roles. MWDEU says that this accelerated after World War II, and Mark Davies’ Corpus of Historical American English (COHA) shows that it especially picked up in the last twenty years.

Once fun was firmly established as an adjective, the inflected forms funner and funnest followed naturally. There are only a handful of hits for either in the Corpus of Contemporary American English (COCA), which attests to the fact that they’re still fairly new and relatively colloquial. But let’s get back to Altenburg’s post.

She says that fun is defined as a noun and thus can’t be inflected for comparative or superlative forms, but then she admits that dictionaries also define fun as an adjective with the forms funner and funnest. But she waves away these definitions by saying, “However, dictionaries are starting to include more definitions for slang that are still not words to the true copyeditor.”

What this means is that she really isn’t objecting to funner on grammatical grounds (at least not in the technical sense); her argument simply reduces to an assertion that funner isn’t a word. But as Stan Carey so excellently argued, “‘Not a word’ is not an argument”. And even the grammatical objections are eroding; many people now simply assert that funner is wrong, even if they accept fun as an adjective, as Grammar Girl says here:

Yet, even people who accept that “fun” is an adjective are unlikely to embrace “funner” and “funnest.” It seems as if language mavens haven’t truly gotten over their irritation that “fun” has become an adjective, and they’ve decided to dig in their heels against “funner” and “funnest.”

It brings to mind the objection against sentential hopefully. Even though there’s nothing wrong with sentence adverbs or with hopefully per se, it was a new usage that drew the ire of the mavens. The grammatical argument against it was essentially a post hoc justification for a ban on a word they didn’t like.

The same thing has happened with funner. It’s perfectly grammatical in the sense that it’s a well-formed, meaningful word, but it’s fairly new and still highly informal and colloquial. (For the record, it’s not slang, either, but that’s a post for another day.) If you don’t want to use it, that’s your right, but stop saying that it’s not a word.

Which Hunting

I meant to blog about this several weeks ago, when the topic came up in my corpus linguistics class from Mark Davies, but I didn’t have time then. And I know the that/which distinction has been done to death, but I thought this was an interesting look at the issue that I hadn’t seen before.

For one of our projects in the corpus class, we were instructed to choose a prescriptive rule and then examine it using corpus data, determining whether the rule was followed in actual usage and whether it varied over time, among genres, or between the American and British dialects. One of my classmates (and former coworkers) chose the that/which rule for her project, and I found the results enlightening.

She searched for the sequences “[noun] that [verb]” and “[noun] which [verb],” which aren’t perfect—they obviously won’t find every relative clause, and they’ll pull in a few non-relatives—but the results serve as a rough measurement of their relative frequencies. What she found is that before about the 1920s, the two were used with nearly equal frequency. That is, the distinction did not exist. After that, though, which takes a dive and that surges. The following chart shows the trends according to Mark Davies’ Corpus of Historical American English and his Google Books N-grams interface.

It’s interesting that although the two corpora show the same trend, Google Books lags a few decades behind. I think this is a result of the different style guides used in different genres. Perhaps style guides in certain genres picked up the rule first, from whence it disseminated to other style guides. And when we break out the genres in COHA, we see that newspapers and magazines lead the plunge, with fiction and nonfiction books following a few decades later, though use of which is apparently in a general decline the entire time. (NB: The data from the first decade or two in COHA often seems wonky; I think the word counts are low enough in those years that strange things can skew the numbers.)

[Chart: Proportion of “which” by genre in COHA]
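For anyone who wants to try a rough version of this kind of search on freely available data, here is a minimal sketch using NLTK’s POS-tagged Brown corpus rather than COHA or Google Books (my own illustration, not my classmate’s actual queries):

```python
# A rough approximation of the "[noun] that [verb]" vs. "[noun] which [verb]"
# search described above, run on NLTK's tagged Brown corpus. Like the original
# queries, it misses many relative clauses and catches some non-relatives;
# it's only meant to give a rough sense of relative frequencies.
from collections import Counter
import nltk

nltk.download("brown", quiet=True)
nltk.download("universal_tagset", quiet=True)
from nltk.corpus import brown

counts = Counter()
for sent in brown.tagged_sents(tagset="universal"):
    for (w1, t1), (w2, _), (w3, t3) in zip(sent, sent[1:], sent[2:]):
        if t1 == "NOUN" and w2.lower() in ("that", "which") and t3 == "VERB":
            counts[w2.lower()] += 1

total = sum(counts.values())
for relativizer, n in counts.most_common():
    print(f"{relativizer}: {n} ({n / total:.1%})")
```

Since the Brown corpus is edited American text from 1961, well after the rule took hold in newspapers and magazines, that should come out comfortably ahead of which if the trend described above holds.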

The strange thing about this rule is that so many people not only take it so seriously but slander those who disagree, as I mentioned in this post. Bryan Garner, for instance, solemnly declares—without any evidence at all—that those who don’t follow the rule “probably don’t write very well,” while those who follow it “just might.”[1] (This elicited an enormous eye roll from me.) But Garner later tacitly acknowledges that the rule is an invention—not by the Fowler brothers, as some claim, but by earlier grammarians. If the rule did not exist two hundred years ago and was not consistently enforced until the 1920s or later, how did anyone before that time ever manage to write well?

I do say enforced, because most writers do not consistently follow it. In my research for my thesis, I’ve found that changing “which” to “that” is the single most frequent usage change that copy editors make. If so many writers either don’t know the rule or can’t apply it consistently, it stands to reason that most readers don’t know it either and thus won’t notice the difference. Some editors and grammarians might take this as a challenge to better educate the populace on the alleged usefulness of the rule, but I take it as evidence that it’s just not useful. And anyway, as Stan Carey already noted, it’s the commas that do the real work here, not the relative pronouns. (If you’ve already read his post, you might want to go and check it out again. He’s added some updates and new links to the end.)

And as I noted in my previous post on relatives, we don’t observe a restrictive/nonrestrictive distinction with who(m) or, for that matter, with relative adverbs like where or when, so at the least we can say it’s not a very robust distinction in the language and certainly not necessary for comprehension. As with so many other supposedly useful distinctions, its usefulness is taken to be self-evident, but the evidence for it is less than compelling. It seems more likely that it’s one of those random things that sometimes gets grammaticalized, like gender or evidentiality. (Though it’s not fully grammaticalized, because it’s not obligatory and is not a part of the natural grammar of the language, but is a rule that has to be learned later.)

Even if we just look at that and which, we find a lot of exceptions to the rule. You can’t use that as the object of a preposition, even when it’s restrictive. You can’t use it after a demonstrative that, as in “Is there a clear distinction between that which comes naturally and that which is forced, even when what’s forced looks like the real thing?” (I saw this example in COCA and couldn’t resist.) And Garner even notes “the exceptional which”, which is often used restrictively when the relative clause is somewhat removed from its noun.[2] And furthermore, restrictive which is frequently used in conjoined relative clauses, such as “Eisner still has a huge chunk of stock options—about 8.7 million shares’ worth—that he can’t exercise yet and which still presumably increase in value over the next decade,” to borrow an example from Garner.[3]

Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in its formulation. I’ll explain what I think is going on with that and which in a later post.

[1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”
[2] S.v. “Remote Relatives. B. The Exceptional which.”
[3] S.v. “which. D. And which; but which.”

Rules, Regularity, and Relative Pronouns

The other day I was thinking about relative pronouns and how they get so much attention from usage commentators, and I decided I should write a post about them. I was beaten to the punch by Stan Carey, but that’s okay, because I think I’m going to take it in a somewhat different direction. (And anyway, great minds think alike, right? But maybe you should read his post first, along with my previous post on who and that, if you haven’t already.)

I’m not just talking about that and which but also who, whom, and whose, the last of which is technically a relative possessive adjective. Judging by how often relative pronouns are talked about, you’d assume that most English speakers can’t get them right, even though they’re among the most common words in the language. In fact, in my own research for my thesis, I’ve found that they’re among the most frequent corrections made by copy editors.

So what gives? Why are they so hard for English speakers to get right? The distinctions are pretty clear-cut and can be found in a great many usage and writing handbooks. Some commentators even judgmentally declare, “There’s a useful distinction here, and it’s lazy or perverse to pretend otherwise.” But is it really useful, and is it really lazy and perverse to disagree? Or is it perverse to try to inflict a bunch of arbitrary distinctions on speakers and writers?

And arbitrary they are. Many commentators act as if the proposed distinctions between all these words would make things tidier and more regular, but in fact they make the whole system much more complicated. On the one hand, we have the restrictive/nonrestrictive distinction between that and which. On the other hand, we have the animate/inanimate (or human/nonhuman, if you want to be really strict) distinction between who and that/which. And on the other other hand, there’s the subject/object distinction between who and whom. But there’s no subject/object distinction with that or which, except when it’s the object of a preposition—then you have to use which, unless the preposition is stranded, in which case you can use that. And on the final hand, some people have proscribed whose as an inanimate or nonhuman relative possessive adjective, recommending constructions with of which instead, though this rule isn’t as popular, or at least not as frequently talked about, as the others. (How many hands is that? I’ve lost count.)

Simple, right? To make it all a little clearer, I’ve even put it into a nice little table.

[Table: The proposed relative pronoun system]

This is, in a nutshell, a very lopsided and unusual system. In a comment on my who/that post, Elaine Chaika says, “No natural grammar rule would work that way. Ever.” I’m not entirely convinced of that, because languages can be surprising in the unusual distinctions they make, but I agree that it is at the least typologically unusual.

“But we have to have rules!” you say. “If we don’t, we’ll have confusion!” But we do have rules—just not the ones that are proposed and promoted. The system we really have, in the absence of the prescriptions, is basically a distinction between animate who and inanimate which with that overlaying the two. Which doesn’t make distinctions by case, but who(m) does, though this distinction is moribund and has probably only been kept alive by the efforts of schoolteachers and editors.

Whom is still pretty much required when it immediately follows a preposition, but not when the preposition is stranded. Since preposition stranding is extremely common in speech and increasingly common in writing, we’re seeing less and less of whom in this position. Whose is still a little iffy with inanimate referents, as in The house whose roof blew off, but many people say this is alright. Others prefer of which, though this can be awkward: The house the roof of which blew off.

That is either animate or inanimate—only who/which make that distinction—and can be either subject or object but cannot follow a preposition or function as a possessive adjective or nonrestrictively. If the preposition is stranded, as in The man that I gave the apple to, then it’s still allowed. But there’s no possessive form of that, so you have to use whose or of which. Again, it’s clearer in table form:

[Table: The natural system of relative pronouns]
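For those who prefer it spelled out, here is a rough sketch of the natural system as a small lookup table in Python; the feature labels are my own shorthand for the prose above, and the “marginal” value stands in for the iffiness of inanimate whose:

```python
# A toy encoding of the "natural" relative-pronoun system described above.
# The feature labels and the "marginal" value are my own shorthand, not a
# formal analysis; the prescribed system would carve this space up very
# differently.
NATURAL_SYSTEM = {
    "who(m)": {"animate": True, "inanimate": False, "restrictive": True,
               "nonrestrictive": True, "after_preposition": True, "possessive": False},
    "which":  {"animate": False, "inanimate": True, "restrictive": True,
               "nonrestrictive": True, "after_preposition": True, "possessive": False},
    "whose":  {"animate": True, "inanimate": "marginal", "restrictive": True,
               "nonrestrictive": True, "after_preposition": True, "possessive": True},
    "that":   {"animate": True, "inanimate": True, "restrictive": True,
               "nonrestrictive": False, "after_preposition": False, "possessive": False},
}

def candidates(**wanted):
    """Return the relativizers compatible with every requested feature."""
    return [word for word, feats in NATURAL_SYSTEM.items()
            if all(feats.get(name) is True for name in wanted if wanted[name])]

print(candidates(inanimate=True, restrictive=True))    # ['which', 'that']
print(candidates(animate=True, nonrestrictive=True))   # ['who(m)', 'whose']
print(candidates(animate=True, possessive=True))       # ['whose']
```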

The linguist Jonathan Hope wrote that several distinguishing features of Standard English give it “a typologically unusual structure, while non-standard English dialects follow the path of linguistic naturalness.” He then muses on the reason for this:

One explanation for this might be that as speakers make the choices that will result in standardisation, they unconsciously tend towards more complex structures, because of their sense of the prestige and difference of formal written language. Standard English would then become a ‘deliberately’ difficult language, constructed, albeit unconsciously, from elements that go against linguistic naturalness, and which would not survive in a ‘natural’ linguistic environment.[1]

It’s always tricky territory when you speculate on people’s unconscious motivations, but I think he’s on to something. Note that while the prescriptions make for a very asymmetrical system, the system that people naturally use is moving towards a very tidy and symmetrical distribution, though there are still a couple of wrinkles that are being worked out.

But the important point is that people already follow rules—just not the ones that some prescriptivists think they should.

[1] “Rats, Bats, Sparrows and Dogs: Biology, Linguistics and the Nature of Standard English,” in The Development of Standard English, 1300–1800, ed. Laura Wright (Cambridge: Cambridge University Press, 2000), 53.

Scriptivists Revisited

Before I begin: I know—it’s been a terribly, horribly, unforgivably long time since my last post. Part of it is that I’m often busy with grad school and work and family, and part of it is that I’ve been thinking an awful lot lately about prescriptivism and descriptivism and linguists and editors and don’t really know where to begin.

I know that I’ve said some harsh things about prescriptivists before, but I don’t actually hate prescriptivism in general. As I’ve said before, prescriptivism and descriptivism are not really diametrically opposed, as some people believe they are. Stan Carey explores some of the common ground between the two in a recent post, and I think there’s a lot more to be said about the issue.

I think it’s possible to be a descriptivist and prescriptivist simultaneously. In fact, I think it’s difficult if not impossible to fully disentangle the two approaches. The fact is that many or most prescriptive rules are based on observed facts about the language, even though those facts may be incomplete or misunderstood in some way. Very seldom does anyone make up a rule out of whole cloth that bears no resemblance to reality. Rules often arise because someone has observed a change or variation in the language and is seeking to slow or reverse that change (as in insisting that “comprised of” is always an error) or to regularize the variation (as in insisting that “which” be used for nonrestrictive relative clauses and “that” for restrictive ones).

One of my favorite language blogs, Motivated Grammar, declares “Prescriptivism must die!” but to be honest, I’ve never quite been comfortable with that slogan. Now, I love a good debunking of language myths as much as the next guy—and Gabe Doyle does a commendable job of it—but not all prescriptivism is a bad thing. The impulse to identify and fix potential problems with the language is a natural one, and it can be used for both good and ill. Just take a look at the blogs of John E. McIntyre, Bill Walsh, and Jan Freeman for examples of well-informed, sensible language advice. Unfortunately, as linguists and many others know, senseless language advice is all too common.

Linguists often complain about and debunk such bad language advice—and rightly so, in my opinion—but I think in doing so they often make the mistake of dismissing prescriptivism altogether. Too often linguists view prescriptivism as an annoyance to be ignored or as a rival approach that must be quashed, but either way they miss the fact that prescriptivism is a metalinguistic phenomenon worth exploring and understanding. And why is it worth exploring? Because it’s an essential part of how ordinary speakers—and even linguists—use language in their daily lives, whether they realize it or not.

Contrary to what a lot of linguists say, language isn’t really a natural phenomenon—it’s a learned behavior. And as with any other human behavior, we generally strive to make our language match observed standards. Or as Emily Morgan so excellently says in a guest post on Motivated Grammar, “Language is something that we as a community of speakers collectively create and reinvent each time we speak.” She says that this means that language is “inextricably rooted in a descriptive generalization about what that community does,” but it also means that it is rooted in prescriptive notions of language. Because when speakers create and reinvent language, they do so by shaping their language to fit listeners’ expectations.

That is, for the most part, there’s no difference in speakers’ minds between what they should do with language and what they do do with language. They use language the way they do because they feel as though they should, and this in turn reinforces the model that influences everyone else’s behavior. I’ve often reflected on the fact that style guides like The Chicago Manual of Style will refer to dictionaries for spelling issues—thus prescribing how to spell—but these dictionaries simply describe the language found in edited writing. Description and prescription feed each other in an endless loop. This may not be mathematical logic, but it is a sort of logic nonetheless. Philosophers love to say that you can’t derive an ought from an is, and yet people do it all the time. If you want to fit in with a certain group, then you should behave in such a way as to be accepted by that group, and that group’s behavior is simply an aggregate of the behaviors of everyone else trying to fit in.

And at this point, linguists are probably thinking, “And people should be left alone to behave the way they wish to behave.” But leaving people alone means letting them decide which behaviors to favor and which to disfavor—that is, which rules to create and enforce. Linguists often criticize those who create and propagate rules, as if such rules are bad simply as a result of their artificiality, but, once again, the truth is that all language is artificial; it doesn’t exist until we make it exist. And if we create it, why should we always be coolly dispassionate about it? Objectivity might be great in the scientific study of language, but why should language users approach language the same way? Why should we favor “natural” or “spontaneous” changes and yet disfavor more conscious changes?

This is something that Deborah Cameron addresses in her book Verbal Hygiene (which I highly, highly recommend)—the notion that “spontaneous” or “natural” changes are okay, while deliberate ones are meddlesome and should be resisted. As Cameron counters, “If you are going to make value judgements at all, then surely there are more important values than spontaneity. How about truth, beauty, logic, utility?” (1995, 20). Of course, linguists generally argue that an awful lot of prescriptions do nothing to create more truth, beauty, logic, or utility, and this is indeed a problem, in my opinion.

But when linguists debunk such spurious prescriptions, they miss something important: people want language advice from experts, and they’re certainly not getting it from linguists. The industry of bad language advice exists partly because the people who arguably know the most about how language really works—the linguists—aren’t at all interested in giving advice on language. Often they take the hands-off attitude exemplified in Robert Hall’s book Leave Your Language Alone, crying, “Linguistics is descriptive, not prescriptive!” But in doing so, linguists are nonetheless injecting themselves into the debate rather than simply observing how people use language. If an objective, hands-off approach is so valuable, then why don’t linguists really take their hands off and leave prescriptivists alone?

I think the answer is that there’s a lot of social value in following language rules, whether or not they are actually sensible. And linguists, being the experts in the field, don’t like ceding any social or intellectual authority to a bunch of people that they view as crackpots and petty tyrants. They chafe at the idea that such ill-informed, superstitious advice—what Language Log calls “prescriptivist poppycock”—can or should have any value at all. It puts informed language users in the position of having to decide whether to follow a stupid rule so as to avoid drawing the ire of some people or to break the rule and thereby look stupid to those people. Arnold Zwicky explores this conundrum in a post titled “Crazies Win.”

Note something interesting at the end of that post: Zwicky concludes by giving his own advice—his own prescription—regarding the issue of split infinitives. Is this a bad thing? No, not at all, because prescriptivism is not the enemy. As John Algeo said in an article in College English, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense” (“Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 [December 1969]: 276). As I’ve said before, the nonsense is abundant. Just look at this awful Reader’s Digest column or this article on a Monster.com site for teachers for a couple of recent examples.

Which brings me back to a point I’ve made before: linguists need to be more involved in not just educating the public about language, but in giving people the sensible advice they want. Trying to kill prescriptivism is not the answer to the language wars, and truly leaving language alone is probably a good way to end up with a dead language. Exploring it and trying to figure out how best to use it—this is what keeps language alive and thriving and interesting. And that’s good for prescriptivists and descriptivists alike.
