Arrant Pedantry


Is Change Okay or Not?

A few weeks ago I got into a bit of an argument with my coworkers in staff meeting. One of them had asked our editorial interns to do a brief presentation on the that/which rule, and they did. But one of the interns seemed a little unclear on the rule—she said she had learned the rule in her class on modern American usage, but she had also learned that either that or which is technically fine with restrictive clauses. So of course I asked if I could chime in.

I pointed out that the rule—which states that you should always use that for restrictive clauses (except where that is grammatically impermissible, as when the relative pronoun follows a preposition or the demonstrative pronoun that)—is a relatively recent invention and that it didn’t really start to take hold in American usage until the mid-twentieth century. Many writers still don’t follow it, which means that editors have a lot of opportunities to apply the rule, and it’s generally not enforced outside the US.

My coworkers didn’t really like the perceived implication that the rule is bogus and that we shouldn’t worry about it, and one of them countered by saying that it didn’t matter what people did in 1810—the history is interesting, but we should be concerned about what usage is now. After all, the data clearly shows that the that/which rule is being followed in recent publications. And then she deployed an argument I’ve been seeing more and more lately: we all know that language changes, so why can’t we accept this change? (I’ve also heard variations like “Language changes, so why can’t we make it change this way?”)

These are good questions, and I don’t believe that linguists have good answers to them. (Indeed, I’m not even sure that good answers—or at least logically sound answers—are even possible.) In her book Verbal Hygiene, the linguist Deborah Cameron argues that it’s silly for linguists to embrace change from below but to resist change from above. What makes a “natural” change better than an unnatural one? We talk about how language changes, but it’s really people who change language, not language that changes by itself, so is there even a meaningful difference between natural and unnatural change?

Besides, many linguists have embraced certain unnatural changes, such as the movements for gender-neutral and plain language. Why is it okay for us to denounce prescriptivism on the one hand and then turn around and prescribe gender-neutral language on the other?

I haven’t come to a firm conclusion on this myself, but I think it all comes down to whether the alleged problem is in fact a problem and whether the proposed solution is in fact a solution. Does it solve the problem, does it do nothing, or does it simply create a new or different problem?

With gender-specific language, it’s clear that there’s a problem. Even though he is purportedly gender-neutral when its antecedent is indefinite or of unspecified gender, studies have shown that readers are more likely to assume that its antecedent is male. Clearly it’s not really gender-neutral if most people think “male” when they read “he”. Singular they has centuries of use behind it, including use by many great authors, and most people use it naturally and unselfconsciously. It’s not entirely uncontroversial, of course, but acceptance is growing, even among copy editors.

There are some minor thorny issues, like trying to figure out what the gender-neutral forms of freshman or fisherman should be, but writing around these seems like a small price to pay for text that treats people equally.

So what about the that/which rule? What problem does it claim to solve, and does it actually solve it?

The claim is that the rule helps distinguish between restrictive and nonrestrictive relative clauses, which in the abstract sounds like a good thing. But the argument quickly falls apart when you look at how other relative clauses work in English. We don’t need any extra help distinguishing between restrictive and nonrestrictive clauses with who, where, or when—the comma (or, in speech, the intonation) tells you whether a clause is restrictive. The fact that nobody has even recognized ambiguity with restrictive who or where or when as a problem, let alone proposed and implemented a solution, argues against the idea that there’s something wrong with restrictive which. Furthermore, no language I’ve heard of distinguishes between restrictive and nonrestrictive clauses with different pronouns. If it were really an advantage, then we’d expect to see languages all over the world with a grammaticalized distinction between restrictive and nonrestrictive clauses.

I’ve sometimes seen the counterargument that writers don’t always know how to use commas properly, so we can’t trust them to mark whether a clause is restrictive or not; but again, nobody worries about this with other relative clauses. And anyway, if copy editors can always identify when a clause is restrictive and thus know when to change which to that, then it stands to reason that they can also identify when a clause is nonrestrictive and thus insert the commas if needed. (Though it’s not clear if even the commas are really necessary; in German, even restrictive clauses are set off with commas in writing, so you have to rely on context and common sense to tell you which kind of clause it is.)

It seems, then, that restrictive which is not a real problem at all and that insisting on that for all restrictive clauses doesn’t really accomplish anything. Even though Deborah Cameron criticizes linguists for accepting natural changes and rejecting unnatural ones, she also recognizes that many of the rules that copy editors impose, including the that/which rule, go far beyond what’s necessary for effective communication. She even quotes one scholar as saying that the that/which rule’s “sole virtue . . . is to give copy editors more billable hours.”

Some would argue that changing which to that doesn’t take much time, so there’s really no cost, but I don’t believe that’s true. My own research shows that it’s one of the most common usage or grammar changes that editors make. All those changes add up. I also know from experience that a lot of editors gripe about people not following the rule. That griping has a real effect on people, making them nervous about their abilities with their own native language. Even if you think the that/which rule is useful enough to justify the time it takes to impose it, is it worth making so many people feel self-conscious about their language?

Even if you believe that the that/which rule is an improvement, the fact is that English existed for nearly 1500 years without it, and even now it’s probably safe to say that the vast majority of English speakers have never heard of it. Although corpus data makes it appear as though it’s taken hold in American English, all we can really say from this data is that it has taken hold in edited, published American English, which really means that it’s taken hold among American copy editors. I’m sure some writers have picked the rule up from their English classes or from Word’s grammar checker, but I think it’s safe to say that American English as a whole has not changed—only the most visible portion, published writing, has.

So it’s rather disingenuous to say that the language has changed and thus we should accept the that/which rule as a valid part of Standard English. The argument is entirely circular: editors should enforce the rule because editors have been enforcing the rule now for a few decades. The fact that they have been enforcing the rule rather successfully doesn’t tell us whether they should be enforcing the rule.

Of course, that’s the fundamental problem with all prescriptions—sooner or later, you run into the is–ought problem. That is, it’s logically impossible to derive a prescriptive statement (one that tells you what you ought to do) from a descriptive one (one that states what is). Any statement like “This feature has been in use for centuries, so it’s correct” or “Shakespeare and Jane Austen used this feature, so it’s correct” or even “This feature is used by a majority of speakers today, so it’s correct” is technically a logical fallacy.

While acknowledging that nothing can definitively tell us what usage rules we should or shouldn’t follow, I still think we can come to a general understanding of which rules are worth following and which ones aren’t by looking at several different criteria:

  1. Historical use
  2. Modern use
  3. Oral use
  4. Edited written use
  5. Unedited written use
  6. Use by literary greats
  7. Common use

No single criterion is either necessary or sufficient to prove that a rule should be followed, but by looking at the totality of the usage evidence, we can get a good sense of where the rule came from, who uses it and in which contexts they use it, whether use is increasing or decreasing, and so on. So something might not be correct just because Chaucer or Shakespeare or Austen used it, but if something has been in continuous use for centuries by both literary greats and common people in both speech and writing, then it’s hard to maintain that it’s an error.

And if a rule is only followed in modern edited use, as the that/which rule is (and even then, it’s primarily modern edited American use), then it’s likewise hard to insist that this is a valid rule that all English speakers should be following. Again, the fact that editors have been enforcing a rule doesn’t tell us whether they should. Editors are good at learning and following rules, and we’re often good at pointing out holes or inconsistencies in a text or making it clearer and more readable, but this doesn’t mean that we have any special insight into what the grammar of English relative clauses should be, let alone the authority to insist that everyone follow our proposed changes.

So we can’t—or, at least, I think we shouldn’t—simply say that language has changed in this instance and that therefore we should all follow the rule. Language change is not necessarily good or bad, but it’s important to look at who is changing the language and why. If most people are changing the language in a particular way because they find that change genuinely useful, then it seems like a good thing, or at least a harmless thing. But if the change is being imposed by a small group of disproportionately powerful people for dubious reasons, and if the fact that this group has been successful is then used as evidence that the change is justified, then I think we should be skeptical.

If you want the language to change in a particular way, then the burden of proof is on you to demonstrate why you’re right and four hundred million native speakers are wrong. Until then, I’ll continue to tell our intern that what she learned in class was right: either that or which is fine.


This Is Not the Grammatical Promised Land

I recently became aware of a column in the Chicago Daily Herald by the paper’s managing editor, Jim Baumann, who has taken upon himself the name Grammar Moses. In his debut column, he’s quick to point out that he’s not like the real Moses—“My tablets are not carved in stone. Grammar is a fluid thing.”

He goes on to say, “Some of the rules we learned in high school have evolved with us. For instance, I don’t know a lot of people outside of church who still employ ‘thine’ in common parlance.” (He was taught in high school to use thine in common parlance?)

But then he ends—after a rather lengthy windup—with the old shibboleth of using anxious to mean eager. He says that “generally speaking, the word you’re grasping for is ‘eager,’” ending with the admonition, “Write carefully!”

But as Merriam-Webster’s Dictionary of English Usage notes, this rule is an invention in American usage dating to the early 1900s, and anxious had been used to mean eager for 160 years before the rule proscribing this use was invented. They conclude, “Anyone who says that careful writers do not use anxious in its ‘eager’ sense has simply not examined the available evidence.”

Not a good start for a column that aims for a grammatical middle ground.

And Baumann certainly seems to think he’s aiming for the middle ground. In a later column, he says, “Grammarians fall along a spectrum. There are the fundamentalists, who hold their 50-year-old texts as close to their bosoms as one might a Bible. There are the libertines, who believe that if it feels or sounds right, use it. . . . You’ll find me somewhere in the middle.” He again insists that he’s not a grammar fundamentalist before launching into more invented rules: the supposed misuse of like to mean “such as” or “including” and feel to mean “think”.

He says, “If you listen to a car dealer’s pitch that a new SUV has features like anti-lock brakes and a deluxe stereo, do you really know what you’re getting? Nope. Because ‘like’ means similar to, but not the same.” The argument here is simple, straightforward, and completely wrong.

First, it assumes an overly narrow definition of like. Second, it pretends complete ignorance of any meaning outside of that narrow definition. If a car salesperson tells you that a new SUV has features like anti-lock brakes and a deluxe stereo, you know exactly what you’re getting. In technical terms, pretending that you don’t understand someone is called engaging in uncooperative communication. In layman’s terms, it’s called being an ass.

And yet, strangely, Baumann promotes this rule on the basis of clarity. He says that if something is clear to 9 out of 10 readers, then it’s acceptable, but if you can write something that’s clear to all your readers, then that’s even better. While it’s certainly a good idea to make sure your writing is clear to everyone, I’m also fairly certain that no one would be legitimately confused by “features like anti-lock brakes”. Merriam-Webster’s Dictionary of English Usage doesn’t have much to say on the subject, but it lists several examples and says, “In none of the examples that follow can you detect any ambiguity of meaning.” The supposed lack of clarity simply isn’t there.

Baumann ends by saying, “The lesson is: Think about whom you’re talking to and learn to appreciate his or her or their sensitivities. Then you will achieve clarity.” The problem is that we don’t really know who our readers are and what their sensitivities are. Instead we simply internalize new rules that we learn, and then we project them onto a sort of perversely idealized reader, one who is not merely bothered by such alleged misuses but is impossibly confused by them. How do we know that they’re really confused—or even just irritated—by like to mean “such as” or “including”? We don’t. We just assume that they’re out there and that it’s our job to protect them.

My advice is to try to be as informed as possible about the rules. Be curious, and be willing to question not just others’ claims about the language but also your own assumptions. Read a lot, and pay attention to how good writing works. Get a good usage dictionary and use it. And don’t follow Grammar Moses unless you like wandering in the grammatical wilderness.


Language, Logic, and Correctness

In “Why Descriptivists Are Usage Liberals”, I said that there are some logical problems with declaring something to be right or wrong based on evidence. A while back I explored this problem in a piece titled “What Makes It Right?” over on Visual Thesaurus.

The terms prescriptive and descriptive were borrowed from philosophy, where they are used to talk about ethics, and the tension between these two approaches is reflected in language debates today. The questions we have today about correct usage are essentially the same questions philosophers have been debating since the days of Socrates and Plato: what is right, and how do we know?

As I said on Visual Thesaurus, all attempts to answer these questions run into a fundamental logical problem: just because something is doesn’t mean it ought to be. Most people are uncomfortable with the idea of moral relativism and believe at some level that there must be some kind of objective truth. Unfortunately, it’s not entirely clear just where we find this truth or how objective it really is, but we at least operate under the convenient assumption that it exists.

But things get even murkier when we try to apply this same assumption to language. While we may feel safe saying that murder is wrong and would still be wrong even if a significant portion of the population committed murder, we can’t safely make similar arguments about language. Consider the word bird. In Old English, the form of English spoken from about 500 AD to about 1100 AD, the word was brid. Bird began as a dialectal variant that spread and eventually supplanted brid as the standard form by about 1600. Have we all been saying this word wrong for the last four hundred years or so? Is saying bird just as wrong as pronouncing nuclear as nucular?

No, of course not. Even if it had been considered an error once upon a time, it’s not an error anymore. Its widespread use in Standard English has made it standard, while brid would now be considered an error (if someone were to actually use it). There is no objectively correct form of the word that exists independent of its use. That is, there is no platonic form of the language, no linguistic Good to which a grammarian-king can look for guidance in guarding the city.

This is why linguistics is at its core an empirical endeavor. Linguists concern themselves with investigating linguistic facts, not with making value judgements about what should be considered correct or incorrect. As I’ve said before, there are no first principles from which we can determine what’s right and wrong. Take, for example, the argument that you should use the nominative form of pronouns after a copula verb. Thus you should say It is I rather than It is me. But this argument assumes as prior the premise that copula verbs work this way and then deduces that anything that doesn’t work this way is wrong. Where would such a putative rule come from, and how do we know it’s valid?

Linguists often try to highlight the problems with such assumptions by pointing out, for example, that French requires an object pronoun after the copula (in French you say c’est moi [it’s me], not c’est je [it’s I]) or that English speakers, including renowned writers, have long used object forms in this position. That is, there is no reason to suppose that this rule has to exist, because there are clear counterexamples. But then, as I said before, some linguists leave the realm of strict logic and argue that if everyone says it’s me, then it must be correct.

Some people then counter by calling this argument fallacious, and strictly speaking, it is. Mededitor has called this the Jane Austen fallacy (if Jane Austen or some other notable past writer has done it, then it must be okay), and one commenter named Kevin S. has made similar arguments in the comments on Kory Stamper’s blog, Harmless Drudgery.

There, Kevin S. attacked Ms. Stamper for noting that using lay in place of lie dates at least to the days of Chaucer, that it is very common, and that it “hasn’t managed to destroy civilization yet.” These are all objective facts, yet Kevin S. must have assumed that Ms. Stamper was arguing that if it’s old and common, it must be correct. In fact, she acknowledged that it is nonstandard and didn’t try to argue that it wasn’t or shouldn’t be. But Kevin S. pointed out a few fallacies in the argument that he assumed that Ms. Stamper was making: an appeal to authority (if Chaucer did it, it must be okay), the “OED fallacy” (if it has been used that way in the past, it must be correct), and the naturalistic fallacy, which is deriving an ought from an is (lay for lie is common; therefore it ought to be acceptable).

And as much as I hate to say it, technically, Kevin S. is right. Even though he was responding to an argument that hadn’t been made, linguists and lexicographers do frequently make such arguments, and they are in fact fallacies. (I’m sure I’ve made such arguments myself.) Technically, any argument that something should be considered correct or incorrect isn’t a logical argument but a persuasive one. Again, this goes back to the basic difference between descriptivism and prescriptivism. We can make statements about the way English appears to work, but making statements about the way English should work or the way we think people should feel about it is another matter.

It’s not really clear what Kevin S.’s point was, though, because he seemed to be most bothered by Ms. Stamper’s supposed support of some sort of flabby linguistic relativism. But his own implied argument collapses in a heap of fallacies itself. Just as we can’t necessarily call something correct just because it occurred in history or because it’s widespread, we can’t necessarily call something incorrect just because someone invented a rule saying so.

I could invent a rule saying that you shouldn’t ever use the word sofa because we already have the perfectly good word couch, but you would probably roll your eyes and say that’s stupid because there’s nothing wrong with the word sofa. Yet we give heed to a whole bunch of similarly arbitrary rules invented two or three hundred years ago. Why? Technically, they’re no more valid or logically sound than my rule.

So if there really is such a thing as correctness in language, and if any argument about what should be considered correct or incorrect is technically a logical fallacy, then how can we arrive at any sort of understanding of, let alone agreement on, what’s correct?

This fundamental inability to argue logically about language is a serious problem, and it’s one that nobody has managed to solve or, in my opinion, ever will completely solve. This is why the war of the scriptivists rages on with no end in sight. We see the logical fallacies in our opponents’ arguments and the flawed assumptions underlying them, but we don’t acknowledge—or sometimes even see—the problems with our own. Even if we did, what could we do about them?

My best attempt at an answer is that both sides simply have to learn from each other. Language is a democracy, true, but, just like the American government, it is not a pure democracy. Some people—including editors, writers, English teachers, and usage commentators—have a disproportionate amount of influence. Their opinions carry more weight because people care what they think.

This may be inherently elitist, but it is not necessarily a bad thing. We naturally trust the opinions of those who know the most about a subject. If your car won’t start, you take it to a mechanic. If your tooth hurts, you go to the dentist. If your writing has problems, you ask an editor.

Granted, using lay for lie is not bad in the same sense that a dead starter motor or an abscessed tooth is bad: it’s a problem only in the sense that some judge it to be wrong. Using lay for lie is perfectly comprehensible, and it doesn’t violate some basic rule of English grammar such as word order. Furthermore, it won’t destroy the language. Just as we have pairs like lay and lie or sit and set, we used to have two words for hang, but nobody claims that we’ve lost a valuable distinction here by having one word for both transitive and intransitive uses.

Prescriptivists want you to know that people will judge you for your words (and—let’s be honest—usually they’re the ones doing the judging), and descriptivists want you to soften those judgements or even negate them by injecting them with a healthy dose of facts. That is, there are two potential fixes for the problem of using words or constructions that will cause people to judge you: stop using that word or construction, or get people to stop judging you and others for that use.

In reality, we all use both approaches, and, more importantly, we need both approaches. Even most dyed-in-the-wool prescriptivists will tell you that the rule banning split infinitives is bogus, and even most liberal descriptivists will acknowledge that if you want to be taken seriously, you need to use Standard English and avoid major errors. Problems occur when you take a completely one-sided approach, insisting either that something is an error even if almost everyone does it or that something isn’t an error even though almost everyone rejects it. In other words, good usage advice has to consider not only the facts of usage but speakers’ opinions about usage.

For instance, you can recognize that irregardless is a word, and you can even argue that there’s nothing technically wrong with it because nobody cares that the verbs bone and debone mean the same thing, but it would be irresponsible not to mention that the word is widely considered an error in educated speech and writing. Remember that words and constructions are not inherently correct or incorrect and that mere use does not necessarily make something correct; correctness is a judgement made by speakers of the language. This means that, paradoxically, something can be in widespread use even among educated speakers and can still be considered an error.

This also means that on some disputed items, there may never be anything approaching consensus. While the facts of usage may be indisputable, opinions may still be divided. Thus it’s not always easy or even possible to label something as simply correct or incorrect. Even if language is a democracy, there is no simple majority rule, no up and down vote to determine whether something is correct. Something may be only marginally acceptable or correct only in certain situations or according to certain people.

But as in a democracy, it is important for people to be informed before metaphorically casting their vote. Bryan Garner argues in his Modern American Usage that what people want in language advice is authority, and he’s certainly willing to give it to you. But I think what people really need is information. For example, you can state authoritatively that regardless of past or present usage, singular they is a grammatical error and always will be, but this is really an argument, not a statement of fact. And like all arguments, it should be supported with evidence. An argument based solely or primarily on one author’s opinion—or even on many people’s opinions—will always be a weaker argument than one that considers both facts and opinion.

This doesn’t mean that you have to accept every usage that’s supported by evidence, nor does it mean that all evidence is created equal. We’re all human, we all still have opinions, and sometimes those opinions are in defiance of facts. For example, between you and I may be common even in educated speech, but I will probably never accept it, let alone like it. But I should not pretend that my opinion is fact, that my arguments are logically foolproof, or that I have any special authority to declare it wrong. I think the linguist Thomas Pyles said it best:

Too many of us . . . would seem to believe in an ideal English language, God-given instead of shaped and molded by man, somewhere off in a sort of linguistic stratosphere—a language which nobody actually speaks or writes but toward whose ineffable standards all should aspire. Some of us, however, have in our worst moments suspected that writers of handbooks of so-called “standard English usage” really know no more about what the English language ought to be than those who use it effectively and sometimes beautifully. In truth, I long ago arrived at such a conclusion: frankly, I do not believe that anyone knows what the language ought to be. What most of the authors of handbooks do know is what they want English to be, which does not interest me in the least except as an indication of the love of some professors for absolute and final authority.1

In usage, as in so many other things, you have to learn to live with uncertainty.

Notes

1. “Linguistics and Pedagogy: The Need for Conciliation,” in Selected Essays on English Usage, ed. John Algeo (Gainesville: University Presses of Florida, 1979), 169–70.


Why Descriptivists Are Usage Liberals

Outside of linguistics, the people who care most about language tend to be prescriptivists—editors, writers, English teachers, and so on—while linguists and lexicographers are descriptivists. “Descriptive, not prescriptive!” is practically the linguist rallying cry. But we linguists have done a terrible job of explaining just what that means and why it matters. As I tried to explain in “What Descriptivism Is and Isn’t”, descriptivism is essentially just an interest in facts. That is, we make observations about what the language is rather than state opinions about how we’d like it to be.

Descriptivism is often cast as the opposite of prescriptivism, but they aren’t opposites at all. But no matter how many times we insist that “descriptivism isn’t ‘anything goes’”, people continue to believe that we’re all grammatical anarchists and linguistic relativists, declaring everything correct and saying that there’s no such thing as a grammatical error.

Part of the problem is that whenever you conceive of two approaches as opposing points of view, people will assume that they’re opposite in every regard. Prescriptivists generally believe that communication is important, that having a standard form of the language facilitates communication, and that we need to uphold the rules to maintain the standard. And what people often see is that linguists continually tear down the rules and say that they don’t really matter. The natural conclusion for many people is that linguists don’t care about maintaining the standard or supporting good communication—they want a linguistic free-for-all instead. Then descriptivists appear to be hypocrites for using the very standard they allegedly despise.

It’s true that many descriptivists oppose rules that they disagree with, but as I’ve said before, this isn’t really descriptivism—it’s anti-prescriptivism, for lack of a better term. (Not because it’s the opposite of prescriptivism, but because it often prescribes the opposite of what traditional linguistic prescriptivism does.) Just ask yourself how an anti-prescriptive sentiment like “There’s nothing wrong with singular they” is a description of linguistic fact.

So if that’s not descriptivism, then why do so many linguists have such liberal views on usage? What does being against traditional rules have to do with studying language? And how can linguists oppose rules and still be in favor of good communication and Standard English?

The answer, in a nutshell, is that we don’t think that the traditional rules have much to do with either good communication or Standard English. The reason why we think that is a little more complicated.

Linguists have had a hard time defining just what Standard English is, but there are several ideas that recur in attempts to define it. First, although Standard English can certainly be spoken, it is often conceived of as a written variety, especially in the minds of non-linguists. Second, it is generally more formal, making it appropriate for a wide range of serious topics. Third, it is educated, or rather, it is used by educated speakers. Fourth, it is supraregional, meaning that it is not tied to a specific region, as most dialects are, but that it can be used across an entire language area. And fifth, it is careful or edited. Notions of uniformity and prestige are often thrown into the mix as well.

Careful is a vague term, but it means that users of Standard English put some care into what they say or write. This is especially true of most published writing; the entire profession of editing is dedicated to putting care into the written word. So it’s tempting to say that following the rules is an important part of Standard English and that tearing down those rules tears down at least that part of Standard English.

But the more important point is that Standard English is ultimately rooted in the usage of actual speakers and writers. It’s not just that there is no legislative body declaring what’s standard, but that there are no first principles from which we can deduce what’s standard. All languages are different, and they change over time, so how can we know what’s right or wrong except by looking at the evidence? This is what descriptivists try to do when discussing usage: look at the evidence from historical and current usage and draw meaningful conclusions about what’s right or wrong. (There are some logical problems with this, but I’ll address those another time.)

Let’s take singular they, for example. The evidence shows that it’s been in use for centuries not just by common folk or educated speakers but by well-respected writers from Geoffrey Chaucer to Jane Austen. The evidence also shows that it’s used in fairly predictable ways, generally to refer to indefinite pronouns or to nouns that don’t specify gender. Its use has not caused the grammar of English to collapse, and it seems like a rather felicitous solution to the gender-neutral pronoun problem. So at least from a dispassionate linguistic point of view, there is no problem with it.

From another point of view, though, there is something wrong with it: some people don’t like it. This is a social rather than a linguistic fact, but it’s a fact nonetheless. But this social fact arose because at some point someone declared—contrary to the linguistic facts—that singular they is a grammatical error that should be avoided. Here’s where descriptivists depart from description and get into anti-prescription. If people have been taught to dislike this usage, it stands to reason that they could be taught to get over this dislike.

That is, linguists are engaging in anti-prescriptivism to counter the prescriptivism that isn’t rooted in linguistic fact. So when they debunk or tear down traditional rules, it’s not that they don’t value Standard English or good communication; it’s that they think that those particular rules have nothing to do with either.

To be fair, I think that many linguists think they’re still merely describing when they’re countering prescriptive attitudes. Saying that singular they has been used for centuries by respected writers, that it appears to follow fairly well-defined rules, and that the proscription against it is not based in linguistic fact is descriptive; saying that people need to get over their dislike and accept it is not.

And this is precisely why I think descriptivism and prescriptivism not only can but should coexist. It’s not wrong to have opinions on what’s right or wrong, but I think it’s better if those opinions have some basis in fact. Guidance on issues of usage can really only be relevant and valid if it takes all the evidence into account—who uses a certain word or construction, in what circumstances, and so on. These are all facts that can be investigated, and linguistics provides a solid methodological framework for doing so. Anything that ignores the facts reduces to one sort of ipse dixit or another, either a statement from an authority declaring something to be right or wrong or one’s own preferences or pet peeves.

Linguists value good communication, and we recognize the importance of Standard English. But our opinions on both are informed by our study of language and by our emphasis on facts and evidence. This isn’t “anything goes”, or at least no more so than language has always been. People have always worried about language change, but language has always turned out fine. Inventing new rules to try to regulate language will not save it from destruction, and tossing out the rules that have no basis in fact will not hasten the language’s demise. But recognizing that some rules don’t matter may alleviate some of those worries, and I think that’s a good thing for both camps.


Lynne Truss and Chicken Little

Lynne Truss, author of the bestselling Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation, is at it again, crying with her characteristic hyperbole and lack of perspective that the linguistic sky is falling because she got a minor bump on the head.

As usual, Truss hides behind the it’s-just-a-joke-but-no-seriously defense. She starts by claiming to have “an especially trivial linguistic point to make” but then claims that the English language is doomed, and it’s all linguists’ fault. According to Truss, linguists have sat back and watched while literacy levels have declined—and have profited from doing so.

What exactly is the problem this time? That some people mistakenly write some phrases as compound words when they’re not, such as maybe for may be or anyday for any day. (This isn’t even entirely true; anyday is almost nonexistent in print, even in American English, according to Google Ngram Viewer.) I guess from anyday it’s a short, slippery slope to complete language chaos, and then “we might as well all go off and kill ourselves.”

But it’s not clear what her complaint about erroneous compound words has to do with literacy levels. If the only problem with literacy is that some people write maybe when they mean may be, then it seems to be, as she originally says, an especially trivial point. Yes, some people deviate from standard orthography. While this may be irritating and may occasionally cause confusion, it’s not really an indication that people don’t know how to read or write. Even educated people make mistakes, and this has always been the case. It’s not a sign of impending doom.

But let’s consider the analogies she chose to illustrate linguists’ supposed negligence. She says that we’re like epidemiologists who simply catalog all the ways in which people die from diseases or like architects who make notes while buildings collapse. (Interestingly, she makes two remarks about how well paid linguists are. Of course, professors don’t actually make that much, especially those in the humanities or social sciences. And it smacks of hypocrisy from someone whose book has sold 3 million copies.)

Perhaps there is a minor crisis in literacy, at least in the UK. This article says that 16–24-year-olds in the UK are lagging behind many counterparts in other first-world countries. (The headline suggests that they’re trailing the entire world, but the study only looked at select countries from Europe and east Asia.) Wikipedia, however, says that the UK has a 99 percent literacy rate. Maybe young people are slipping a bit, and this is certainly something that educators should address, but it doesn’t appear that countless people are dying from an epidemic of slightly declining literacy rates or that our linguistic structures are collapsing. This is simply not the linguistic apocalypse that Truss makes it out to be.

Anyway, even if it were, why would it be linguists’ job to do something about it? Literacy is taught in primary and secondary school and is usually the responsibility of reading, language arts, or English teachers—not linguists. Why not criticize English professors for sitting back and collecting fat paychecks for writing about literary theory while our kids struggle to read? Because they’re not her ideological enemy, that’s why. Linguists often oppose language pedants like Truss, and so Truss finds some reason—contrived though it may be—to blame them. Though some applied linguists do in fact study things like language acquisition and literacy, most linguists hew to the more abstract and theoretical side of language—syntax, morphology, phonology, and so on. Blaming descriptive linguists for children’s illiteracy is like blaming physicists for children’s inability to ride bikes.

And maybe the real reason why linguists are unconcerned about the upcoming linguistic apocalypse is that there simply isn’t one. Maybe linguists are like meteorologists who observe that, contrary to the claims of some individuals, the sky is not actually falling. In studying the structure of other languages and the ways in which languages change, linguists have realized that language change is not decay. Consider the opening lines from Beowulf, an Old English epic poem over a thousand years old:

HWÆT, WE GAR-DEna in geardagum,
þeodcyninga þrym gefrunon,
hu ða æþelingas ellen fremedon!

Only two words are instantly recognizable to modern English speakers: we and in. The changes from Old English to modern English haven’t made the language better or worse—just different. Some people maintain that they understand that language changes but say that they still oppose certain changes that seem to come from ignorance or laziness. They fear that if we’re not vigilant in opposing such changes, we’ll lose our ability to communicate. But the truth is that most of those changes from Old English to modern English also came from ignorance or laziness, and we seem to communicate just fine today.

Languages can change very radically over time, but contrary to popular belief, they never devolve into caveman grunting. This is because we all have an interest in both understanding and being understood, and we’re flexible enough to adapt to changes that happen within our lifetime. And with language, as opposed to morality or ethics, there is no inherent right or wrong. Correct language is, in a nutshell, what its users consider to be correct for a given time, place, and audience. One generation’s ignorant change is sometimes the next generation’s proper grammar.

It’s no surprise that Truss fundamentally misunderstands what linguists and lexicographers do. She even admits that she was “seriously unqualified” for linguistic debate a few years back, and it seems that nothing has changed. But that probably won’t stop her from continuing to prophesy the imminent destruction of the English language. Maybe Truss is less like Chicken Little and more like the boy who cried wolf, proclaiming disaster not because she actually sees one coming, but rather because she likes the attention.


12 Mistakes Nearly Everyone Who Writes About Grammar Mistakes Makes

There are a lot of bad grammar posts in the world. These days, anyone with a blog and a bunch of pet peeves can crank out a click-bait listicle of supposed grammar errors. There’s just one problem—these articles are often full of mistakes of one sort or another themselves. Once you’ve read a few, you start noticing some patterns. Inspired by a recent post titled “Grammar Police: Twelve Mistakes Nearly Everyone Makes”, I decided to make a list of my own.

1. Confusing grammar with spelling, punctuation, and usage. Many people who write about grammar seem to think that grammar means “any sort of rule of language, especially writing”. But strictly speaking, grammar refers to the structural rules of language, namely morphology (basically the way words are formed from roots and affixes), phonology (the system of sounds in a language), and syntax (the way phrases and clauses are formed from words). Most complaints about grammar are really about punctuation, spelling (such as problems with you’re/your and other homophone confusion), or usage (which is often about semantics). This post, for instance, spends two of its twelve points on commas and a third on quotation marks.

2. Treating style choices as rules. This article says that you should always use an Oxford (or serial) comma (the comma before and or or in a list) and that quotation marks should always follow commas and periods, but the latter is true only in most American styles (linguists often put the commas and periods outside quotes, and so do many non-American styles), and the former is only true of some American styles. I may prefer serial commas, but I’m not going to insist that everyone who doesn’t use them is making a mistake. It’s simply a matter of style, and style varies from one publisher to the next.

3. Ignoring register. There’s a time and a place for following the rules, but the writers of these lists typically treat English as though it had only one register: formal writing. They ignore the fact that following the rules in the wrong setting often sounds stuffy and stilted. Formal written English is not the only legitimate form of the language, and the rules of formal written English don’t apply in all situations. Sure, it’s useful to know when to use who and whom, but it’s probably more useful to know that saying To whom did you give the book? in casual conversation will make you sound like a pompous twit.

4. Saying that a disliked word isn’t a word. You may hate irregardless (I do), but that doesn’t mean it’s not a word. If it has its own meaning and you can use it in a sentence, guess what—it’s a word. Flirgle, on the other hand, is not a word—it’s just a bunch of sounds that I strung together in word-like fashion. Irregardless and its ilk may not be appropriate for use in formal registers, and you certainly don’t have to like them, but as Stan Carey says, “‘Not a word’ is not an argument.”

5. Turning proposals into ironclad laws. This one happens more often than you think. A great many rules of grammar and usage started life as proposals that became codified as inviolable laws over the years. The popular that/which rule, which I’ve discussed at length before, began as a proposal—not “everyone gets this wrong” but “wouldn’t it be nice if we made a distinction here?” But nowadays people have forgotten that a century or so ago, this rule simply didn’t exist, and they say things like “This is one of the most common mistakes out there, and understandably so.” (Actually, no, you don’t understand why everyone gets this “wrong”, because you don’t realize that this rule is a relatively recent invention by usage commentators that some copy editors and others have decided to enforce.) It’s easy to criticize people for not following rules that you’ve made up.

6. Failing to discuss exceptions to rules. Invented usage rules often ignore the complexities of actual usage. Lists of rules such as these go a step further and often ignore the complexities of those rules. For example, even if you follow the that/which rule, you need to know that you can’t use that after a preposition or after the demonstrative pronoun that—you have to use a restrictive which. Likewise, the less/fewer rule is usually reduced to statements like “use fewer for things you can count”, which leads to ugly and unidiomatic constructions like “one fewer thing to worry about”. Affect and effect aren’t as simple as some people make them out to be, either; affect is usually a verb and effect a noun, but affect can also be a noun (with stress on the first syllable) referring to the outward manifestation of emotions, while effect can be a verb meaning to cause or to make happen. Sometimes dumbing down rules just makes them dumb.

7. Overestimating the frequency of errors. The writer of this list says that misuse of nauseous is “Undoubtedly the most common mistake I encounter.” This claim seems worth doubting to me; I can’t remember the last time I heard someone say “nauseous”. Even if you consider it a misuse, it’s got to rate pretty far down the list in terms of frequency. This is why linguists like to rely on data for testable claims—because people tend to fall prey to all kinds of cognitive biases such as the frequency illusion.

8. Believing that etymology is destiny. Words change meaning all the time—it’s just a natural and inevitable part of language. But some people get fixated on the original meanings of some words and believe that those are the only correct meanings. For example, they’ll say that you can only use decimate to mean “to destroy one in ten”. This may seem like a reasonable argument, but it quickly becomes untenable when you realize that almost every single word in the language has changed meaning at some point, and that’s just in the few thousand years in which language has been written or can be reconstructed. And sometimes a new meaning is more useful anyway (which is precisely why it displaced an old meaning). As Jan Freeman said, “We don’t especially need a term that means ‘kill one in 10.’”

9. Simply bungling the rules. If you’re going to chastise people for not following the rules, you should know those rules yourself and be able to explain them clearly. You may dislike singular they, for instance, but you should know that it’s not a case of subject-predicate disagreement, as the author of this list claims—it’s an issue of pronoun-antecedent agreement, which is not the same thing. This list says that “‘less’ is reserved for hypothetical quantities”, but this isn’t true either; it’s reserved for noncount nouns, singular count nouns, and plural count nouns that aren’t generally thought of as discrete entities. Use of less has nothing to do with being hypothetical. And this one says that punctuation always goes inside quotation marks. In most American styles, it’s only commas and periods that always go inside. Colons, semicolons, and dashes always go outside, and question marks and exclamation marks only go inside sometimes.

10. Saying that good grammar leads to good communication. Contrary to popular belief, bad grammar (even using the broad definition that includes usage, spelling, and punctuation) is not usually an impediment to communication. A sentence like Ain’t nobody got time for that is quite intelligible, even though it violates several rules of Standard English. The grammar and usage of nonstandard varieties of English are often radically different from Standard English, but different does not mean worse or less able to communicate. The biggest differences between Standard English and all its nonstandard varieties are that the former has been codified and that it is used in all registers, from casual conversation to formal writing. Many of the rules that these lists propagate are really more about signaling to the grammatical elite that you’re one of them—not that this is a bad thing, of course, but let’s not mistake it for something it’s not. In fact, claims about improving communication are often just a cover for the real purpose of these lists, which is . . .

11. Using grammar to put people down. This post sympathizes with someone who worries about being crucified by the grammar police and then says a few paragraphs later, “All hail the grammar police!” In other words, we like being able to crucify those who make mistakes. Then there are the put-downs about people’s education (“You’d think everyone learned this rule in fourth grade”) and more outright insults (“5 Grammar Mistakes that Make You Sound Like a Chimp”). After all, what’s the point in signaling that you’re one of the grammatical elite if you can’t take a few potshots at the ignorant masses?

12. Forgetting that correct usage ultimately comes from users. The disdain for the usage of common people is symptomatic of a larger problem: forgetting that correct usage ultimately comes from the people, not from editors, English teachers, or usage commentators. You’re certainly entitled to have your opinion about usage, but at some point you have to recognize that trying to fight the masses on a particular point of usage (especially if it’s a made-up rule) is like trying to fight the rising tide. Those who have invested in learning the rules naturally feel defensive of them and of the language in general, but you have no more right to the language than anyone else. You can be restrictive if you want and say that Standard English is based on the formal usage of educated writers, but any standard that is based on a set of rules that are simply invented and passed down is ultimately untenable.

And a bonus mistake:

13. Making mistakes themselves. It happens to the best of us. The act of making grammar or spelling mistakes in the course of pointing out someone else’s mistakes even has a name, Muphry’s law. This post probably has its fair share of typos. (If you spot one, feel free to point it out—politely!—in the comments.)

This post also appears on Huffington Post.


My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase and improve my sample size. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones’>Jones’s), which>that, towards>toward, moving only, and increasing parallelism. These changes were not only numerically the most popular, but they were edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they’re relatively recent inventions of usage writers. I’ve already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.
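The edit rate described above is simple arithmetic: the number of times editors changed a form, divided by the number of opportunities they had to change it. A minimal sketch in Python, using hypothetical counts for illustration (these are not the actual figures from the thesis data):

```python
# Illustrative sketch: computing edit rates from tabulated copy-editing data.
# The counts below are made-up examples, not real figures from the thesis.

def edit_rate(changed: int, total: int) -> float:
    """Fraction of occurrences of a proscribed form that editors changed."""
    if total == 0:
        raise ValueError("no occurrences to compute a rate from")
    return changed / total

# For each usage item: (occurrences in the manuscripts, times editors changed it)
tabulated = {
    "towards -> toward": (10, 8),
    "restrictive which -> that": (25, 20),
    "Jones' -> Jones's": (5, 4),
}

for change, (total, changed) in tabulated.items():
    print(f"{change}: {edit_rate(changed, total):.0%}")
```

So a form that appeared ten times and was changed eight times has an edit rate of 80 percent, the upper end of what the tabulation found.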

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which is nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see them tearing down the rules that define it, setting it apart from educated but unedited usage, like that/which and toward/towards. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.


Relative Pronoun Redux

A couple of weeks ago, Geoff Pullum wrote on Lingua Franca about the that/which rule, which he calls “a rule which will live in infamy”. (For my own previous posts on the subject, see here, here, and here.) He runs through the whole gamut of objections to the rule—that the rule is an invention, that it started as a suggestion and became canonized as grammatical law, that it has “an ugly clutch of exceptions”, that great writers (including E. B. White himself) have long used restrictive which, and that it’s really the commas that distinguish between restrictive and nonrestrictive clauses, as they do with other relative pronouns like who.

It’s a pretty thorough deconstruction of the rule, but in a subsequent Language Log post, he despairs of converting anyone, saying, “You can’t talk people out of their positions on this; they do not want to be confused with facts.” And sure enough, the commenters on his Lingua Franca post proved him right. Perhaps most maddening was this one from someone posting as losemygrip:

Just what the hell is wrong with trying to regularize English and make it a little more consistent? Sounds like a good thing to me. Just because there are inconsistent precedents doesn’t mean we can’t at least try to regularize things. I get so tired of people smugly proclaiming that others are being officious because they want things to make sense.

The desire to fix a problem with the language may seem noble, but in this case the desire stems from a fundamental misunderstanding of the grammar of relative pronouns, and the that/which rule, rather than regularizing the language and making it a little more consistent, actually introduces a rather significant irregularity and inconsistency. The real problem is that few if any grammarians realize that English has two separate systems of relativization: the wh words and that, and they work differently.

If we ignore the various prescriptions about relative pronouns, we find that the wh words (the pronouns who/whom/whose and which, the adverbs where, when, why, whither, and whence, and the where + preposition compounds) form a complete system on their own. The pronouns who and which distinguish between personhood or animacy—people and sometimes animals or other personified things get who, while everything else gets which. But both pronouns function restrictively and nonrestrictively, and so do most of the other wh relatives. (Why occurs almost exclusively as a restrictive relative adverb after reason.)

With all of these relative pronouns and adverbs, restrictiveness is indicated with commas in writing or a small pause in speech. There’s no need for a lexical or morphological distinction to show restrictiveness with who or where or any of the others—intonation or punctuation does it all. There are a few irregularities in the system—for instance, which has no genitive form and must use whose or of which, and who declines for cases while which does not—but on the whole it’s rather orderly.

That, on the other hand, is a system all by itself, and it’s rather restricted in its range. It only forms restrictive relative clauses, and then only in a narrow range of syntactic constructions. It can’t follow a preposition (the book of which I spoke rather than *the book of that I spoke) or the demonstrative that (they want that which they can’t have rather than *they want that that they can’t have), and it usually doesn’t occur after coordinating conjunctions. But it doesn’t make the same personhood distinction that who and which do, and it functions as a relative adverb sometimes. In short, the distribution of that is a subset of the distribution of the wh words. They are simply two different ways to make relative clauses, one of which is more constrained.

Proscribing which in its role as a restrictive relative where it overlaps with that doesn’t make the system more regular—it creates a rather strange hole in the middle of the wh relative paradigm and forces speakers to use a word from a completely different paradigm instead. It actually makes the system irregular. It’s a case of missing the forest for the trees. Grammarians have looked at the distribution of which and that, misunderstood it, and tried to fix it based on their misunderstanding. But if they’d step back and look at the system as a whole, they’d see that the problem is an imagined one. If you think the system doesn’t make sense, the solution isn’t to try to hammer it into something that does make sense; the solution is to figure out what kind of sense it makes. And it makes perfect sense as it is.

I’m sure, as Professor Pullum was, that I’m not going to make a lot of converts. I can practically hear copy editors’ responses: But following the rule doesn’t hurt anything! Some readers will write us angry letters if we don’t follow it! It decreases ambiguity! To the first I say, of course it hurts, in that it has a cost that we blithely ignore: every change a copy editor makes takes time, and that time costs money. Are we adding enough value to the works we edit to recoup that cost? I once saw a proof of a book wherein the proofreader had marked every single restrictive which—and there were four or five per page—to be changed to that. How much time did it take to mark all those whiches for two hundred or more pages? How much more time would it have taken for the typesetter to enter those corrections and then deal with all the reflowed text? I didn’t want to find out the answer—I stetted every last one of those changes. Furthermore, the rule hurts all those who don’t follow it and are therefore judged as being sub-par writers at best or idiots at worst, as Pullum discussed in his Lingua Franca post.

To the second response, I’ve said before that I don’t believe we should give so much power to the cranks. Why should they hold veto power over everyone else’s usage? If their displeasure is such a problem, give me some evidence that we should spend so much time and money pleasing them. Show me that the economic cost of not following the rule in print is greater than the cost of following it. But stop saying that we as a society need to cater to this group and assuming that this ends the discussion.

To the last response: No, it really doesn’t. Commas do all the work of disambiguation, as Stan Carey explains. The car which I drive is no more ambiguous than The man who came to dinner. They’re only ambiguous if you have no faith in the writer’s or editor’s ability to punctuate and thus assume that there should be a comma where there isn’t one. But requiring that in place of which doesn’t really solve this problem, because the same ambiguity exists for every other relative clause that doesn’t use that. Note that Bryan Garner allows either who or that with people; why not allow either which or that with things? Stop and ask yourself how you’re able to understand phrases like The house in which I live or The woman whose hair is brown without using a different word to mark that it’s a restrictive clause. And if the that/which rule really is an aid to understanding, give me some evidence. Show me the results of an eye-tracking study or fMRI or at least a well-designed reading comprehension test geared to show the understanding of relative clauses. But don’t insist on enforcing a language-wide change without some compelling evidence.

The problem with all the justifications for the rule is that they’re post hoc. Someone made a bad analysis of the English system of relative pronouns and proposed a rule to tidy up an imagined problem. Everything since then has been a rationalization to continue to support a flawed rule. Mark Liberman said it well on Language Log yesterday:

This is a canonical case of a self-appointed authority inventing a grammatical theory, observing that elite writers routinely violate the theory, and concluding not that the theory is wrong or incomplete, but that the writers are in error.

Unfortunately, this is often par for the course with prescriptive rules. The rule is taken a priori as correct and authoritative, and all evidence refuting the rule is ignored or waved away so as not to undermine it. Prescriptivism has come a long way in the last century, especially in the last decade or so as corpus tools have made research easy and data more accessible. But there’s still a long way to go.
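As a toy illustration of the kind of research those corpus tools enable (a sketch only, with invented sample sentences and crude regex heuristics, not a real corpus study, which would use a tagged corpus like COCA), here is how one might tally that against restrictive and nonrestrictive which in a text:

```python
import re

# Toy heuristic: a "which" preceded by a comma is treated as
# nonrestrictive; a "which" with no preceding comma as restrictive.
# The sample text is invented for this example.

sample = (
    "The car which I drive is old. The car, which I drive daily, "
    "is old. The book that I read was long."
)

counts = {
    "that": len(re.findall(r"\bthat\b", sample, re.IGNORECASE)),
    "restrictive which": len(re.findall(r"(?<!,) which\b", sample, re.IGNORECASE)),
    "nonrestrictive which": len(re.findall(r", which\b", sample, re.IGNORECASE)),
}
print(counts)
# {'that': 1, 'restrictive which': 1, 'nonrestrictive which': 1}
```

Run over a large sample of edited American prose, a count like this is exactly the sort of evidence that could show whether the that/which rule is actually being followed in recent publications.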

Update: Mark Liberman has a new post on the that/which rule which includes links to many of the previous Language Log posts on the subject.


What Descriptivism Is and Isn’t

A few weeks ago, the New Yorker published what is nominally a review of Henry Hitchings’ book The Language Wars (which I still have not read but have been meaning to) but which was really more of a thinly veiled attack on what its author, Joan Acocella, sees as the moral and intellectual failings of linguistic descriptivism. In what John McIntyre called “a bad week for Joan Acocella”, the whole mess was addressed multiple times by various bloggers and other writers.* I wanted to write about it at the time but was too busy, but then the New Yorker did me a favor by publishing a follow-up, “Inescapably, You’re Judged by Your Language”, which was equally off-base, so I figured that the door was still open.

I suspected from the first paragraph that Acocella’s article was headed for trouble, and the second paragraph quickly confirmed it. For starters, her brief description of the history and nature of English sounds like it’s based more on folklore than fact. A lot of people lived in Great Britain before the Anglo-Saxons arrived, and their linguistic contributions were effectively nil. But that’s relatively small stuff. The real problem is that she doesn’t really understand what descriptivism is, and she doesn’t understand that she doesn’t understand, so she spends the next five pages tilting at windmills.

Acocella says that descriptivists “felt that all we could legitimately do in discussing language was to say what the current practice was.” This statement is far too narrow, and not only because it completely leaves out historical linguistics. As a linguist, I think it’s odd to describe linguistics as merely saying what the current practice is, since it makes it sound as though all linguists study is usage. Do psycholinguists say what the current practice is when they do eye-tracking studies or other psychological experiments? Do phonologists or syntacticians say what the current practice is when they devise abstract systems of ordered rules to describe the phonological or syntactic system of a language? What about experts in translation or first-language acquisition or computational linguistics? Obviously there’s far more to linguistics than simply saying what the current practice is.

But when it does come to describing usage, we linguists love facts and complexity. We’re less interested in declaring what’s correct or incorrect than we are in uncovering all the nitty-gritty details. It is true, though, that many linguists are at least a little antipathetic to prescriptivism, but not without justification. Because we linguists tend to deal in facts, we take a rather dim view of claims about language that don’t appear to be based in fact, and, by extension, of the people who make those claims. And because many prescriptions make assertions that are based in faulty assumptions or spurious facts, some linguists become skeptical or even hostile to the whole enterprise.

But it’s important to note that this hostility is not actually descriptivism. It’s also, in my experience, not nearly as common as a lot of prescriptivists seem to assume. I think most linguists don’t really care about prescriptivism unless they’re dealing with an officious copyeditor on a manuscript. It’s true that some linguists do spend a fair amount of effort attacking prescriptivism in general, but again, this is not actually descriptivism; it’s simply anti-prescriptivism.

Some other linguists (and some prescriptivists) argue for a more empirical basis for prescriptions, but this isn’t actually descriptivism either. As Language Log’s Mark Liberman argued here, it’s just prescribing on the basis of evidence rather than personal taste, intuition, tradition, or peevery.

Of course, all of this is not to say that descriptivists don’t believe in rules, despite what the New Yorker writers think. Even the most anti-prescriptivist linguist still believes in rules, but not necessarily the kind that most people think of. Many of the rules that linguists talk about are rather abstract schematics that bear no resemblance to the rules that prescriptivists talk about. For example, here’s a rather simple one, the rule describing intervocalic alveolar flapping (in a nutshell, the process by which a word like latter comes to sound like ladder) in some dialects of English:

/t, d/ → [ɾ] / V́ __ V (an alveolar stop becomes the flap [ɾ] between a stressed vowel and a following unstressed vowel)
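Rules like this are mechanical enough that you can execute them. Here’s a minimal sketch (the simplified ARPABET-style transcriptions and the helper function are invented for this example) that applies the flapping rule to a list of phones:

```python
# Toy implementation of the flapping rule: /t/ or /d/ becomes the flap
# (ARPABET "DX") between a stressed vowel (marked "1") and an unstressed
# vowel (marked "0"). Transcriptions are simplified ARPABET-style lists.

def flap(phones):
    """Apply intervocalic alveolar flapping to a list of phones."""
    out = list(phones)
    for i in range(1, len(phones) - 1):
        prev, cur, nxt = phones[i - 1], phones[i], phones[i + 1]
        if cur in ("T", "D") and prev.endswith("1") and nxt.endswith("0"):
            out[i] = "DX"
    return out

print(flap(["L", "AE1", "T", "ER0"]))  # ['L', 'AE1', 'DX', 'ER0'] (latter)
print(flap(["L", "AE1", "D", "ER0"]))  # ['L', 'AE1', 'DX', 'ER0'] (ladder)
print(flap(["T", "AY1", "M"]))         # ['T', 'AY1', 'M'] (initial /t/ untouched)
```

Note that latter and ladder come out identical, which is exactly the homophony the rule predicts for these dialects.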

Rules like these constitute the vast bulk of the language, though they’re largely subconscious and unseen, like a sort of linguistic dark matter. The entire canon of prescriptions (my advisor has identified at least 10,000 distinct prescriptive rules in various handbooks, though only a fraction of these are repeated) seems rather peripheral and inconsequential to most linguists, which is another reason why we get annoyed when prescriptivists insist on their importance or identify standard English with them. Despite what most people think, standard English is not really defined by prescriptive rules, which makes it somewhat disingenuous and ironic for prescriptivists to call us hypocrites for writing in standard English.

If there’s anything disingenuous about linguists’ belief in rules, it’s that we’re not always clear about what kinds of rules we’re talking about. It’s easy to say that we believe in the rules of standard English and good communication and whatnot, but we’re often pretty vague about just what exactly those rules are. But that’s probably a topic for another day.

*A roundup of some of the posts on the recent brouhaha:

“Cheap Shot”, “A Bad Week for Joan Acocella”, “Daddy, Are Prescriptivists Real?”, and “Unmourned: The Queen’s English Society” by John McIntyre

“Rules and Rules” and “A Half Century of Usage Denialism” by Mark Liberman

“Descriptivists as Hypocrites (Again)” by Jan Freeman

“Ignorant Blathering at The New Yorker” by Stephen Dodson, aka Languagehat

“Re: The Language Wars” and “False Fronts in the Language Wars” by Steven Pinker

“The New Yorker versus the Descriptivist Specter” by Ben Zimmer

“Speaking Truth about Power” by Nancy Friedman

“Sator Resartus” by Ben Yagoda

I’m sure there are others that I’ve missed. If you know of any more, feel free to make note of them in the comments.


Rules, Evidence, and Grammar

In case you haven’t heard, it’s National Grammar Day, and that seemed as good a time as any to reflect a little on the role of evidence in discussing grammar rules. (Goofy at Bradshaw of the Future apparently had the same idea.) A couple of months ago, Geoffrey Pullum made the argument in this post on Lingua Franca that it’s impossible to talk about what’s right or wrong in language without considering the evidence. Is singular they grammatical and standard? How do you know?

For most people, I think, the answer is pretty simple: you look it up in a source that you trust. If the source says it’s grammatical or correct, it is. If it doesn’t, it isn’t. Singular they is wrong because many authoritative sources say it is. End of story. And if you try to argue that the sources aren’t valid or reliable, you’re labeled an anything-goes type who believes we should just toss all the rules out the window and embrace linguistic anarchy.

The question is, where did these sources get their authority to say what’s right and wrong?

That is, when someone says that you should never use they as a singular pronoun or start a sentence with hopefully or use less with count nouns, why do you suppose that the rules they put forth are valid? The rules obviously haven’t been inscribed on stone tablets by the finger of the Lord, but they have to come from somewhere. Every language is different, and languages are constantly changing, so I think we have to recognize that there is no universal, objective truth when it comes to grammar and usage.

Unfortunately, David Foster Wallace appears to have fallen into exactly that trap. In his famous Harper’s article “Tense Present: Democracy, English, and the Wars over Usage,” he quotes the introduction to The American College Dictionary, which says, “A dictionary can be an ‘authority’ only in the sense in which a book of chemistry or of physics or of botany can be an ‘authority’: by the accuracy and the completeness of its record of the observed facts of the field examined, in accord with the latest principles and techniques of the particular science.”

He retorts,

This is so stupid it practically drools. An “authoritative” physics text presents the results of physicists’ observations and physicists’ theories about those observations. If a physics textbook operated on Descriptivist principles, the fact that some Americans believe that electricity flows better downhill (based on the observed fact that power lines tend to run high above the homes they serve) would require the Electricity Flows Better Downhill Theory to be included as a “valid” theory in the textbook—just as, for Dr. Fries, if some Americans use infer for imply, the use becomes an ipso facto “valid” part of the language.

The irony of his first sentence is almost overwhelming. Physics is a set of universal laws that can be observed and tested, and electricity works regardless of what anyone believes. Language, on the other hand, is quite different. In fact, Wallace tacitly acknowledges the difference—without explaining his apparent contradiction—immediately after: “It isn’t scientific phenomena they’re tabulating but rather a set of human behaviors, and a lot of human behaviors are—to be blunt—moronic. Try, for instance, to imagine an ‘authoritative’ ethics textbook whose principles were based on what most people actually do.”[1]

Now here he hits on an interesting question. Any argument about right or wrong in language ultimately comes down to one of two options: it’s wrong because it’s absolutely, objectively wrong, or it’s wrong because arbitrary societal convention says it’s wrong. The former is untenable, but the latter doesn’t give us any straightforward answers. If there is no objective truth in usage, then how do we know what’s right and wrong?

Wallace tries to make the argument about ethics; sloppy language leads to real problems like people accidentally eating poison mushrooms. But look at his gargantuan list of peeves and shibboleths on the first page of the article. How many of them lead to real ethical problems? Does singular they pose any kind of ethical problem? What about sentential hopefully or less with count nouns? I don’t think so.

So if there’s no ethical problem with disputed usage, then we’re still left with the question, what makes it wrong? Here we get back to Pullum’s attempt to answer the question: let’s look at the evidence. And, because we can admit, like Wallace, that some people’s behavior is moronic, let’s limit ourselves to looking at the evidence from those speakers and writers whose language can be said to be most standard. What we find even then is that a lot of the usage and grammar rules that have been put forth, from Bishop Robert Lowth to Strunk and White to Bryan Garner, don’t jibe with actual usage.

Edward Finegan seized on this discrepancy in an article a few years ago. In discussing sentential hopefully, he quotes Garner as saying that it is “all but ubiquitous—even in legal print. Even so, the word received so much negative attention in the 1970s and 1980s that many writers have blacklisted it, so using it at all today is a precarious venture. Indeed, careful writers and speakers avoid the word even in its traditional sense, for they’re likely to be misunderstood if they use it in the old sense.”[2] Finegan says, “I could not help but wonder how a reflective and careful analyst could concede that hopefully is all but ubiquitous in legal print and claim in the same breath that careful writers and speakers avoid using it.”[3]

The problem when you start questioning the received wisdom on grammar and usage is that you make a lot of people very angry. In a recent conversation on Twitter, Mignon Fogarty, aka Grammar Girl, said, “You would not believe (or maybe you would) how much grief I’m getting for saying ‘data’ can sometimes be singular.” I responded, “Sadly, I can. For some people, grammar is more about cherished beliefs than facts, and they don’t like having them challenged.” They don’t want to hear arguments about authority and evidence and deriving rules from what educated speakers actually use. They want to believe that there’s some deeper truth that justifies their preferences and peeves, and that’s probably not going to change anytime soon. But for now, I’ll keep trying.

Notes

1. David Foster Wallace, “Tense Present: Democracy, English, and the Wars over Usage,” Harper’s Monthly, April 2001, 47.
2. Bryan A. Garner, A Dictionary of Modern Legal Usage, 2nd ed. (New York: Oxford University Press, 1995).
3. Edward Finegan, “Linguistic Prescription: Familiar Practices and New Perspectives,” Annual Review of Applied Linguistics 23 (2003): 216.