Arrant Pedantry

Lynne Truss and Chicken Little

Lynne Truss, author of the bestselling Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation, is at it again, crying with her characteristic hyperbole and lack of perspective that the linguistic sky is falling because she got a minor bump on the head.

As usual, Truss hides behind the it’s-just-a-joke-but-no-seriously defense. She starts by claiming to have “an especially trivial linguistic point to make” but then asserts that the English language is doomed, and it’s all linguists’ fault. According to Truss, linguists have sat back and watched while literacy levels have declined—and have profited from doing so.

What exactly is the problem this time? That some people mistakenly write some phrases as compound words when they’re not, such as maybe for may be or anyday for any day. (This isn’t even entirely true; anyday is almost nonexistent in print, even in American English, according to Google Ngram Viewer.) I guess from anyday it’s a short, slippery slope to complete language chaos, and then “we might as well all go off and kill ourselves.”

But it’s not clear what her complaint about erroneous compound words has to do with literacy levels. If the only problem with literacy is that some people write maybe when they mean may be, then it seems to be, as she originally says, an especially trivial point. Yes, some people deviate from standard orthography. While this may be irritating and may occasionally cause confusion, it’s not really an indication that people don’t know how to read or write. Even educated people make mistakes, and this has always been the case. It’s not a sign of impending doom.

But let’s consider the analogies she chose to illustrate linguists’ supposed negligence. She says that we’re like epidemiologists who simply catalog all the ways in which people die from diseases or like architects who make notes while buildings collapse. (Interestingly, she makes two remarks about how well paid linguists are. Of course, professors don’t actually make that much, especially those in the humanities or social sciences. And it smacks of hypocrisy from someone whose book has sold 3 million copies.)

Perhaps there is a minor crisis in literacy, at least in the UK. This article says that 16–24-year-olds in the UK are lagging behind many counterparts in other first-world countries. (The headline suggests that they’re trailing the entire world, but the study only looked at select countries from Europe and east Asia.) Wikipedia, however, says that the UK has a 99 percent literacy rate. Maybe young people are slipping a bit, and this is certainly something that educators should address, but it doesn’t appear that countless people are dying from an epidemic of slightly declining literacy rates or that our linguistic structures are collapsing. This is simply not the linguistic apocalypse that Truss makes it out to be.

Anyway, even if it were, why would it be linguists’ job to do something about it? Literacy is taught in primary and secondary school and is usually the responsibility of reading, language arts, or English teachers—not linguists. Why not criticize English professors for sitting back and collecting fat paychecks for writing about literary theory while our kids struggle to read? Because they’re not her ideological enemy, that’s why. Linguists often oppose language pedants like Truss, and so Truss finds some reason—contrived though it may be—to blame them. Though some applied linguists do in fact study things like language acquisition and literacy, most linguists hew to the more abstract and theoretical side of language—syntax, morphology, phonology, and so on. Blaming descriptive linguists for children’s illiteracy is like blaming physicists for children’s inability to ride bikes.

And maybe the real reason why linguists are unconcerned about the upcoming linguistic apocalypse is that there simply isn’t one. Maybe linguists are like meteorologists who observe that, contrary to the claims of some individuals, the sky is not actually falling. In studying the structure of other languages and the ways in which languages change, linguists have realized that language change is not decay. Consider the opening lines from Beowulf, an Old English epic poem over a thousand years old:

HWÆT, WE GAR-DEna in geardagum,
þeodcyninga þrym gefrunon,
hu ða æþelingas ellen fremedon!

Only two words are instantly recognizable to modern English speakers: we and in. The changes from Old English to modern English haven’t made the language better or worse—just different. Some people maintain that they understand that language changes but say that they still oppose certain changes that seem to come from ignorance or laziness. They fear that if we’re not vigilant in opposing such changes, we’ll lose our ability to communicate. But the truth is that most of those changes from Old English to modern English also came from ignorance or laziness, and we seem to communicate just fine today.

Languages can change very radically over time, but contrary to popular belief, they never devolve into caveman grunting. This is because we all have an interest in both understanding and being understood, and we’re flexible enough to adapt to changes that happen within our lifetime. And with language, as opposed to morality or ethics, there is no inherent right or wrong. Correct language is, in a nutshell, what its users consider to be correct for a given time, place, and audience. One generation’s ignorant change is sometimes the next generation’s proper grammar.

It’s no surprise that Truss fundamentally misunderstands what linguists and lexicographers do. She even admits that she was “seriously unqualified” for linguistic debate a few years back, and it seems that nothing has changed. But that probably won’t stop her from continuing to prophesy the imminent destruction of the English language. Maybe Truss is less like Chicken Little and more like the boy who cried wolf, proclaiming disaster not because she actually sees one coming, but rather because she likes the attention.

12 Mistakes Nearly Everyone Who Writes About Grammar Mistakes Makes

There are a lot of bad grammar posts in the world. These days, anyone with a blog and a bunch of pet peeves can crank out a click-bait listicle of supposed grammar errors. There’s just one problem—these articles are often full of mistakes of one sort or another themselves. Once you’ve read a few, you start noticing some patterns. Inspired by a recent post titled “Grammar Police: Twelve Mistakes Nearly Everyone Makes”, I decided to make a list of my own.

1. Confusing grammar with spelling, punctuation, and usage. Many people who write about grammar seem to think that grammar means “any sort of rule of language, especially writing”. But strictly speaking, grammar refers to the structural rules of language, namely morphology (basically the way words are formed from roots and affixes), phonology (the system of sounds in a language), and syntax (the way phrases and clauses are formed from words). Most complaints about grammar are really about punctuation, spelling (such as problems with you’re/your and other homophone confusion), or usage (which is often about semantics). This post, for instance, spends two of its twelve points on commas and a third on quotation marks.

2. Treating style choices as rules. This article says that you should always use an Oxford (or serial) comma (the comma before and or or in a list) and that quotation marks should always follow commas and periods, but the latter is true only in most American styles (linguists often put the commas and periods outside quotes, and so do many non-American styles), and the former is only true of some American styles. I may prefer serial commas, but I’m not going to insist that everyone who doesn’t use them is making a mistake. It’s simply a matter of style, and style varies from one publisher to the next.

3. Ignoring register. There’s a time and a place for following the rules, but the writers of these lists typically treat English as though it had only one register: formal writing. They ignore the fact that following the rules in the wrong setting often sounds stuffy and stilted. Formal written English is not the only legitimate form of the language, and the rules of formal written English don’t apply in all situations. Sure, it’s useful to know when to use who and whom, but it’s probably more useful to know that saying To whom did you give the book? in casual conversation will make you sound like a pompous twit.

4. Saying that a disliked word isn’t a word. You may hate irregardless (I do), but that doesn’t mean it’s not a word. If it has its own meaning and you can use it in a sentence, guess what—it’s a word. Flirgle, on the other hand, is not a word—it’s just a bunch of sounds that I strung together in word-like fashion. Irregardless and its ilk may not be appropriate for use in formal registers, and you certainly don’t have to like them, but as Stan Carey says, “‘Not a word’ is not an argument.”

5. Turning proposals into ironclad laws. This one happens more often than you think. A great many rules of grammar and usage started life as proposals that became codified as inviolable laws over the years. The popular that/which rule, which I’ve discussed at length before, began as a proposal—not “everyone gets this wrong” but “wouldn’t it be nice if we made a distinction here?” But nowadays people have forgotten that a century or so ago, this rule simply didn’t exist, and they say things like “This is one of the most common mistakes out there, and understandably so.” (Actually, no, you don’t understand why everyone gets this “wrong”, because you don’t realize that this rule is a relatively recent invention by usage commentators that some copy editors and others have decided to enforce.) It’s easy to criticize people for not following rules that you’ve made up.

6. Failing to discuss exceptions to rules. Invented usage rules often ignore the complexities of actual usage. Lists of rules such as these go a step further and often ignore the complexities of those rules. For example, even if you follow the that/which rule, you need to know that you can’t use that after a preposition or after the demonstrative pronoun that—you have to use a restrictive which. Likewise, the less/fewer rule is usually reduced to statements like “use fewer for things you can count”, which leads to ugly and unidiomatic constructions like “one fewer thing to worry about”. Affect and effect aren’t as simple as some people make them out to be, either; affect is usually a verb and effect a noun, but affect can also be a noun (with stress on the first syllable) referring to the outward manifestation of emotions, while effect can be a verb meaning to cause or to make happen. Sometimes dumbing down rules just makes them dumb.

7. Overestimating the frequency of errors. The writer of this list says that misuse of nauseous is “Undoubtedly the most common mistake I encounter.” This claim seems worth doubting to me; I can’t remember the last time I heard someone say “nauseous”. Even if you consider it a misuse, it’s got to rate pretty far down the list in terms of frequency. This is why linguists like to rely on data for testable claims—because people tend to fall prey to all kinds of cognitive biases such as the frequency illusion.

8. Believing that etymology is destiny. Words change meaning all the time—it’s just a natural and inevitable part of language. But some people get fixated on the original meanings of some words and believe that those are the only correct meanings. For example, they’ll say that you can only use decimate to mean “to destroy one in ten”. This may seem like a reasonable argument, but it quickly becomes untenable when you realize that almost every single word in the language has changed meaning at some point, and that’s just in the few thousand years in which language has been written or can be reconstructed. And sometimes a new meaning is more useful anyway (which is precisely why it displaced an old meaning). As Jan Freeman said, “We don’t especially need a term that means ‘kill one in 10.'”

9. Simply bungling the rules. If you’re going to chastise people for not following the rules, you should know those rules yourself and be able to explain them clearly. You may dislike singular they, for instance, but you should know that it’s not a case of subject-predicate disagreement, as the author of this list claims—it’s an issue of pronoun-antecedent agreement, which is not the same thing. This list says that “‘less’ is reserved for hypothetical quantities”, but this isn’t true either; it’s reserved for noncount nouns, singular count nouns, and plural count nouns that aren’t generally thought of as discrete entities. Use of less has nothing to do with being hypothetical. And this one says that punctuation always goes inside quotation marks. In most American styles, it’s only commas and periods that always go inside. Colons, semicolons, and dashes always go outside, and question marks and exclamation marks only go inside sometimes.

10. Saying that good grammar leads to good communication. Contrary to popular belief, bad grammar (even using the broad definition that includes usage, spelling, and punctuation) is not usually an impediment to communication. A sentence like Ain’t nobody got time for that is quite intelligible, even though it violates several rules of Standard English. The grammar and usage of nonstandard varieties of English are often radically different from Standard English, but different does not mean worse or less able to communicate. The biggest differences between Standard English and all its nonstandard varieties are that the former has been codified and that it is used in all registers, from casual conversation to formal writing. Many of the rules that these lists propagate are really more about signaling to the grammatical elite that you’re one of them—not that this is a bad thing, of course, but let’s not mistake it for something it’s not. In fact, claims about improving communication are often just a cover for the real purpose of these lists, which is . . .

11. Using grammar to put people down. This post sympathizes with someone who worries about being crucified by the grammar police and then says a few paragraphs later, “All hail the grammar police!” In other words, we like being able to crucify those who make mistakes. Then there are the put-downs about people’s education (“You’d think everyone learned this rule in fourth grade”) and more outright insults (“5 Grammar Mistakes that Make You Sound Like a Chimp”). After all, what’s the point in signaling that you’re one of the grammatical elite if you can’t take a few potshots at the ignorant masses?

12. Forgetting that correct usage ultimately comes from users. The disdain for the usage of common people is symptomatic of a larger problem: forgetting that correct usage ultimately comes from the people, not from editors, English teachers, or usage commentators. You’re certainly entitled to have your opinion about usage, but at some point you have to recognize that trying to fight the masses on a particular point of usage (especially if it’s a made-up rule) is like trying to fight the rising tide. Those who have invested in learning the rules naturally feel defensive of them and of the language in general, but you have no more right to the language than anyone else. You can be restrictive if you want and say that Standard English is based on the formal usage of educated writers, but any standard that is based on a set of rules that are simply invented and passed down is ultimately untenable.

And a bonus mistake:

13. Making mistakes themselves. It happens to the best of us. The act of making grammar or spelling mistakes in the course of pointing out someone else’s mistakes even has a name, Muphry’s law. This post probably has its fair share of typos. (If you spot one, feel free to point it out—politely!—in the comments.)

This post also appears on Huffington Post.

My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase and improve my sample size. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones’>Jones’s), which>that, towards>toward, moving only, and increasing parallelism. These changes were not only numerically the most popular but were also edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they’re relatively recent inventions of usage writers. I’ve already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.
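
(If you’re curious how those rates work, the arithmetic is simple: for each rule, count the places where it could have been applied and the places where an editor actually applied it. Here’s a minimal sketch in Python with invented counts, not the actual thesis numbers, just to show the kind of tally involved.)

```python
# Sketch only: invented counts, not the real thesis data.
# An "opportunity" is a spot where a given rule could have been applied;
# the edit rate is the share of those spots that editors actually changed.
edits = {
    "towards > toward": {"opportunities": 10, "changed": 8},
    "which > that (restrictive)": {"opportunities": 25, "changed": 18},
    "Jones' > Jones's": {"opportunities": 12, "changed": 10},
}

for rule, counts in edits.items():
    rate = counts["changed"] / counts["opportunities"]
    print(f"{rule}: edited {rate:.0%} of the time")
```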

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which are nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see them tearing down the rules, like the that/which and toward/towards distinctions, that define it and set it apart from educated but unedited usage. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.

Relative Pronoun Redux

A couple of weeks ago, Geoff Pullum wrote on Lingua Franca about the that/which rule, which he calls “a rule which will live in infamy”. (For my own previous posts on the subject, see here, here, and here.) He runs through the whole gamut of objections to the rule—that the rule is an invention, that it started as a suggestion and became canonized as grammatical law, that it has “an ugly clutch of exceptions”, that great writers (including E. B. White himself) have long used restrictive which, and that it’s really the commas that distinguish between restrictive and nonrestrictive clauses, as they do with other relative pronouns like who.

It’s a pretty thorough deconstruction of the rule, but in a subsequent Language Log post, he despairs of converting anyone, saying, “You can’t talk people out of their positions on this; they do not want to be confused with facts.” And sure enough, the commenters on his Lingua Franca post proved him right. Perhaps most maddening was this one from someone posting as losemygrip:

Just what the hell is wrong with trying to regularize English and make it a little more consistent? Sounds like a good thing to me. Just because there are inconsistent precedents doesn’t mean we can’t at least try to regularize things. I get so tired of people smugly proclaiming that others are being officious because they want things to make sense.

The desire to fix a problem with the language may seem noble, but in this case the desire stems from a fundamental misunderstanding of the grammar of relative pronouns, and the that/which rule, rather than regularizing the language and making it a little more consistent, actually introduces a rather significant irregularity and inconsistency. The real problem is that few if any grammarians realize that English has two separate systems of relativization: the wh words and that, and they work differently.

If we ignore the various prescriptions about relative pronouns, we find that the wh words (the pronouns who/whom/whose and which, the adverbs where, when, why, whither, and whence, and the where + preposition compounds) form a complete system on their own. The pronouns who and which distinguish personhood or animacy—people and sometimes animals or other personified things get who, while everything else gets which. But both pronouns function restrictively and nonrestrictively, and so do most of the other wh relatives. (Why occurs almost exclusively as a restrictive relative adverb after reason.)

With all of these relative pronouns and adverbs, restrictiveness is indicated with commas in writing or a small pause in speech. There’s no need for a lexical or morphological distinction to show restrictiveness with who or where or any of the others—intonation or punctuation does it all. There are a few irregularities in the system—for instance, which has no genitive form and must use whose or of which, and who declines for cases while which does not—but on the whole it’s rather orderly.

That, on the other hand, is a system all by itself, and it’s rather restricted in its range. It only forms restrictive relative clauses, and then only in a narrow range of syntactic constructions. It can’t follow a preposition (the book of which I spoke rather than *the book of that I spoke) or the demonstrative that (they want that which they can’t have rather than *they want that that they can’t have), and it usually doesn’t occur after coordinating conjunctions. But it doesn’t make the same personhood distinction that who and which do, and it functions as a relative adverb sometimes. In short, the distribution of that is a subset of the distribution of the wh words. They are simply two different ways to make relative clauses, one of which is more constrained.
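
To make the subset point concrete, here’s a toy sketch in Python (the slot inventory is simplified from the description above; it’s an illustration, not a syntactic analysis) that treats each relativization strategy as the set of slots it can fill:

```python
# Toy model only: a simplified inventory of the slots each strategy can fill,
# based on the description above, not a full syntactic analysis.
wh_slots = {
    "restrictive subject", "restrictive object",
    "nonrestrictive subject", "nonrestrictive object",
    "object of stranded preposition", "object of fronted preposition",
    "genitive (whose / of which)",
}
that_slots = {
    "restrictive subject", "restrictive object",
    "object of stranded preposition",
}

print(that_slots < wh_slots)          # True: a proper subset of the wh system
print(sorted(wh_slots - that_slots))  # the slots only the wh words can fill
```

The point of the toy model is just this: every slot available to that is also available to a wh word, but not the other way around.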

Proscribing which in its role as a restrictive relative where it overlaps with that doesn’t make the system more regular—it creates a rather strange hole in the middle of the wh relative paradigm and forces speakers to use a word from a completely different paradigm instead. It actually makes the system irregular. It’s a case of missing the forest for the trees. Grammarians have looked at the distribution of which and that, misunderstood it, and tried to fix it based on their misunderstanding. But if they’d step back and look at the system as a whole, they’d see that the problem is an imagined one. If you think the system doesn’t make sense, the solution isn’t to try to hammer it into something that does make sense; the solution is to figure out what kind of sense it makes. And it makes perfect sense as it is.

I’m sure, as Professor Pullum was, that I’m not going to make a lot of converts. I can practically hear copy editors’ responses: But following the rule doesn’t hurt anything! Some readers will write us angry letters if we don’t follow it! It decreases ambiguity! To the first I say, of course it hurts, in that it has a cost that we blithely ignore: every change a copy editor makes takes time, and that time costs money. Are we adding enough value to the works we edit to recoup that cost? I once saw a proof of a book wherein the proofreader had marked every single restrictive which—and there were four or five per page—to be changed to that. How much time did it take to mark all those whiches for two hundred or more pages? How much more time would it have taken for the typesetter to enter those corrections and then deal with all the reflowed text? I didn’t want to find out the answer—I stetted every last one of those changes. Furthermore, the rule hurts all those who don’t follow it and are therefore judged as being sub-par writers at best or idiots at worst, as Pullum discussed in his Lingua Franca post.

To the second response, I’ve said before that I don’t believe we should give so much power to the cranks. Why should they hold veto power for everyone else’s usage? If their displeasure is such a problem, give me some evidence that we should spend so much time and money pleasing them. Show me that the economic cost of not following the rule in print is greater than the cost of following it. But stop saying that we as a society need to cater to this group and assuming that this ends the discussion.

To the last response: No, it really doesn’t. Commas do all the work of disambiguation, as Stan Carey explains. The car which I drive is no more ambiguous than The man who came to dinner. They’re only ambiguous if you have no faith in the writer’s or editor’s ability to punctuate and thus assume that there should be a comma where there isn’t one. But requiring that in place of which doesn’t really solve this problem, because the same ambiguity exists for every other relative clause that doesn’t use that. Note that Bryan Garner allows either who or that with people; why not allow either which or that with things? Stop and ask yourself how you’re able to understand phrases like The house in which I live or The woman whose hair is brown without using a different word to mark that it’s a restrictive clause. And if the that/which rule really is an aid to understanding, give me some evidence. Show me the results of an eye-tracking study or fMRI or at least a well-designed reading comprehension test geared to show the understanding of relative clauses. But don’t insist on enforcing a language-wide change without some compelling evidence.

The problem with all the justifications for the rule is that they’re post hoc. Someone made a bad analysis of the English system of relative pronouns and proposed a rule to tidy up an imagined problem. Everything since then has been a rationalization to continue to support a flawed rule. Mark Liberman said it well on Language Log yesterday:

This is a canonical case of a self-appointed authority inventing a grammatical theory, observing that elite writers routinely violate the theory, and concluding not that the theory is wrong or incomplete, but that the writers are in error.

Unfortunately, this is often par for the course with prescriptive rules. The rule is taken a priori as correct and authoritative, and all evidence refuting the rule is ignored or waved away so as not to undermine it. Prescriptivism has come a long way in the last century, especially in the last decade or so as corpus tools have made research easy and data more accessible. But there’s still a long way to go.

Update: Mark Liberman has a new post on the that/which rule which includes links to many of the previous Language Log posts on the subject.

What Descriptivism Is and Isn’t

A few weeks ago, the New Yorker published what is nominally a review of Henry Hitchings’ book The Language Wars (which I still have not read but have been meaning to) but which was really more of a thinly veiled attack on what its author, Joan Acocella, sees as the moral and intellectual failings of linguistic descriptivism. In what John McIntyre called “a bad week for Joan Acocella,” the whole mess was addressed multiple times by various bloggers and other writers.* I wanted to write about it at the time but was too busy, but then the New Yorker did me a favor by publishing a follow-up, “Inescapably, You’re Judged by Your Language”, which was equally off-base, so I figured that the door was still open.

I suspected from the first paragraph that Acocella’s article was headed for trouble, and the second paragraph quickly confirmed it. For starters, her brief description of the history and nature of English sounds like it’s based more on folklore than fact. A lot of people lived in Great Britain before the Anglo-Saxons arrived, and their linguistic contributions were effectively nil. But that’s relatively small stuff. The real problem is that she doesn’t really understand what descriptivism is, and she doesn’t understand that she doesn’t understand, so she spends the next five pages tilting at windmills.

Acocella says that descriptivists “felt that all we could legitimately do in discussing language was to say what the current practice was.” This statement is far too narrow, and not only because it completely leaves out historical linguistics. As a linguist, I think it’s odd to describe linguistics as merely saying what the current practice is, since it makes it sound as though all linguists study is usage. Do psycholinguists say what the current practice is when they do eye-tracking studies or other psychological experiments? Do phonologists or syntacticians say what the current practice is when they devise abstract systems of ordered rules to describe the phonological or syntactic system of a language? What about experts in translation or first-language acquisition or computational linguistics? Obviously there’s far more to linguistics than simply saying what the current practice is.

But when it does come to describing usage, we linguists love facts and complexity. We’re less interested in declaring what’s correct or incorrect than we are in uncovering all the nitty-gritty details. It is true, though, that many linguists are at least a little antipathetic to prescriptivism, but not without justification. Because we linguists tend to deal in facts, we take a rather dim view of claims about language that don’t appear to be based in fact, and, by extension, of the people who make those claims. And because many prescriptions make assertions that are based in faulty assumptions or spurious facts, some linguists become skeptical or even hostile to the whole enterprise.

But it’s important to note that this hostility is not actually descriptivism. It’s also, in my experience, not nearly as common as a lot of prescriptivists seem to assume. I think most linguists don’t really care about prescriptivism unless they’re dealing with an officious copyeditor on a manuscript. It’s true that some linguists do spend a fair amount of effort attacking prescriptivism in general, but again, this is not actually descriptivism; it’s simply anti-prescriptivism.

Some other linguists (and some prescriptivists) argue for a more empirical basis for prescriptions, but this isn’t actually descriptivism either. As Language Log’s Mark Liberman argued here, it’s just prescribing on the basis of evidence rather than personal taste, intuition, tradition, or peevery.

Of course, all of this is not to say that descriptivists don’t believe in rules, despite what the New Yorker writers think. Even the most anti-prescriptivist linguist still believes in rules, but not necessarily the kind that most people think of. Many of the rules that linguists talk about are rather abstract schematics that bear no resemblance to the rules that prescriptivists talk about. For example, here’s a rather simple one, the rule describing intervocalic alveolar flapping (in a nutshell, the process by which a word like latter comes to sound like ladder) in some dialects of English:

intervocalic alveolar flapping
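
(A simplified version of the rule, in rough textbook notation: /t, d/ → [ɾ] / V __ V̆; that is, the alveolar stops /t/ and /d/ are pronounced as the flap [ɾ] between vowels when the following vowel is unstressed, which is why latter and ladder come out sounding alike.)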

Rules like these constitute the vast bulk of the language, though they’re largely subconscious and unseen, like a sort of linguistic dark matter. The entire canon of prescriptions (my advisor has identified at least 10,000 distinct prescriptive rules in various handbooks, though only a fraction of these are repeated) seems rather peripheral and inconsequential to most linguists, which is another reason why we get annoyed when prescriptivists insist on their importance or identify standard English with them. Despite what most people think, standard English is not really defined by prescriptive rules, which makes it somewhat disingenuous and ironic for prescriptivists to call us hypocrites for writing in standard English.

If there’s anything disingenuous about linguists’ belief in rules, it’s that we’re not always clear about what kinds of rules we’re talking about. It’s easy to say that we believe in the rules of standard English and good communication and whatnot, but we’re often pretty vague about just what exactly those rules are. But that’s probably a topic for another day.

*A roundup of some of the posts on the recent brouhaha:

“Cheap Shot”, “A Bad Week for Joan Acocella”, “Daddy, Are Prescriptivists Real?”, and “Unmourned: The Queen’s English Society” by John McIntyre

“Rules and Rules” and “A Half Century of Usage Denialism” by Mark Liberman

“Descriptivists as Hypocrites (Again)” by Jan Freeman

“Ignorant Blathering at The New Yorker” by Stephen Dodson, aka Languagehat

“Re: The Language Wars” and “False Fronts in the Language Wars” by Steven Pinker

“The New Yorker versus the Descriptivist Specter” by Ben Zimmer

“Speaking Truth about Power” by Nancy Friedman

“Sator Resartus” by Ben Yagoda

I’m sure there are others that I’ve missed. If you know of any more, feel free to make note of them in the comments.

Rules, Evidence, and Grammar

In case you haven’t heard, it’s National Grammar Day, and that seemed as good a time as any to reflect a little on the role of evidence in discussing grammar rules. (Goofy at Bradshaw of the Future apparently had the same idea.) A couple of months ago, Geoffrey Pullum made the argument in this post on Lingua Franca that it’s impossible to talk about what’s right or wrong in language without considering the evidence. Is singular they grammatical and standard? How do you know?

For most people, I think, the answer is pretty simple: you look it up in a source that you trust. If the source says it’s grammatical or correct, it is. If it doesn’t, it isn’t. Singular they is wrong because many authoritative sources say it is. End of story. And if you try to argue that the sources aren’t valid or reliable, you’re labelled an anything-goes type who believes we should just toss all the rules out the window and embrace linguistic anarchy.

The question is, where did these sources get their authority to say what’s right and wrong?

That is, when someone says that you should never use they as a singular pronoun or start a sentence with hopefully or use less with count nouns, why do you suppose that the rules they put forth are valid? The rules obviously haven’t been inscribed on stone tablets by the finger of the Lord, but they have to come from somewhere. Every language is different, and languages are constantly changing, so I think we have to recognize that there is no universal, objective truth when it comes to grammar and usage.

David Foster Wallace apparently fell into the trap of thinking that there was, unfortunately. In his famous Harper’s article “Tense Present: Democracy, English, and the Wars over Usage,” he quotes the introduction to The American College Dictionary, which says, “A dictionary can be an ‘authority’ only in the sense in which a book of chemistry or of physics or of botany can be an ‘authority’: by the accuracy and the completeness of its record of the observed facts of the field examined, in accord with the latest principles and techniques of the particular science.”

He retorts,

This is so stupid it practically drools. An “authoritative” physics text presents the results of physicists’ observations and physicists’ theories about those observations. If a physics textbook operated on Descriptivist principles, the fact that some Americans believe that electricity flows better downhill (based on the observed fact that power lines tend to run high above the homes they serve) would require the Electricity Flows Better Downhill Theory to be included as a “valid” theory in the textbook—just as, for Dr. Fries, if some Americans use infer for imply, the use becomes an ipso facto “valid” part of the language.

The irony of his first sentence is almost overwhelming. Physics is a set of universal laws that can be observed and tested, and electricity works regardless of what anyone believes. Language, on the other hand, is quite different. In fact, Wallace tacitly acknowledges the difference—without explaining his apparent contradiction—immediately after: “It isn’t scientific phenomena they’re tabulating but rather a set of human behaviors, and a lot of human behaviors are—to be blunt—moronic. Try, for instance, to imagine an ‘authoritative’ ethics textbook whose principles were based on what most people actually do.”[1]

Now here he hits on an interesting question. Any argument about right or wrong in language ultimately comes down to one of two options: it’s wrong because it’s absolutely, objectively wrong, or it’s wrong because arbitrary societal convention says it’s wrong. The former is untenable, but the latter doesn’t give us any straightforward answers. If there is no objective truth in usage, then how do we know what’s right and wrong?

Wallace tries to make the argument about ethics; sloppy language leads to real problems like people accidentally eating poison mushrooms. But look at his gargantuan list of peeves and shibboleths on the first page of the article. How many of them lead to real ethical problems? Does singular they pose any kind of ethical problem? What about sentential hopefully or less with count nouns? I don’t think so.

So if there’s no ethical problem with disputed usage, then we’re still left with the question, what makes it wrong? Here we get back to Pullum’s attempt to answer the question: let’s look at the evidence. And, because we can admit, like Wallace, that some people’s behavior is moronic, let’s limit ourselves to looking at the evidence from those speakers and writers whose language can be said to be most standard. What we find even then is that a lot of the usage and grammar rules that have been put forth, from Bishop Robert Lowth to Strunk and White to Bryan Garner, don’t jibe with actual usage.

Edward Finegan seized on this discrepancy in an article a few years back. In discussing sentential hopefully, he quotes Garner as saying that it is “all but ubiquitous—even in legal print. Even so, the word received so much negative attention in the 1970s and 1980s that many writers have blacklisted it, so using it at all today is a precarious venture. Indeed, careful writers and speakers avoid the word even in its traditional sense, for they’re likely to be misunderstood if they use it in the old sense.”[2] Finegan says, “I could not help but wonder how a reflective and careful analyst could concede that hopefully is all but ubiquitous in legal print and claim in the same breath that careful writers and speakers avoid using it.”[3]

The problem when you start questioning the received wisdom on grammar and usage is that you make a lot of people very angry. In a recent conversation on Twitter, Mignon Fogarty, aka Grammar Girl, said, “You would not believe (or maybe you would) how much grief I’m getting for saying ‘data’ can sometimes be singular.” I responded, “Sadly, I can. For some people, grammar is more about cherished beliefs than facts, and they don’t like having them challenged.” They don’t want to hear arguments about authority and evidence and deriving rules from what educated speakers actually use. They want to believe that there are deeper truths that justify their preferences and peeves, and that’s probably not going to change anytime soon. But for now, I’ll keep trying.

  1. [1] David Foster Wallace, “Tense Present: Democracy, English, and the Wars over Usage,” Harper’s Monthly, April 2001, 47.
  2. [2] Bryan A. Garner, A Dictionary of Modern Legal Usage, 2nd ed. (New York: Oxford University Press, 1995).
  3. [3] Edward Finegan, “Linguistic Prescription: Familiar Practices and New Perspectives,” Annual Review of Applied Linguistics 23 (2003): 216.

More on That

As I said in my last post, I don’t think the distribution of that and which is adequately explained by the restrictive/nonrestrictive distinction. It’s true that nearly all thats are restrictive (with a few rare exceptions), but it’s not true that all restrictive relative pronouns are thats and that all whiches are nonrestrictive, even when you follow the traditional rule. In some cases that is strictly forbidden, and in other cases it is disfavored to varying degrees. Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in your analysis.

In researching the topic for this post, I’ve learned a couple of things: (1) I don’t know syntax as well as I should, and (2) the behavior of relatives in English, particularly that, is far more complex than most editors or pop grammarians realize. First of all, there’s apparently been a century-long argument over whether that is even a relative pronoun or actually some sort of relativizing conjunction or particle. (Some linguists seem to prefer the latter, but I won’t wade too deep into that debate.) Previous studies have looked at multiple factors to explain the variation in relativizers, including the animacy of the referent, the distance between the pronoun and its referent, the semantic role of the relative clause, and the syntactic role of the referent.

It’s often noted that that can’t follow a preposition and that it doesn’t have a genitive form of its own (it must use either whose or of which), but no usage guide I’ve seen ever makes mention of the fact that this pattern follows the accessibility hierarchy. That is, in a cross-linguistic analysis, linguists have found an order to the way in which relative clauses are formed. Some languages can only relativize subjects, others can do subjects and verbal objects, yet others can do subjects, verbal objects, and oblique objects (like the objects of prepositions), and so on. For any allowable position on the hierarchy, all positions to the left are also allowable. The hierarchy goes something like this:

subject ≥ direct object ≥ indirect object ≥ object of stranded preposition ≥ object of fronted preposition ≥ possessor noun phrase ≥ object of comparative particle

What is interesting is that that and the wh- relatives, who and which, occupy overlapping but different portions of the hierarchy. Who and which can relativize anything from subjects to possessors and possibly objects of comparative particles, though whose as the genitive form of which seems a little odd to some, and both sound odd if not outright ungrammatical with comparatives, as in The man than who I’m taller. But that can’t relativize objects of fronted prepositions or anything further down the scale.

Strangely, though, there are things that that can do that who and which can’t. That can sometimes function as a sort of relative adverb, equivalent to the relative adverbs why, where, or when or to which with a preposition. That is, you can say The day that we met, The day when we met, or The day on which we met, but not The day which we met. And which can relativize whole clauses (though some sticklers consider this ungrammatical), while that cannot, as in This author uses restrictive “which,” which bothers me a lot.

So what explains the differences between that and which or who? Well, as I mentioned above, some linguists consider that not a pronoun but a complementizer or conjunction (perhaps a highly pronominal one), making it more akin to the complementizer that, as in He said that relativizers were confusing. And some linguists have also proposed different syntactic structures for restrictive and nonrestrictive clauses, which could account for the limitation of that to restrictive clauses. If that is not a true pronoun but a complementizer, then that could account for its strange distribution. It can’t appear in nonrestrictive clauses, because they require a full pronoun like which or who, and it can’t appear after prepositions, because those constructions similarly require a pronoun. But it can function as a relative adverb, which a regular relative pronoun can’t do.

As I argued in my previous post, it seems that which and that do not occupy separate parts of a single paradigm but are part of two different paradigms that overlap. The differences between them can be characterized in a few different ways, but for some reason, grammarians have seized on the restrictive/nonrestrictive distinction and have written off the rest as idiosyncratic exceptions to the rule or as common errors (when they’ve addressed those points at all).

The proposal to disallow which in restrictive relative clauses, except in the cases where that is ungrammatical—sometimes called Fowler’s rule, though that’s not entirely accurate—is based on the rather trivial observation that all thats are restrictive and that all nonrestrictives are which. It then assumes that the converse is true (or should be) and tries to force all restrictives to be that and all whiches to be nonrestrictive (except for all those pesky exceptions, of course).

Garner calls Fowler’s rule “nothing short of brilliant,”[1] but I must disagree. It’s based on a rather facile analysis followed by some terrible logical leaps. And insisting on following a rule based on bad linguistic analysis is not only not helpful to the reader, it’s a waste of editors’ time. As my last post shows, editors have obviously worked very hard to put the rule into practice, but this is not evidence of its utility, let alone its brilliance. But a linguistic analysis that could account for all of the various differences between the two systems of relativization in English? Now that just might be brilliant.

Sources

Herbert F. W. Stahlke, “Which That,” Language 52, no. 3 (Sept. 1976): 584–610
Johan Van Der Auwera, “Relative That: A Centennial Dispute,” Journal of Linguistics 21, no. 1 (March 1985): 149–79
Gregory R. Guy and Robert Bayley, “On the Choice of Relative Pronouns in English,” American Speech 70, no. 2 (Summer 1995): 148–62
Nigel Fabb, “The Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 26, no. 1 (March 1990): 57–77
Robert D. Borsley, “More on the Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 28, no. 1 (March 1992): 139–48

  1. [1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”

Rules, Regularity, and Relative Pronouns

The other day I was thinking about relative pronouns and how they get so much attention from usage commentators, and I decided I should write a post about them. I was beaten to the punch by Stan Carey, but that’s okay, because I think I’m going to take it in a somewhat different direction. (And anyway, great minds think alike, right? But maybe you should read his post first, along with my previous post on who and that, if you haven’t already.)

I’m not just talking about that and which but also who, whom, and whose, which is technically a relative possessive adjective. Judging by how often relative pronouns are talked about, you’d assume that most English speakers can’t get them right, even though they’re among the most common words in the language. In fact, in my own research for my thesis, I’ve found that they’re among the most frequent corrections made by copy editors.

So what gives? Why are they so hard for English speakers to get right? The distinctions are pretty clear-cut and can be found in a great many usage and writing handbooks. Some commentators even judgementally declare, “There’s a useful distinction here, and it’s lazy or perverse to pretend otherwise.” But is it really useful, and is it really lazy and perverse to disagree? Or is it perverse to try to inflict a bunch of arbitrary distinctions on speakers and writers?

And arbitrary they are. Many commentators act as if the proposed distinctions between all these words would make things tidier and more regular, but in fact it makes the whole system much more complicated. On the one hand, we have the restrictive/nonrestrictive distinction between that and which. On the other hand, we have the animate/inanimate (or human/nonhuman, if you want to be really strict) distinction between who and that/which. And on the other other hand, there’s the subject/object distinction between who and whom. But there’s no subject/object distinction with that or which, except when it’s the object of a preposition—then you have to use which, unless the preposition is stranded, in which case you can use that. And on the final hand, some people have proscribed whose as an inanimate or nonhuman relative possessive adjective, recommending constructions with of which instead, though this rule isn’t as popular, or at least not as frequently talked about, as the others. (How many hands is that? I’ve lost count.)

Simple, right? To make it all a little clearer, I’ve even put it into a nice little table.

The proposed relative pronoun system

This is, in a nutshell, a very lopsided and unusual system. In a comment on my who/that post, Elaine Chaika says, “No natural grammar rule would work that way. Ever.” I’m not entirely convinced of that, because languages can be surprising in the unusual distinctions they make, but I agree that it is at the least typologically unusual.

“But we have to have rules!” you say. “If we don’t, we’ll have confusion!” But we do have rules—just not the ones that are proposed and promoted. The system we really have, in the absence of the prescriptions, is basically a distinction between animate who and inanimate which with that overlaying the two. Which doesn’t make distinctions by case, but who(m) does, though this distinction is moribund and has probably only been kept alive by the efforts of schoolteachers and editors.

Whom is still pretty much required when it immediately follows a preposition, but not when the preposition is stranded. Since preposition stranding is extremely common in speech and increasingly common in writing, we’re seeing less and less of whom in this position. Whose is still a little iffy with inanimate referents, as in The house whose roof blew off, but many people say this is alright. Others prefer of which, though this can be awkward: The house the roof of which blew off.

That is either animate or inanimate—only who/which make that distinction—and can be either subject or object but cannot follow a preposition or function as a possessive adjective or nonrestrictively. If the preposition is stranded, as in The man that I gave the apple to, then it’s still allowed. But there’s no possessive thats, so you have to use whose or of which. Again, it’s clearer in table form:

The natural system of relative pronouns

The linguist Jonathan Hope wrote that several distinguishing features of Standard English give it “a typologically unusual structure, while non-standard English dialects follow the path of linguistic naturalness.” He then muses on the reason for this:

One explanation for this might be that as speakers make the choices that will result in standardisation, they unconsciously tend towards more complex structures, because of their sense of the prestige and difference of formal written language. Standard English would then become a ‘deliberately’ difficult language, constructed, albeit unconsciously, from elements that go against linguistic naturalness, and which would not survive in a ‘natural’ linguistic environment.[1]

It’s always tricky territory when you speculate on people’s unconscious motivations, but I think he’s on to something. Note that while the prescriptions make for a very asymmetrical system, the system that people naturally use is moving towards a very tidy and symmetrical distribution, though there are still a couple of wrinkles that are being worked out.

But the important point is that people already follow rules—just not the ones that some prescriptivists think they should.

  1. [1] “Rats, Bats, Sparrows and Dogs: Biology, Linguistics and the Nature of Standard English,” in The Development of Standard English, 1300–1800, ed. Laura Wright (Cambridge: Cambridge University Press, 2000), 53.

Continua, Planes, and False Dichotomies

On Twitter, Erin Brenner asked, “How about a post on prescriptivism/descriptivism as a continuum rather than two sides? Why does it have to be either/or?” It’s a great question, and I firmly believe that it’s not an either-or choice. However, I don’t actually agree that prescriptivism and descriptivism occupy different points on a continuum, so I hope Erin doesn’t mind if I take this in a somewhat different direction from what she probably expected.

The problem with calling the two part of a continuum is that I don’t believe they’re on the same line. Putting them on a continuum, in my mind, implies that they share a common trait that is expressed to greater or lesser degrees, but the only real trait they share is that they are both approaches to language. But even this is a little deceptive, because one is an approach to studying language, while the other is an approach to using it.

I think the reason why we so often treat it as a continuum is that the more moderate prescriptivists tend to rely more on evidence and less on flat assertions. This makes us think of prescriptivists who rely less on facts and evidence as occupying a point further along the spectrum. But I think this point of view does a disservice to prescriptivism by treating it as the opposite of fact-based descriptivism. This leads us to think that at one end we have the unbiased facts of the language, somewhere in the middle we have opinions based on facts, and at the other end, where undiluted prescriptivism lies, we have opinions that contradict facts. I don’t think this model makes sense or accurately represents prescriptivism, but unfortunately it’s fairly pervasive.

In its most extreme form, we find quotes like this one from Robert Hall, who, in defending the controversial and mostly prescription-free Webster’s Third, wrote: “The function of grammars and dictionaries is to tell the truth about language. Not what somebody thinks ought to be the truth, nor what somebody wants to ram down somebody else’s throat, not what somebody wants to sell somebody else as being the ‘best’ language, but what people actually do when they talk and write. Anything else is not the truth, but an untruth.”[1]

But I think this is a duplicitous argument, especially for a linguist. If prescriptivism is “what somebody thinks ought to be the truth”, then it doesn’t have a truth value, because it doesn’t express a proposition. And although what is is truth, what somebody thinks should be is not its opposite, untruth.

So if descriptivism and prescriptivism aren’t at different points on a continuum, where are they in relation to each other? Well, first of all, I don’t think pure prescriptivism should be identified with evidence-free assertionism, as Eugene Volokh calls it. Obviously there’s a continuum of practice within prescriptivism itself, ranging from evidence-based to assertionist, which means that prescriptivism can’t simply sit at the far end of a descriptivist scale; it must occupy its own continuum or axis, separate from descriptivism.

I envision the two occupying a space something like this:

[Graph: descriptivism and prescriptivism]

Descriptivism is concerned with discovering what language is without assigning value judgements. Linguists feel that whether it’s standard or nonstandard, correct or incorrect by traditional standards, language is interesting and should be studied. That is, they try to stay on the right side of the graph, mapping out human language in all its complexity. Some linguists like Hall get caught up in trying to tear down prescriptivism, viewing it as a rival camp that must be destroyed. I think this is unfortunate, because like it or not, prescriptivism is a metalinguistic phenomenon that at the very least is worthy of more serious study.

Prescriptivism, on the other hand, is concerned with good, effective, or proper language. Prescriptivists try to judge what best practice is and formulate rules to map out what’s good or acceptable. In the chapter “Grammar and Usage” in The Chicago Manual of Style, Bryan Garner says his aim is to guide “writers and editors toward the unimpeachable uses of language” (16th ed., 5.219; 15th ed., 5.201).

Reasonable or moderate prescriptivists try to incorporate facts and evidence from actual usage in their prescriptions, meaning that they try to stay in the upper right of the graph. Some prescriptivists stray into untruth territory on the left and become unreasonable prescriptivists, or assertionists. No amount of evidence will sway them; in their minds, certain usages are just wrong. They make arguments from etymology or from overly literal or logical interpretations of meaning. And quite often, they say something’s wrong just because it’s a rule.

So it’s clearly not an either-or choice between descriptivism and prescriptivism. The only thing that’s not really clear, in my mind, is how much of prescriptivism is reliable. That is, do the prescriptions actually map out something we could call “good English”? Quite a lot of the rules do little beyond serving “as a sign that the writer is unaware of the canons of usage”, to quote the usage entry on hopefully in the American Heritage Dictionary (5th ed.). Linguists have been so preoccupied with trying to debunk or discredit prescriptivism that they’ve never really stopped to investigate whether there’s any value to prescriptivists’ claims. True, there have been a few studies along those lines, but I think they’ve just scratched the surface of what could be an interesting avenue of study. But that’s a topic for another time.

  1. In Harold B. Allen et al., “Webster’s Third New International Dictionary: A Symposium,” Quarterly Journal of Speech 48 (December 1962): 434.


It’s Not Wrong, but You Still Shouldn’t Do It

A couple of weeks ago, in my post “The Value of Prescriptivism,” I mentioned some strange reasoning that I wanted to talk about later—the idea that there are many usages that are not technically wrong, but you should still avoid them because other people think they’re wrong. I used the example of a Grammar Girl post on hopefully wherein she lays out the arguments in favor of disjunct hopefully and debunks some of the arguments against it—and then advises, “I still have to say, don’t do it.” She then adds, however, “I am hopeful that starting a sentence with hopefully will become more acceptable in the future.”

On the face of it, this seems like a pretty reasonable approach. Sometimes the considerations of the reader have to take precedence over the facts of usage. If the majority of your readers will object to your word choice, then it may be wise to pick a different word. But there’s a different way to look at this, which is that the misinformed opinions of a very small but very vocal subset of readers take precedence over the facts and the opinions of others. Arnold Zwicky wrote about this phenomenon a few years ago in a Language Log post titled “Crazies win”.

Addressing split infinitives and the equivocal advice to avoid them unless it’s better not to, Zwicky says that “in practice, [split infinitive as last resort] is scarcely an improvement over [no split infinitives] and in fact works to preserve the belief that split infinitives are tainted in some way.” He then adds that the “only intellectually justifiable advice” is to “say flatly that there’s nothing wrong with split infinitives and you should use them whenever they suit you”. I agree wholeheartedly, and I’ll explain why.

The problem with the it’s-not-wrong-but-don’t-do-it philosophy is that, while it feels like a moderate, open-minded, and more descriptivist approach in theory, it is virtually indistinguishable from the it’s-wrong-so-don’t-do-it philosophy in practice. You can cite all the linguistic evidence you want, but it’s still trumped by the fact that you’d rather avoid annoying that small subset of readers. It pays lip service to the idea of descriptivism informing your prescriptions, but the prescription is effectively the same. All you’ve changed is the justification for avoiding the usage.

Even a more neutral and descriptive piece like this New York Times “On Language” article on singular they ends with a wistful, “It’s a shame that grammarians ever took umbrage at the singular they,” adding, “Like it or not, the universal they isn’t universally accepted — yet. Its fate is now in the hands of the jury, the people who speak the language.” Even though the authors seem to avoid giving direct advice, it’s still implicit in the conclusion. It’s great to inform readers about the history of usage debates, but what they’ll most likely come away with is the conclusion that singular they is wrong—or at least tainted—and that they shouldn’t use it.

The worst thing about this waffly kind of advice, I think, is that it lets usage commentators duck responsibility for influencing usage. They tell you all the reasons why it should be alright to use hopefully or split infinitives or singular they, but then they sigh and put them away in the linguistic hope chest, telling you that you can’t use them yet, but maybe someday. Well, when? If all the usage commentators are saying, “It’s not acceptable yet,” at what point are they going to decide that it suddenly is acceptable? If you always defer to the peevers and crazies, it will never be acceptable (unless they all happen to die off without transmitting their ideas to the next generation).

And furthermore, I’m not sure it’s a worthwhile endeavor to try to avoid offending or annoying anyone in your writing. It reminds me of Aesop’s fable of the man, the boy, and the donkey: people will always find something to criticize, so it’s impossible to behave (or write) in such a way as to always avoid criticism. As the old man at the end says, “Please all, and you will please none.” You can’t please everyone, so you have to make a choice: will you please the small but vocal peevers, or the more numerous reasonable people? If you believe there’s nothing technically wrong with hopefully or singular they, maybe you should stand by those beliefs instead of caving to the critics. And perhaps through your reasonable but firm advice and your own exemplary writing, you’ll help a few of those crazies come around.