Arrant Pedantry


12 Mistakes Nearly Everyone Who Writes About Grammar Mistakes Makes

There are a lot of bad grammar posts in the world. These days, anyone with a blog and a bunch of pet peeves can crank out a click-bait listicle of supposed grammar errors. There’s just one problem—these articles are often full of mistakes of one sort or another themselves. Once you’ve read a few, you start noticing some patterns. Inspired by a recent post titled “Grammar Police: Twelve Mistakes Nearly Everyone Makes”, I decided to make a list of my own.

1. Confusing grammar with spelling, punctuation, and usage. Many people who write about grammar seem to think that grammar means “any sort of rule of language, especially writing”. But strictly speaking, grammar refers to the structural rules of language, namely morphology (basically the way words are formed from roots and affixes), phonology (the system of sounds in a language), and syntax (the way phrases and clauses are formed from words). Most complaints about grammar are really about punctuation, spelling (such as problems with you’re/your and other homophone confusion) or usage (which is often about semantics). This post, for instance, spends two of its twelve points on commas and a third on quotation marks.

2. Treating style choices as rules. This article says that you should always use an Oxford (or serial) comma (the comma before and or or in a list) and that quotation marks should always follow commas and periods, but the latter is true only in most American styles (linguists often put the commas and periods outside quotes, and so do many non-American styles), and the former is only true of some American styles. I may prefer serial commas, but I’m not going to insist that everyone who doesn’t use them is making a mistake. It’s simply a matter of style, and style varies from one publisher to the next.

3. Ignoring register. There’s a time and a place for following the rules, but the writers of these lists typically treat English as though it had only one register: formal writing. They ignore the fact that following the rules in the wrong setting often sounds stuffy and stilted. Formal written English is not the only legitimate form of the language, and the rules of formal written English don’t apply in all situations. Sure, it’s useful to know when to use who and whom, but it’s probably more useful to know that saying To whom did you give the book? in casual conversation will make you sound like a pompous twit.

4. Saying that a disliked word isn’t a word. You may hate irregardless (I do), but that doesn’t mean it’s not a word. If it has its own meaning and you can use it in a sentence, guess what—it’s a word. Flirgle, on the other hand, is not a word—it’s just a bunch of sounds that I strung together in word-like fashion. Irregardless and its ilk may not be appropriate for use in formal registers, and you certainly don’t have to like them, but as Stan Carey says, “‘Not a word’ is not an argument.”

5. Turning proposals into ironclad laws. This one happens more often than you think. A great many rules of grammar and usage started life as proposals that became codified as inviolable laws over the years. The popular that/which rule, which I’ve discussed at length before, began as a proposal—not “everyone gets this wrong” but “wouldn’t it be nice if we made a distinction here?” But nowadays people have forgotten that a century or so ago, this rule simply didn’t exist, and they say things like “This is one of the most common mistakes out there, and understandably so.” (Actually, no, you don’t understand why everyone gets this “wrong”, because you don’t realize that this rule is a relatively recent invention by usage commentators that some copy editors and others have decided to enforce.) It’s easy to criticize people for not following rules that you’ve made up.

6. Failing to discuss exceptions to rules. Invented usage rules often ignore the complexities of actual usage. Lists of rules such as these go a step further and often ignore the complexities of those rules. For example, even if you follow the that/which rule, you need to know that you can’t use that after a preposition or after the demonstrative pronoun that—you have to use a restrictive which. Likewise, the less/fewer rule is usually reduced to statements like “use fewer for things you can count”, which leads to ugly and unidiomatic constructions like “one fewer thing to worry about”. Affect and effect aren’t as simple as some people make them out to be, either; affect is usually a verb and effect a noun, but affect can also be a noun (with stress on the first syllable) referring to the outward manifestation of emotions, while effect can be a verb meaning to cause or to make happen. Sometimes dumbing down rules just makes them dumb.

7. Overestimating the frequency of errors. The writer of this list says that misuse of nauseous is “Undoubtedly the most common mistake I encounter.” This claim seems worth doubting to me; I can’t remember the last time I heard someone say “nauseous”. Even if you consider it a misuse, it’s got to rate pretty far down the list in terms of frequency. This is why linguists like to rely on data for testable claims—because people tend to fall prey to all kinds of cognitive biases such as the frequency illusion.

8. Believing that etymology is destiny. Words change meaning all the time—it’s just a natural and inevitable part of language. But some people get fixated on the original meanings of some words and believe that those are the only correct meanings. For example, they’ll say that you can only use decimate to mean “to destroy one in ten”. This may seem like a reasonable argument, but it quickly becomes untenable when you realize that almost every single word in the language has changed meaning at some point, and that’s just in the few thousand years in which language has been written or can be reconstructed. And sometimes a new meaning is more useful anyway (which is precisely why it displaced an old meaning). As Jan Freeman said, “We don’t especially need a term that means ‘kill one in 10.’”

9. Simply bungling the rules. If you’re going to chastise people for not following the rules, you should know those rules yourself and be able to explain them clearly. You may dislike singular they, for instance, but you should know that it’s not a case of subject-predicate disagreement, as the author of this list claims—it’s an issue of pronoun-antecedent agreement, which is not the same thing. This list says that “‘less’ is reserved for hypothetical quantities”, but this isn’t true either; it’s reserved for noncount nouns, singular count nouns, and plural count nouns that aren’t generally thought of as discrete entities. Use of less has nothing to do with being hypothetical. And this one says that punctuation always goes inside quotation marks. In most American styles, it’s only commas and periods that always go inside. Colons, semicolons, and dashes always go outside, and question marks and exclamation marks only go inside sometimes.

10. Saying that good grammar leads to good communication. Contrary to popular belief, bad grammar (even using the broad definition that includes usage, spelling, and punctuation) is not usually an impediment to communication. A sentence like Ain’t nobody got time for that is quite intelligible, even though it violates several rules of Standard English. The grammar and usage of nonstandard varieties of English are often radically different from Standard English, but different does not mean worse or less able to communicate. The biggest differences between Standard English and all its nonstandard varieties are that the former has been codified and that it is used in all registers, from casual conversation to formal writing. Many of the rules that these lists propagate are really more about signaling to the grammatical elite that you’re one of them—not that this is a bad thing, of course, but let’s not mistake it for something it’s not. In fact, claims about improving communication are often just a cover for the real purpose of these lists, which is . . .

11. Using grammar to put people down. This post sympathizes with someone who worries about being crucified by the grammar police and then says a few paragraphs later, “All hail the grammar police!” In other words, we like being able to crucify those who make mistakes. Then there are the put-downs about people’s education (“You’d think everyone learned this rule in fourth grade”) and more outright insults (“5 Grammar Mistakes that Make You Sound Like a Chimp”). After all, what’s the point in signaling that you’re one of the grammatical elite if you can’t take a few potshots at the ignorant masses?

12. Forgetting that correct usage ultimately comes from users. The disdain for the usage of common people is symptomatic of a larger problem: forgetting that correct usage ultimately comes from the people, not from editors, English teachers, or usage commentators. You’re certainly entitled to have your opinion about usage, but at some point you have to recognize that trying to fight the masses on a particular point of usage (especially if it’s a made-up rule) is like trying to fight the rising tide. Those who have invested in learning the rules naturally feel defensive of them and of the language in general, but you have no more right to the language than anyone else. You can be restrictive if you want and say that Standard English is based on the formal usage of educated writers, but any standard that is based on a set of rules that are simply invented and passed down is ultimately untenable.

And a bonus mistake:

13. Making mistakes themselves. It happens to the best of us. The act of making grammar or spelling mistakes in the course of pointing out someone else’s mistakes even has a name, Muphry’s law. This post probably has its fair share of typos. (If you spot one, feel free to point it out—politely!—in the comments.)

This post also appears on Huffington Post.


My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase and improve my sample size. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones’ > Jones’s), which > that, towards > toward, moving only, and increasing parallelism. These changes were not only numerically the most popular, but they were edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they’re relatively recent inventions of usage writers. I’ve already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.
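
For the curious, here is a minimal sketch of how those edit rates were calculated: the number of times a form was changed divided by the number of times it appeared. The counts in the sketch are hypothetical placeholders, not the actual tallies from my data.

```python
# Minimal sketch of the edit-rate calculation: times a form was changed
# divided by times it appeared. These counts are hypothetical placeholders,
# not the actual tallies from the thesis data.

tallies = {
    # rule: (times the targeted form appeared, times an editor changed it)
    "towards > toward": (10, 8),
    "which > that (restrictive)": (25, 19),
    "Jones' > Jones's": (15, 12),
}

for rule, (appearances, changes) in tallies.items():
    rate = changes / appearances * 100
    print(f"{rule}: changed {changes} of {appearances} times ({rate:.0f}%)")
```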

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which are nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see them tearing down the rules that define it, setting it apart from educated but unedited usage, like that/which and toward/towards. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.


The Enormity of a Usage Problem

Recently on Twitter, Mark Allen wrote, “Despite once being synonyms, ‘enormity’ and ‘enormousness’ are different. Try to keep ‘enormity’ for something evil or outrageous.” I’ll admit right off that this usage problem interests me because I didn’t learn about the distinction until a few years ago. To me, they’re completely synonymous, and the idea of using enormity to mean “an outrageous, improper, vicious, or immoral act” and not “the quality or state of being huge”, as Merriam-Webster defines it, seems almost quaint.

Of course, such usage advice presupposes that people are using the two words synonymously; if they weren’t, there’d be no reason to tell them to keep the words separate, so the assertion that they’re different is really an exhortation to make them different. Given that, I had to wonder how different they really are. I turned to Mark Davies’ Corpus of Contemporary American English to get an idea of how often enormity is used in the sense of great size rather than outrageousness or immorality. I looked at the first hundred results from the keyword-in-context option, which randomly samples the corpus, and tried to determine which of the four Merriam-Webster definitions was being used. For reference, here are the four definitions:

1 : an outrageous, improper, vicious, or immoral act <enormities of state power — Susan Sontag> <enormities too juvenile to mention — Richard Freedman>
2 : the quality or state of being immoderate, monstrous, or outrageous; especially : great wickedness <enormity of the crimes committed during the Third Reich — G. A. Craig>
3 : the quality or state of being huge : immensity <enormity of the universe>
4 : a quality of momentous importance or impact <enormity of the decision>

In some cases it was a tough call; for instance, when someone writes about the enormity of poverty in India, enormity has a negative connotation, but it doesn’t seem right to substitute a word like monstrousness or wickedness. It seems that the author simply means the size of the problem. I tried to use my best judgement based on the context the corpus provides, but in some cases I weaseled out by assigning a particular use to two definitions. Here’s my count:

Sense 1: 1
Sense 2: 19
Senses 2/3: 3
Sense 3: 67
Senses 3/4: 1
Sense 4: 9

By far the most common use is in the sense of “enormousness”; the supposedly correct senses of great wickedness (definitions 1 and 2) are used just under a quarter of the time. So why did Mr. Allen say that enormity and enormousness were once synonyms? Even the Oxford English Dictionary marks the “enormousness” sense as obsolete and says, “Recent examples might perh. be found, but the use is now regarded as incorrect.” Perhaps? It’s clear from the evidence that it’s still quite common—about three times as common as the prescribed “monstrous wickedness” sense.
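
If you want to check my arithmetic, here is a minimal sketch of the tally above. The one assumption is that uses assigned to two senses are split evenly between them, which is my own simplification rather than anything in COCA or Merriam-Webster.

```python
# Rough tally of the 100 sampled COCA hits by Merriam-Webster sense.
# Hits assigned to two senses are split evenly between them here,
# which is a simplifying assumption for the arithmetic, nothing more.

counts = {"1": 1, "2": 19, "2/3": 3, "3": 67, "3/4": 1, "4": 9}

totals = {"1": 0.0, "2": 0.0, "3": 0.0, "4": 0.0}
for senses, n in counts.items():
    parts = senses.split("/")
    for s in parts:
        totals[s] += n / len(parts)

total_hits = sum(counts.values())
for sense, n in sorted(totals.items()):
    print(f"sense {sense}: {n:.1f} of {total_hits} ({n / total_hits:.0%})")

wickedness = totals["1"] + totals["2"]
size = totals["3"]
print(f"'wickedness' senses (1 and 2): {wickedness / total_hits:.0%}")
print(f"'great size' sense (3): {size / total_hits:.0%}, "
      f"about {size / wickedness:.1f}x as common")
```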

It’s true that the sense of immoderateness or wickedness came along before the sense of great size. The first uses as recorded in the OED are in the sense of “a breach of law or morality” (1477), “deviation from moral or legal rectitude” (1480), “something that is abnormal” (a1513), and “divergence from a normal standard or type” (a1538). The sense of “excess in magnitude”—the one that the OED marks as obsolete and incorrect—didn’t come along until 1792. In all these senses the etymology is clear: the word comes from enorm, meaning “out of the norm”.

As is to be expected, Merriam-Webster’s Dictionary of English Usage has an excellent entry on the topic. It notes that many of the uses of enormity considered objectionable carry shades of meaning or connotations not shown by enormousness:

Quite often enormity will be used to suggest a size that is beyond normal bounds, a size that is unexpectedly great. Hence the notion of monstrousness may creep in, but without the notion of wickedness. . . .

In many instances the notion of great size is colored by aspects of the first sense of enormity as defined in Webster’s Second. One common figurative use blends together notions of immoderateness, excess, and monstrousness to suggest a size that is daunting or overwhelming.

Indeed, it’s the blending of senses that made it hard to categorize some of the uses that I came across in COCA. Enormousness does not seem to be a fitting replacement for those blended or intermediate senses, and, as MWDEU notes, it’s never been a popular word anyway. Interestingly, MWDEU also notes that “the reasons for stigmatizing the size sense of enormity are not known.” Perhaps it became rare in the 1800s, when the OED marked it obsolete, and the rule was created before the sense enjoyed a resurgence in the twentieth century. Whatever the reason, I don’t think it makes much sense to condemn the more widely used sense of a word just because it’s newer or was rare at some point in the past. MWDEU sensibly concludes, “We have seen that there is no clear basis for the ‘rule’ at all. We suggest that you follow the writers rather than the critics: writers use enormity with a richness and subtlety that the critics have failed to take account of. The stigmatized sense is entirely standard and has been for more than a century and a half.”


Funner Grammar

As I said in the addendum to my last post, maybe I’m not so ready to abandon the technical definition of grammar. In a recent post on Copyediting, Andrea Altenburg criticized the word funner in an ad for Chuck E. Cheese as “improper grammar”, and my first reaction was “That’s not grammar!”

That’s not entirely accurate, of course, as Matt Gordon pointed out to me on Twitter. The objection to funner was originally grammatical, and the Copyediting post does make an appeal to grammar. The argument goes like this: fun is properly a noun, not an adjective, and as a noun, it can’t take comparative or superlative degrees—no funner or funnest.

This seems like a fairly reasonable argument—if a word isn’t an adjective, it can’t inflect like one—but it isn’t the real argument. First of all, it’s not really true that fun was originally a noun. As Ben Zimmer explains in “Dear Apple: Stop the Funnification”, the noun fun arose in the late seventeenth century and was labeled by Samuel Johnson in the mid-1700s as “a low cant word” of the criminal underworld. But the earliest citation for fun is as a verb, fourteen years earlier.

As Merriam-Webster’s Dictionary of English Usage notes, “A couple [of usage commentators] who dislike it themselves still note how nouns have a way of turning into adjectives in English.” Indeed, this sort of functional shift—also called zero derivation or conversion by linguists because it changes the part of speech without the use of prefixation or suffixation—is quite common in English. English lacks case endings and has little in the way of verbal endings, so it’s quite easy to change a word from one part of speech to another. The transformation of fun from a verb to a noun to an inflected adjective came slowly but surely.

As this great article explains, shifts in function or meaning usually happen in small steps. Once fun was established as a noun, you could say things like We had fun. This is unambiguously a noun—fun is the object of the verb have. But then you get constructions like The party was fun. This is structurally ambiguous—both nouns and adjectives can go in the slot after was.

This paves the way to analyze fun as an adjective. It then moved into attributive use, directly modifying a following noun, as in fun fair. Nouns can do this too, so once again the structure was ambiguous, but it was evidence that fun was moving further in the direction of becoming an adjective. In the twentieth century it started to be used in more unambiguously adjectival roles. MWDEU says that this accelerated after World War II, and Mark Davies’ COHA shows that it especially picked up in the last twenty years.

Once fun was firmly established as an adjective, the inflected forms funner and funnest followed naturally. There are only a handful of hits for either in COCA, which attests to the fact that they’re still fairly new and relatively colloquial. But let’s get back to Altenburg’s post.

She says that fun is defined as a noun and thus can’t be inflected for comparative or superlative forms, but then she admits that dictionaries also define fun as an adjective with the forms funner and funnest. But she waves away these definitions by saying, “However, dictionaries are starting to include more definitions for slang that are still not words to the true copyeditor.”

What this means is that she really isn’t objecting to funner on grammatical grounds (at least not in the technical sense); her argument simply reduces to an assertion that funner isn’t a word. But as Stan Carey so excellently argued, “‘Not a word’ is not an argument”. And even the grammatical objections are eroding; many people now simply assert that funner is wrong, even if they accept fun as an adjective, as Grammar Girl says here:

Yet, even people who accept that “fun” is an adjective are unlikely to embrace “funner” and “funnest.” It seems as if language mavens haven’t truly gotten over their irritation that “fun” has become an adjective, and they’ve decided to dig in their heels against “funner” and “funnest.”

It brings to mind the objection against sentential hopefully. Even though there’s nothing wrong with sentence adverbs or with hopefully per se, it was a new usage that drew the ire of the mavens. The grammatical argument against it was essentially a post hoc justification for a ban on a word they didn’t like.

The same thing has happened with funner. It’s perfectly grammatical in the sense that it’s a well-formed, meaningful word, but it’s fairly new and still highly informal and colloquial. (For the record, it’s not slang, either, but that’s a post for another day.) If you don’t want to use it, that’s your right, but stop saying that it’s not a word.


It’s All Grammar—So What?

It’s a frequent complaint among linguists that laypeople use the term grammar in such a loose and unsystematic way that it’s more or less useless. They say that it’s overly broad, encompassing many different types of rules, and that it allows people to confuse things as different as syntax and spelling. They insist that spelling, punctuation, and ideas such as style or formality are not grammar at all, that grammar is really just the rules of syntax and morphology that define the language.

Arnold Zwicky, for instance, has complained that grammar as it’s typically used refers to nothing more than a “grab-bag of linguistic peeve-triggers”. I think this is an overly negative view; yes, there are a lot of people who peeve about grammar, but I think that most people, when they talk about grammar, are thinking about how to say things well or correctly.

Some people take linguists’ insistence on the narrower, more technical meaning of grammar as a sign of hypocrisy. After all, they say, with something of a smirk, shouldn’t we just accept the usage of the majority? If almost everyone uses grammar in a broad and vague way, shouldn’t we consider that usage standard? Linguists counter that this really is an important distinction, though I think it’s fair to say that they have a personal interest here; they teach grammar in the technical sense and are dismayed when people misunderstand what they do.

I’ve complained about this myself, but I’m starting to wonder whether it’s really something to worry about. (Of course, I’m probably doubly a hypocrite, what with all the shirts I sell with the word grammar on them.) After all, we see similar splits between technical and popular terminology in a lot of other fields, and they seem to get by just fine.

Take the terms fruit and vegetable, for instance. In popular use, fruits are generally sweeter, while vegetables are more savory or bitter. And while most people have probably heard the argument that tomatoes are actually fruits, not vegetables, they might not realize that squash, eggplants, peppers, peas, green beans, nuts, and grains are fruits too, at least by the botanical definition. And vegetable doesn’t even have a botanical definition—it’s just any part of a plant (other than fruits or seeds) that’s edible. It’s not a natural class at all.

In a bit of editorializing, the Oxford English Dictionary adds this note after its first definition of grammar:

As above defined, grammar is a body of statements of fact—a ‘science’; but a large portion of it may be viewed as consisting of rules for practice, and so as forming an ‘art’. The old-fashioned definition of grammar as ‘the art of speaking and writing a language correctly’ is from the modern point of view in one respect too narrow, because it applies only to a portion of this branch of study; in another respect, it is too wide, and was so even from the older point of view, because many questions of ‘correctness’ in language were recognized as outside the province of grammar: e.g. the use of a word in a wrong sense, or a bad pronunciation or spelling, would not have been called a grammatical mistake. At the same time, it was and is customary, on grounds of convenience, for books professedly treating of grammar to include more or less information on points not strictly belonging to the subject.

There are a few points here to consider. The definition of grammar has not been solely limited to syntax and morphology for many years. Once it started branching out into notions of correctness, it made sense to treat grammar, usage, spelling, and pronunciation together. From there it’s a short leap to calling the whole collection grammar, since there isn’t really another handy label. And since few people are taught much in the way of syntax and morphology unless they’re majoring in linguistics, it’s really no surprise that the loose sense of grammar predominates. I’ll admit, however, that it’s still a little exasperating to see lists of grammar rules that everyone gets wrong that are just spelling rules or, at best, misused words.

The root of the problem is that laypeople use words in ways that are useful and meaningful to them, and these ways don’t always jibe with scientific facts. It’s the same thing with grammar; laypeople use it to refer to language rules in general, especially the ones they’re most conscious of, which tend to be the ones that are the most highly regulated—usage, spelling, and style. Again, issues of syntax, morphology, semantics, usage, spelling, and style don’t constitute a natural class, but it’s handy to have a word that refers to the aspects of language that most people are conscious of and concerned with.

I think there still is a problem, though, and it’s that most people generally have a pretty poor understanding of things like syntax, morphology, and semantics. Grammar isn’t taught much in schools anymore, so many people graduate from high school and even college without much of an understanding of grammar beyond spelling and mechanics. I got out of high school without knowing anything more advanced than prepositional phrases. My first grammar class in college was a bit of a shock, because I’d never even learned about things like the passive voice or dependent clauses before that point, so I have some sympathy for those people who think that grammar is mostly just spelling and punctuation with a few minor points of usage or syntax thrown in.

So what’s the solution? Well, maybe I’m just biased, but I think it’s to teach more grammar. I know this is easier said than done, but I think it’s important for people to have an understanding of how language works. A lot of people are naturally interested in or curious about language, and I think we do those students a disservice if all we teach them is never to use infer for imply and to avoid the passive voice. Grammar isn’t just a set of rules telling you what not to do; it’s also a fascinatingly complex and mostly subconscious system that governs the singular human gift of language. Maybe we just need to accept the broader sense of grammar and start teaching people all of what it is.

Addendum: I just came across a blog post criticizing the word funner as bad grammar, and my first reaction was “That’s not grammar!” It’s always easier to preach than to practice, but my reaction has me reconsidering my laissez-faire attitude. While it seems handy to have a catch-all term for language errors, regardless of what type they are, it also seems handy—probably more so—to distinguish between violations of the regulative rules and constitutive rules of language. But this leaves us right where we started.


Rules, Evidence, and Grammar

In case you haven’t heard, it’s National Grammar Day, and that seemed as good a time as any to reflect a little on the role of evidence in discussing grammar rules. (Goofy at Bradshaw of the Future apparently had the same idea.) A couple of months ago, Geoffrey Pullum made the argument in this post on Lingua Franca that it’s impossible to talk about what’s right or wrong in language without considering the evidence. Is singular they grammatical and standard? How do you know?

For most people, I think, the answer is pretty simple: you look it up in a source that you trust. If the source says it’s grammatical or correct, it is. If it doesn’t, it isn’t. Singular they is wrong because many authoritative sources say it is. End of story. And if you try to argue that the sources aren’t valid or reliable, you’re labeled an anything-goes type who believes we should just toss all the rules out the window and embrace linguistic anarchy.

The question is, where did these sources get their authority to say what’s right and wrong?

That is, when someone says that you should never use they as a singular pronoun or start a sentence with hopefully or use less with count nouns, why do you suppose that the rules they put forth are valid? The rules obviously haven’t been inscribed on stone tablets by the finger of the Lord, but they have to come from somewhere. Every language is different, and languages are constantly changing, so I think we have to recognize that there is no universal, objective truth when it comes to grammar and usage.

David Foster Wallace apparently fell into the trap of thinking that there was, unfortunately. In his famous Harper’s article “Tense Present: Democracy, English, and the Wars over Usage,” he quotes the introduction to The American College Dictionary, which says, “A dictionary can be an ‘authority’ only in the sense in which a book of chemistry or of physics or of botany can be an ‘authority’: by the accuracy and the completeness of its record of the observed facts of the field examined, in accord with the latest principles and techniques of the particular science.”

He retorts,

This is so stupid it practically drools. An “authoritative” physics text presents the results of physicists’ observations and physicists’ theories about those observations. If a physics textbook operated on Descriptivist principles, the fact that some Americans believe that electricity flows better downhill (based on the observed fact that power lines tend to run high above the homes they serve) would require the Electricity Flows Better Downhill Theory to be included as a “valid” theory in the textbook—just as, for Dr. Fries, if some Americans use infer for imply, the use becomes an ipso facto “valid” part of the language.

The irony of his first sentence is almost overwhelming. Physics is a set of universal laws that can be observed and tested, and electricity works regardless of what anyone believes. Language, on the other hand, is quite different. In fact, Wallace tacitly acknowledges the difference—without explaining his apparent contradiction—immediately after: “It isn’t scientific phenomena they’re tabulating but rather a set of human behaviors, and a lot of human behaviors are—to be blunt—moronic. Try, for instance, to imagine an ‘authoritative’ ethics textbook whose principles were based on what most people actually do.”[1]

Now here he hits on an interesting question. Any argument about right or wrong in language ultimately comes down to one of two options: it’s wrong because it’s absolutely, objectively wrong, or it’s wrong because arbitrary societal convention says it’s wrong. The former is untenable, but the latter doesn’t give us any straightforward answers. If there is no objective truth in usage, then how do we know what’s right and wrong?

Wallace tries to make the argument about ethics; sloppy language leads to real problems like people accidentally eating poison mushrooms. But look at his gargantuan list of peeves and shibboleths on the first page of the article. How many of them lead to real ethical problems? Does singular they pose any kind of ethical problem? What about sentential hopefully or less with count nouns? I don’t think so.

So if there’s no ethical problem with disputed usage, then we’re still left with the question, what makes it wrong? Here we get back to Pullum’s attempt to answer the question: let’s look at the evidence. And, because we can admit, like Wallace, that some people’s behavior is moronic, let’s limit ourselves to looking at the evidence from those speakers and writers whose language can be said to be most standard. What we find even then is that a lot of the usage and grammar rules that have been put forth, from Bishop Robert Lowth to Strunk and White to Bryan Garner, don’t jibe with actual usage.

Edward Finegan seizes on this discrepancy in an article from a few years back. In discussing sentential hopefully, he quotes Garner as saying that it is “all but ubiquitous—even in legal print. Even so, the word received so much negative attention in the 1970s and 1980s that many writers have blacklisted it, so using it at all today is a precarious venture. Indeed, careful writers and speakers avoid the word even in its traditional sense, for they’re likely to be misunderstood if they use it in the old sense.”[2] Finegan says, “I could not help but wonder how a reflective and careful analyst could concede that hopefully is all but ubiquitous in legal print and claim in the same breath that careful writers and speakers avoid using it.”[3]

The problem when you start questioning the received wisdom on grammar and usage is that you make a lot of people very angry. In a recent conversation on Twitter, Mignon Fogarty, aka Grammar Girl, said, “You would not believe (or maybe you would) how much grief I’m getting for saying ‘data’ can sometimes be singular.” I responded, “Sadly, I can. For some people, grammar is more about cherished beliefs than facts, and they don’t like having them challenged.” They don’t want to hear arguments about authority and evidence and deriving rules from what educated speakers actually use. They want to believe that there are deeper truths that justify their preferences and peeves, and that’s probably not going to change anytime soon. But for now, I’ll keep trying.

  1. [1] David Foster Wallace, “Tense Present: Democracy, English, and the Wars over Usage,” Harper’s Monthly, April 2001, 47.
  2. [2] Bryan A. Garner, A Dictionary of Modern Legal Usage, 2nd ed. (New York: Oxford University Press, 1995).
  3. [3] Edward Finegan, “Linguistic Prescription: Familiar Practices and New Perspectives,” Annual Review of Applied Linguistics 23 (2003): 216.


However

Several weeks ago, Bob Scopatz asked in a comment about the word however, specifically whether it should be preceded by a comma or a semicolon when it’s used between two clauses. He says that a comma always seems fine to him, but apparently this causes people to look askance at him.

The rule here is pretty straightforward, and Purdue’s Online Writing Lab has a nice explanation. Independent clauses joined by coordinating conjunctions are separated by a comma; independent clauses that are not joined by coordinating conjunctions or are joined by what OWL calls “conjunctive adverbs” require a semicolon.

I’ve also seen the terms “transitional adverb” and “transitional phrase,” though the latter usually refers to multiword constructions like as a result, for example, and so on. These terms are probably more accurate since (I believe) words and phrases like however are not, strictly speaking, conjunctions. Though they do show a relationship between two clauses, that relationship is more semantic or rhetorical than grammatical.

Since however falls into this group, it should be preceded by a semicolon, though it can also start a new sentence. Grammar-Monster.com has some nice illustrative examples:

I am leaving on Tuesday, however, I will be back on Wednesday to collect my wages.
I am leaving on Tuesday; however, I will be back on Wednesday to collect my wages.
I am leaving on Tuesday. However, I will be back on Wednesday to collect my wages.

The first example is incorrect, while the latter two are correct. Note that “however” is also followed by a comma. (But would also work here, though in that case it would be preceded by a comma and not followed by one.)

Bob also mentioned that he sometimes starts a sentence with “however,” and this usage is a little more controversial. Strunk & White and others forbade however in sentence- or clause-initial position, sometimes with the argument that in this position it can only mean “in whatever way” or “to whatever extent.”

It’s true that however is sometimes used this way, as in “However it is defined, the middle class is standing on shaky ground,” to borrow an example from COCA. But this is clearly different from the Grammar-Monster sentences above. In those, the punctuation—namely the comma after “however”—indicates that this is not the “in whatever way” however, but rather the “on the contrary” or “in spite of that” one.

Some editors fastidiously move sentence-initial “howevers” to a position later in the sentence, as in I will be back on Wednesday, however, to collect my wages. As long as it’s punctuated correctly, it’s fine in either location, so there’s no need to move it. But note that when it occurs in the middle of a clause, it’s surrounded by commas.

It’s possible that sentence-initial however could be ambiguous without the following comma, but even then the confusion is likely to be momentary. I don’t see this as a compelling reason to avoid sentence-initial however, though I do believe it’s important to punctuate it properly, with both a preceding semicolon or period and a following comma, to avoid tripping up the reader.

In a nutshell, however is an adverb, not a true conjunction, so it can’t join two independent clauses with just a comma. You can either join those clauses with a semicolon or separate them with a period. But either way, however should be set off by commas. When it’s in the middle of a clause, the commas go on both sides; when it’s at the beginning of a clause, it just needs a following comma. Hopefully this will help Bob (and others) stop getting those funny looks.


Comprised of Fail

A few days ago on Twitter, John McIntyre wrote, “A reporter has used ‘comprises’ correctly. I feel giddy.” And a couple of weeks ago, Nancy Friedman tweeted, “Just read ‘is comprised of’ in a university’s annual report. I give up.” I’ve heard editors confess that they can never remember how to use comprise correctly and always have to look it up. And recently I spotted a really bizarre use in Wired, complete with a subject-verb agreement problem: “It is in fact a Meson (which comprise of a quark and an anti-quark).” So what’s wrong with this word that makes it so hard to get right?

I did a project on “comprised of” for my class last semester on historical changes in American English, and even though I knew it was becoming increasingly common even in edited writing, I was still surprised to see the numbers. For those unfamiliar with the rule, it’s actually pretty simple: the whole comprises the parts, and the parts compose the whole. This makes the two words reciprocal antonyms, meaning that they describe opposite sides of a relationship, like buy/sell or teach/learn. Another way to look at it is that comprise essentially means “to be composed of,” while “compose” means “to be comprised in” (note: in, not of). But increasingly, comprise is being used not as an antonym for compose, but as a synonym.

It’s not hard to see why it’s happened. They’re extremely similar in sound, and each is equivalent to the passive form of the other. When “comprises” means the same thing as “is composed of,” it’s almost inevitable that some people are going to conflate the two and produce “is comprised of.” According to the rule, any instance of “comprised of” is an error that should probably be replaced with “composed of.” Regardless of the rule, this usage has risen sharply in recent decades, though it’s still dwarfed by “composed of.” (Though “composed of” appears to be in serious decline; I have no idea why.) The following chart shows its frequency in COHA and the Google Books Corpus.

frequency of "comprised of" and "composed of" in COHA and Google Books

Though it still looks pretty small on the chart, “comprised of” now occurs anywhere from 21 percent as often as “composed of” (in magazines) to a whopping 63 percent as often (in speech) according to COCA. (It’s worth noting, of course, that the speech genre in COCA is composed of a lot of news and radio show transcripts, so even though it’s unscripted, it’s not exactly reflective of typical speech.)
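
To make the comparison concrete, here is a minimal sketch of how such a ratio is computed from per-million-word frequencies. The hit counts and corpus sizes below are placeholders chosen to roughly match the quoted percentages, not the actual COCA figures.

```python
# Minimal sketch of the genre comparison: normalize raw hits to a rate per
# million words, then divide. The counts and corpus sizes are placeholders
# chosen to roughly match the quoted percentages, not the actual COCA figures.

genres = {
    # genre: (hits for "comprised of", hits for "composed of", words in genre)
    "spoken":   (630, 1000, 95_000_000),
    "magazine": (210, 1000, 95_000_000),
}

def per_million(hits: int, corpus_words: int) -> float:
    return hits / corpus_words * 1_000_000

for genre, (comprised, composed, size) in genres.items():
    ratio = per_million(comprised, size) / per_million(composed, size)
    print(f"{genre}: 'comprised of' occurs {ratio:.0%} as often as 'composed of'")
```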

frequency of "comprised of" by genre

What I find most striking about this graph is the frequency of “comprised of” in academic writing. It is often held that standard English is the variety of English used by the educated elite, especially in writing. In this case, though, academics are leading the charge in the spread of a nonstandard usage. Like it or not, it’s becoming increasingly common, and the prestige lent to it by its academic feel is certainly a factor.

But it’s not just “comprised of” that’s the problem; remember that the whole comprises the parts, which means that comprise should be used with singular subjects and plural objects (or multiple subjects with multiple respective objects, as in The fifty states comprise some 3,143 counties; each individual state comprises many counties). So according to the rule, not only is The United States is comprised of fifty states an error, but so is The fifty states comprise the United States.

It can start to get fuzzy, though, when either the subject or the object is a mass or collective noun, as in “youngsters comprise 17% of the continent’s workforce,” to take an example from Mark Davies’ COCA. This kind of error may be harder to catch, because the relationship between parts and whole is a little more abstract.

And with all the data above, it’s important to remember that we’re seeing things that have made it into print. As I said above, many editors have to look up the rule every time they encounter a form of “comprise” in print, meaning that they’re more liable to make mistakes. It’s possible that many more editors don’t even know that there is a rule, and so they read past it without a second thought.

Personally, I gave up on the rule a few years ago when one day it struck me that I couldn’t recall the last time I’d seen it used correctly in my editing. It’s never truly ambiguous (though if you can find an ambiguous example that doesn’t require willful misreading, please share), and it’s safe to assume that if nearly all of our authors who use comprise do so incorrectly, then most of our readers probably won’t notice, because they think that’s the correct usage.

And who’s to say it isn’t correct now? When it’s used so frequently, especially by highly literate and highly educated writers and speakers, I think you have to recognize that the rule has changed. To insist that it’s always an error, no matter how many people use it, is to deny the facts of usage. Good usage has to have some basis in reality; it can’t be grounded only in the ipse dixits of self-styled usage authorities.

And of course, it’s worth noting that the “traditional” meaning of comprise is really just one in a long series of loosely related meanings the word has had since it was first borrowed into English from French in the 1400s, including “to seize,” “to perceive or comprehend,” “to bring together,” and “to hold.” Perhaps the new meaning of “compose” (which in reality is over two hundred years old at this point) is just another step in the evolution of the word.


More on That

As I said in my last post, I don’t think the distribution of that and which is adequately explained by the restrictive/nonrestrictive distinction. It’s true that nearly all thats are restrictive (with a few rare exceptions), but it’s not true that all restrictive relative pronouns are thats and that all whiches are nonrestrictive, even when you follow the traditional rule. In some cases that is strictly forbidden, and in other cases it is disfavored to varying degrees. Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in your analysis.

In researching the topic for this post, I’ve learned a couple of things: (1) I don’t know syntax as well as I should, and (2) the behavior of relatives in English, particularly that, is far more complex than most editors or pop grammarians realize. First of all, there’s apparently been a century-long argument over whether that is even a relative pronoun or actually some sort of relativizing conjunction or particle. (Some linguists seem to prefer the latter, but I won’t wade too deep into that debate.) Previous studies have looked at multiple factors to explain the variation in relativizers, including the animacy of the referent, the distance between the pronoun and its referent, the semantic role of the relative clause, and the syntactic role of the referent.

It’s often noted that that can’t follow a preposition and that it doesn’t have a genitive form of its own (it must use either whose or of which), but no usage guide I’ve seen ever makes mention of the fact that this pattern follows the accessibility hierarchy. That is, in a cross-linguistic analysis, linguists have found an order to the way in which relative clauses are formed. Some languages can only relativize subjects, others can do subjects and verbal objects, yet others can do subjects, verbal objects, and oblique objects (like the objects of prepositions), and so on. For any allowable position on the hierarchy, all positions to the left are also allowable. The hierarchy goes something like this:

subject ≥ direct object ≥ indirect object ≥ object of stranded preposition ≥ object of fronted preposition ≥ possessor noun phrase ≥ object of comparative particle

What is interesting is that that and the wh- relatives, who and which, occupy overlapping but different portions of the hierarchy. Who and which can relativize anything from subjects to possessors and possibly objects of comparative particles, though whose as the genitive form of which seems a little odd to some, and both sound odd if not outright ungrammatical with comparatives, as in The man than who I’m taller. But that can’t relativize objects of fronted prepositions or anything further down the scale.
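
To make the constraint easier to see, here is a minimal sketch that encodes the hierarchy as an ordered list and gives each relativizer a left-anchored slice of it. The cutoffs are simplified from the description above, and the labels are just ones I’ve chosen for illustration.

```python
# Minimal sketch of the accessibility hierarchy as an ordered list, with
# simplified, left-anchored coverage for each relativizer as described above.
# The exact cutoffs are illustrative; some of them are fuzzy in practice.

HIERARCHY = [
    "subject",
    "direct object",
    "indirect object",
    "object of stranded preposition",
    "object of fronted preposition",
    "possessor noun phrase",
    "object of comparative particle",
]

# Each relativizer covers a contiguous slice starting from the left, so if a
# position is allowed, every position to its left is allowed too.
COVERAGE = {
    "that": set(HIERARCHY[:4]),       # stops before fronted prepositions
    "which/who": set(HIERARCHY[:6]),  # comparatives are marginal at best
}

def can_relativize(relativizer: str, position: str) -> bool:
    return position in COVERAGE[relativizer]

print(can_relativize("that", "object of fronted preposition"))  # False
print(can_relativize("which/who", "possessor noun phrase"))     # True
```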

Strangely, though, there are things that that can do that who and which can’t. That can sometimes function as a sort of relative adverb, equivalent to the relative adverbs why, where, or when or to which with a preposition. That is, you can say The day that we met, The day when we met, or The day on which we met, but not The day which we met. And which can relativize whole clauses (though some sticklers consider this ungrammatical), while that cannot, as in This author uses restrictive “which,” which bothers me a lot.

So what explains the differences between that and which or who? Well, as I mentioned above, some linguists consider that not a pronoun but a complementizer or conjunction (perhaps a highly pronominal one), making it more akin to the complementizer that, as in He said that relativizers were confusing. And some linguists have also proposed different syntactic structures for restrictive and nonrestrictive clauses, which could account for the limitation of that to restrictive clauses. If that is not a true pronoun but a complementizer, then that could account for its strange distribution. It can’t appear in nonrestrictive clauses, because they require a full pronoun like which or who, and it can’t appear after prepositions, because those constructions similarly require a pronoun. But it can function as a relative adverb, which a regular relative pronoun can’t do.

As I argued in my previous post, it seems that which and that do not occupy separate parts of a single paradigm but are part of two different paradigms that overlap. The differences between them can be characterized in a few different ways, but for some reason, grammarians have seized on the restrictive/nonrestrictive distinction and have written off the rest as idiosyncratic exceptions to the rule or as common errors (when they’ve addressed those points at all).

The proposal to disallow which in restrictive relative clauses, except in the cases where that is ungrammatical—sometimes called Fowler’s rule, though that’s not entirely accurate—is based on the rather trivial observation that all thats are restrictive and that all nonrestrictives are which. It then assumes that the converse is true (or should be) and tries to force all restrictives to be that and all whiches to be nonrestrictive (except for all those pesky exceptions, of course).

Garner calls Fowler’s rule “nothing short of brilliant,”[1] but I must disagree. It’s based on a rather facile analysis followed by some terrible logical leaps. And insisting on following a rule based on bad linguistic analysis is not only not helpful to the reader, it’s a waste of editors’ time. As my last post shows, editors have obviously worked very hard to put the rule into practice, but this is not evidence of its utility, let alone its brilliance. But a linguistic analysis that could account for all of the various differences between the two systems of relativization in English? Now that just might be brilliant.

Sources

Herbert F. W. Stahlke, “Which That,” Language 52, no. 3 (Sept. 1976): 584–610
Johan Van Der Auwera, “Relative That: A Centennial Dispute,” Journal of Linguistics 21, no. 1 (March 1985): 149–79
Gregory R. Guy and Robert Bayley, “On the Choice of Relative Pronouns in English,” American Speech 70, no. 2 (Summer 1995): 148–62
Nigel Fabb, “The Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 26, no. 1 (March 1990): 57–77
Robert D. Borsley, “More on the Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 28, no. 1 (March 1992): 139–48

  1. [1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”


Till Kingdom Come

The other day on Twitter, Bryan A. Garner posted, “May I ask a favor? Would all who read this please use the prep. ‘till’ in a tweet? Not till then will we start getting people used to it.” I didn’t help out, partly because I hate pleas of the “Repost this if you agree!” variety and partly because I knew it would be merely a symbolic gesture. Even if all of Garner’s followers and all of their followers used “till” in a tweet, it wouldn’t even be a blip on the radar of usage.

But it did get me thinking about the word till and the fact that a lot of people seem to regard it as incorrect and forms like 'til as correct. The assumption for many people seems to be that it’s a shortened form of until, so it requires an apostrophe to signal the omission. Traditionalists, however, know that although the two words are related, till actually came first, appearing in the language about four hundred years before until.

Both words came into English via Old Norse, where the preposition til had replaced the preposition to. (As I understand it, modern-day North Germanic languages like Swedish and Danish still use it this way.) Despite their similar appearances, to and till are not related; till comes from a different root meaning ‘end’ or ‘goal’ (compare modern German Ziel ‘goal’). Norse settlers brought the word til with them when they started raiding and colonizing northeastern Britain in the 800s.

There was also a compound form, until, from und + til. Und was another Old Norse preposition deriving from the noun und, which is cognate with the English word end. Till and until have been more or less synonymous throughout their history in English, despite their slightly different forms. And as a result of the haphazard process of spelling standardization in English, we ended up with two ls on till but only one on until. The apostrophized form 'til is an occasional variant that shows up far more in unedited than edited writing. Interestingly, the OED’s first citation for 'til comes from P. G. Perrin’s An Index to English in 1939: “Till, until, (’til), these three words are not distinguishable in meaning. Since ’til in speech sounds the same as till and looks slightly odd on paper, it may well be abandoned.”

Mark Davies’ Corpus of Historical American English, however, tells a slightly different story. It shows a slight increase in 'til since the mid-twentieth century, though it has been declining again slightly in the last thirty years. And keep in mind that these numbers come from a corpus of edited writing drawn from books, magazines, and newspapers. It may well be increasing much faster in unedited writing, with only the efforts of copy editors keeping it (mostly) out of print. This chart shows the relative proportions of the three forms—that is, the proportion of each compared to the total of all three.

[Chart: relative proportions of till, until, and ’til]
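
For what it’s worth, here is a minimal sketch of that proportion calculation, with each form’s count divided by the combined total for its decade. The counts are made-up placeholders, not the real COHA figures.

```python
# Minimal sketch: each form's share of the combined total of all three forms,
# decade by decade. The counts are made-up placeholders, not real COHA figures.

counts_by_decade = {
    "1940s": {"till": 400, "until": 3500, "'til": 20},
    "2000s": {"till": 150, "until": 4200, "'til": 60},
}

for decade, counts in counts_by_decade.items():
    total = sum(counts.values())
    shares = ", ".join(f"{form}: {n / total:.1%}" for form, n in counts.items())
    print(f"{decade}: {shares}")
```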

As Garner laments, till is becoming less and less common in writing and may all but disappear within the next century, though predicting the future of usage is always a guessing game, even with clear trends like this. Sometimes they spontaneously reverse, and it’s often not clear why. But why is till in decline? I honestly don’t know for sure, but I suspect it stems from either the idea that longer words are more formal or the perception that it’s a shortened form of until. Contractions and clipped forms are generally avoided in formal writing, so this could be driving till out of use.

Note that we don’t have this problem with to and unto, probably because to is one of the most common words in the language, occurring about 9,000 times per million words in the last decade in COHA. By comparison, unto occurs just under 70 times per million words. There’s no uncertainty or confusion about the use or spelling of to. We tend to be less sure of the meanings and spellings of less frequent words, and this uncertainty can lead to avoidance. If you don’t know which form is right, it’s easy to just not use it.

At any rate, many people are definitely unfamiliar with till and may well think that the correct form is 'til, as Gabe Doyle of Motivated Grammar did in this post four years ago, though he checked his facts and found that his original hunch was wrong.

He’s far from the only person who thought that 'til was correct. When my then-fiancee and I got our wedding announcements printed over eight years ago, the printer asked us if we really wanted “till” instead of “'til” (“from six till eight that evening”). I told him that yes, it was right, and he kind of shrugged and dropped the point, though I got the feeling he still thought I was wrong. He probably didn’t want to annoy a paying customer, though.

And though this is anecdotal and possibly falls prey to the recency illusion, it seems that 'til is on the rise in signage (frequently as ‘til, with a single opening quotation mark rather than an apostrophe), and I even spotted a til' the other day. (I wish I’d thought to get a picture of it.)

I think the evidence is pretty clear that, barring some amazing turnaround, till is dying. It’s showing up less in print, where it’s mostly been replaced by until, and the traditionally incorrect 'til may be hastening its death as people become unsure of which form is correct or even become convinced that till is wrong and 'til is right. I’ll keep using till myself, but I’m not holding out hope for a revival. Sorry, Garner.
