Arrant Pedantry

Mother’s Day

Today is officially Mother’s Day, and as with other holidays with possessive or plural endings, there’s a lot of confusion about what the correct form of the name is. The creator of Mother’s Day in the United States, Anna Jarvis, specifically stated that it should be a singular possessive to focus on individual mothers rather than mothers in general. But as sociolinguist Matt Gordon noted on Twitter, “that logic is quite peccable”; though it’s a nice sentiment, it’s grammatical nonsense.

English has a singular possessive and a plural possessive; it does not have a technically-plural-but-focusing-on-the-singular possessive. Though Jarvis may have wanted everyone to focus on their respective mothers, the fact is that it still celebrates all mothers. If I told you that tomorrow was Jonathon’s Day, you’d assume that it’s my day, not that it’s a day for all Jonathons who just happen to be celebrating separately. That’s simply not how grammatical number works in English. If you have more than one thing, it’s plural, even if you’re considering those things individually.

This isn’t the only holiday that employs some grammatically suspect reasoning in its official spelling—Veterans Day officially has no apostrophe because the day doesn’t technically belong to veterans. But this is silly—apostrophes are used for lots of things beyond simple ownership.

It could be worse, though. The US Board on Geographic Names discourages possessives altogether, though it allows the possessive s without an apostrophe. The peak named for Pike is Pikes Peak, which is worse than grammatical nonsense—it’s an officially enshrined error. The worst part is that there isn’t even a reason given for this policy, though presumably it’s because they don’t want to indicate private ownership of geographical features. (Again, the apostrophe doesn’t necessarily show ownership.) But in this case you can’t even argue that Pike is a plural attributive noun, because there’s only one Pike for whom the peak is named.

The sad truth is that the people in charge of deciding where or whether to put apostrophes in things don’t always have the best grasp of grammar, and they don’t always think to consult someone who does. But even if the grammar of Mother’s Day makes me roll my eyes, I can still appreciate the sentiment. In the end, arguing about the placement of an apostrophe is a quibble. What matters most is what the day really means. And this day is for you, Mom.

Why Teach Grammar?

Today is National Grammar Day, and I’ve been thinking a lot lately about what grammar is and why we study it. Last week in the Atlantic, Michelle Navarre Cleary wrote that we should do away with diagramming sentences and other explicit grammar instruction. Her argument, in a nutshell, is that grammar instruction not only doesn’t help students write better, but it actually teaches them to hate writing.

It’s really no surprise—as an editor and a student of language, I’ve run into a lot of people who never learned the difference between a preposition and a participle and are insecure about their writing or their speech. I once had a friend who was apparently afraid to talk to me because she thought I was silently correcting everything she said. When I found out about it, I reassured her that I wasn’t; not only had I never noticed anything wrong with the way she talked, but I don’t worry about correcting people unless they’re paying me for it. But I worried that this was how people saw me: a know-it-all jerk who silently judged everyone else for their errors. I love language, and it saddened me to think that there are people who find it not fascinating but frustrating.

But given the state of grammar instruction in the United States today, it’s not hard to see why a lot of people feel this way. I learned hardly any sentence diagramming until I got to college, and my public school education in grammar effectively stopped in eighth or ninth grade when I learned what a prepositional phrase was. In high school, our grammar work consisted of taking sentences like “He went to the store” and changing them to “Bob went to the store” (because you can’t use he without an antecedent; never mind that such a sentence would not occur in isolation and would surely make sense in context).

Meanwhile, many students are marked down on their papers for supposed grammar mistakes (which are usually matters of spelling, punctuation, or style): don’t use contractions, don’t start a sentence with a conjunction, don’t use any form of the verb be, don’t write in the first person, don’t refer to yourself in the third person, don’t use the passive voice, and on and on. Of course most students are going to come out of writing class feeling insecure when they’re punished for failing to master rules that don’t make sense.

And it doesn’t help that there’s often a disconnect between what the rules say good writing is and what it actually is. Good writing breaks these rules all the time, and following all the rules does little if anything to make bad writing good. We know the usual justifications: students have to master the basics before they can become experts, and once they become experts, they’ll know when it’s okay to break the rules.

But these justifications presuppose that teaching students not to start a sentence with a conjunction or not to use the passive voice has something to do with good writing, when it simply doesn’t. I’ve said before that we don’t consider whether we’re giving students training wheels or just putting sticks in their spokes. Interestingly, Cleary uses a similar argument in her Atlantic piece: “Just as we teach children how to ride bikes by putting them on a bicycle, we need to teach students how to write grammatically by letting them write.”

I’m still not convinced, though, that learning grammar has much at all to do with learning to write. Having a PhD in linguistics doesn’t mean you know how to write well, and being an expert writer doesn’t mean you know anything about syntax and morphology beyond your own native intuition. And focusing on grammar instruction may distract from the more fundamental writing issues of rhetoric and composition. So why worry about grammar at all if it has nothing to do with good writing? Language Log’s Mark Liberman said it well:

We don’t put chemistry into the school curriculum because it will make students better cooks, or even because it might make them better doctors, much less because we need a relatively small number of professional chemists. We believe (I hope) that a basic understanding of atoms and molecules is knowledge that every citizen of the modern world should have.

It may seem like a weak defense in a world that increasingly focuses on marketable skills, but it’s maybe the best justification we have. Language is amazing; no other animal has the capacity for expression that we do. Language is so much more than a grab-bag of peeves and strictures to inflict on freshman writing students; it’s a fundamental part of who we are as a species. Shouldn’t we expect an educated person to know something about it?

So yes, I think we should teach grammar, not because it will help people write better, but simply because it’s interesting and worth knowing about. But we need to recognize that it doesn’t belong in the same class as writing or literature; though it certainly has connections to both, linguistics is a separate field and should be treated as such. And we need to teach grammar not as something to hate or even as something to learn as a means to an end, but as a fascinating and complex system to be discovered and explored for its own sake. In short, we need to teach grammar as something to love.

12 Mistakes Nearly Everyone Who Writes About Grammar Mistakes Makes

There are a lot of bad grammar posts in the world. These days, anyone with a blog and a bunch of pet peeves can crank out a click-bait listicle of supposed grammar errors. There’s just one problem—these articles are often full of mistakes of one sort or another themselves. Once you’ve read a few, you start noticing some patterns. Inspired by a recent post titled “Grammar Police: Twelve Mistakes Nearly Everyone Makes”, I decided to make a list of my own.

1. Confusing grammar with spelling, punctuation, and usage. Many people who write about grammar seem to think that grammar means “any sort of rule of language, especially writing”. But strictly speaking, grammar refers to the structural rules of language, namely morphology (basically the way words are formed from roots and affixes), phonology (the system of sounds in a language), and syntax (the way phrases and clauses are formed from words). Most complaints about grammar are really about punctuation, spelling (such as problems with you’re/your and other homophone confusion), or usage (which is often about semantics). This post, for instance, spends two of its twelve points on commas and a third on quotation marks.

2. Treating style choices as rules. This article says that you should always use an Oxford (or serial) comma (the comma before and or or in a list) and that quotation marks should always follow commas and periods, but the latter is true only in most American styles (linguists often put the commas and periods outside quotes, and so do many non-American styles), and the former is only true of some American styles. I may prefer serial commas, but I’m not going to insist that everyone who doesn’t use them is making a mistake. It’s simply a matter of style, and style varies from one publisher to the next.

3. Ignoring register. There’s a time and a place for following the rules, but the writers of these lists typically treat English as though it had only one register: formal writing. They ignore the fact that following the rules in the wrong setting often sounds stuffy and stilted. Formal written English is not the only legitimate form of the language, and the rules of formal written English don’t apply in all situations. Sure, it’s useful to know when to use who and whom, but it’s probably more useful to know that saying To whom did you give the book? in casual conversation will make you sound like a pompous twit.

4. Saying that a disliked word isn’t a word. You may hate irregardless (I do), but that doesn’t mean it’s not a word. If it has its own meaning and you can use it in a sentence, guess what—it’s a word. Flirgle, on the other hand, is not a word—it’s just a bunch of sounds that I strung together in word-like fashion. Irregardless and its ilk may not be appropriate for use in formal registers, and you certainly don’t have to like them, but as Stan Carey says, “‘Not a word’ is not an argument.”

5. Turning proposals into ironclad laws. This one happens more often than you think. A great many rules of grammar and usage started life as proposals that became codified as inviolable laws over the years. The popular that/which rule, which I’ve discussed at length before, began as a proposal—not “everyone gets this wrong” but “wouldn’t it be nice if we made a distinction here?” But nowadays people have forgotten that a century or so ago, this rule simply didn’t exist, and they say things like “This is one of the most common mistakes out there, and understandably so.” (Actually, no, you don’t understand why everyone gets this “wrong”, because you don’t realize that this rule is a relatively recent invention by usage commentators that some copy editors and others have decided to enforce.) It’s easy to criticize people for not following rules that you’ve made up.

6. Failing to discuss exceptions to rules. Invented usage rules often ignore the complexities of actual usage. Lists of rules such as these go a step further and often ignore the complexities of those rules. For example, even if you follow the that/which rule, you need to know that you can’t use that after a preposition or after the demonstrative pronoun that—you have to use a restrictive which. Likewise, the less/fewer rule is usually reduced to statements like “use fewer for things you can count”, which leads to ugly and unidiomatic constructions like “one fewer thing to worry about”. Affect and effect aren’t as simple as some people make them out to be, either; affect is usually a verb and effect a noun, but affect can also be a noun (with stress on the first syllable) referring to the outward manifestation of emotions, while effect can be a verb meaning to cause or to make happen. Sometimes dumbing down rules just makes them dumb.

7. Overestimating the frequency of errors. The writer of this list says that misuse of nauseous is “Undoubtedly the most common mistake I encounter.” This claim seems worth doubting to me; I can’t remember the last time I heard someone say “nauseous”. Even if you consider it a misuse, it’s got to rate pretty far down the list in terms of frequency. This is why linguists like to rely on data for testable claims—because people tend to fall prey to all kinds of cognitive biases such as the frequency illusion.

8. Believing that etymology is destiny. Words change meaning all the time—it’s just a natural and inevitable part of language. But some people get fixated on the original meanings of some words and believe that those are the only correct meanings. For example, they’ll say that you can only use decimate to mean “to destroy one in ten”. This may seem like a reasonable argument, but it quickly becomes untenable when you realize that almost every single word in the language has changed meaning at some point, and that’s just in the few thousand years in which language has been written or can be reconstructed. And sometimes a new meaning is more useful anyway (which is precisely why it displaced an old meaning). As Jan Freeman said, “We don’t especially need a term that means ‘kill one in 10.'”

9. Simply bungling the rules. If you’re going to chastise people for not following the rules, you should know those rules yourself and be able to explain them clearly. You may dislike singular they, for instance, but you should know that it’s not a case of subject-predicate disagreement, as the author of this list claims—it’s an issue of pronoun-antecedent agreement, which is not the same thing. This list says that “‘less’ is reserved for hypothetical quantities”, but this isn’t true either; it’s reserved for noncount nouns, singular count nouns, and plural count nouns that aren’t generally thought of as discrete entities. Use of less has nothing to do with being hypothetical. And this one says that punctuation always goes inside quotation marks. In most American styles, it’s only commas and periods that always go inside. Colons, semicolons, and dashes always go outside, and question marks and exclamation marks only go inside sometimes.

10. Saying that good grammar leads to good communication. Contrary to popular belief, bad grammar (even using the broad definition that includes usage, spelling, and punctuation) is not usually an impediment to communication. A sentence like Ain’t nobody got time for that is quite intelligible, even though it violates several rules of Standard English. The grammar and usage of nonstandard varieties of English are often radically different from Standard English, but different does not mean worse or less able to communicate. The biggest differences between Standard English and all its nonstandard varieties are that the former has been codified and that it is used in all registers, from casual conversation to formal writing. Many of the rules that these lists propagate are really more about signaling to the grammatical elite that you’re one of them—not that this is a bad thing, of course, but let’s not mistake it for something it’s not. In fact, claims about improving communication are often just a cover for the real purpose of these lists, which is . . .

11. Using grammar to put people down. This post sympathizes with someone who worries about being crucified by the grammar police and then says a few paragraphs later, “All hail the grammar police!” In other words, we like being able to crucify those who make mistakes. Then there are the put-downs about people’s education (“You’d think everyone learned this rule in fourth grade”) and more outright insults (“5 Grammar Mistakes that Make You Sound Like a Chimp”). After all, what’s the point in signaling that you’re one of the grammatical elite if you can’t take a few potshots at the ignorant masses?

12. Forgetting that correct usage ultimately comes from users. The disdain for the usage of common people is symptomatic of a larger problem: forgetting that correct usage ultimately comes from the people, not from editors, English teachers, or usage commentators. You’re certainly entitled to have your opinion about usage, but at some point you have to recognize that trying to fight the masses on a particular point of usage (especially if it’s a made-up rule) is like trying to fight the rising tide. Those who have invested in learning the rules naturally feel defensive of them and of the language in general, but you have no more right to the language than anyone else. You can be restrictive if you want and say that Standard English is based on the formal usage of educated writers, but any standard that is based on a set of rules that are simply invented and passed down is ultimately untenable.

And a bonus mistake:

13. Making mistakes themselves. It happens to the best of us. The act of making grammar or spelling mistakes in the course of pointing out someone else’s mistakes even has a name, Muphry’s law. This post probably has its fair share of typos. (If you spot one, feel free to point it out—politely!—in the comments.)

This post also appears on Huffington Post.

The Reason Why This Is Correct

There’s a long-running debate over whether the construction reason why is acceptable. Critics generally argue that why essentially means reason, so saying reason why is like saying reason twice. Saying something twice is redundant, and redundancy is bad; ergo, reason why is bad. This is really a rather bizarre argument. Reason is a noun; why is usually an interrogative adverb. They do cover some of the same semantic space, but not the same syntactic space. Does this really make the construction redundant? Defenders generally admit that it’s redundant, but in a harmless way. But rebutting the critics by calling it “not ungrammatical” or saying that “redundancy is not inherently bad” is a pretty weak defense. However, that defense can be strengthened with the addition of something that has been missing from the discussion: an examination of the syntactic role of why in such constructions.

Nearly every discussion on reason why that I’ve ever seen—including Merriam-Webster’s Dictionary of English Usage and Garner’s Modern American Usage—leaves out this very important syntactic component. The only exceptions that I’ve seen are this post on the Grammarphobia blog and this one on Daily Writing Tips, which both mention that why is a conjunction. The writers at Grammarphobia argue that reason why is not actually redundant because of why’s syntactic role, but Mark Nichol at Daily Writing Tips seems much more confused about the issue. He says that even though reason why has been around for centuries and only came under fire in the twentieth century, he’ll continue to avoid it in his own writing “but will forgive the combination when I am editing that of others” (how magnanimous). But he doesn’t understand why reason why is okay but reason is because is not, because both why and because are conjunctions.

I won’t get into reason is because here, but suffice it to say that these are very different constructions. As I mentioned in my previous post on relative pronouns and adverbs, why functions as a relative adverb, but it appears almost exclusively after the word reason. (To be clear, all relative pronouns and adverbs can be considered conjunctions because they connect a subordinate clause—the relative clause—to a main one.) In a phrase like the reason why this is correct, why connects the relative clause this is correct to the noun it modifies, reason. Relative pronouns refer to a noun phrase, while relative adverbs refer to some kind of adverbial phrase. As with any relative clause, you can extract a main clause out of the relative clause by replacing the relative pronoun or adverb and doing a little rearranging (that’s the man who I met > I met the man), though with relative adverbs you often have to add in a function word or two: the reason why this is correct > this is correct for this reason. This is pretty obvious when you think about it. A phrase like the reason why this is correct contains another clause—this is correct. There has to be something to connect it syntactically to the rest of the phrase.

In defending the construction, Gabe Doyle at Motivated Grammar compares it to the redundancy in The person who left their wet swimsuit on my books is going to pay. This is actually a more apt comparison than Mr. Doyle realizes, because he doesn’t make the connection between the relative pronoun who and the relative adverb why. He argues that it is just as redundant as reason why (and therefore not a problem), because who means person in a sense.

But as I said above, this isn’t really redundancy. Who is a relative pronoun connecting a clause to a noun phrase. If who means the same thing as person, it’s only because that’s its job as a pronoun. Pronouns are supposed to refer to other things in the sentence, and thus they mean the same thing. Why works much the same way. Why means the same thing as reason only because it refers to it.

So what about reason that or just plain reason? Again, as I discussed in my last post on relative pronouns and adverbs, English has two systems of relativization: the wh words and that, and that is omissible except where it functions as the subject of the relative clause. Thus we have the option of saying the reason why this is correct, the reason that this is correct (though that sounds awkward in some instances), or just plain the reason this is correct (again, this is occasionally awkward). The Cambridge Grammar of the English Language also mentions the possibility the reason for which, though this also sounds awkward and stilted in most cases. But I suspect that many awkward plain reasons are the result of editorial intervention, as in this case I found in the research for my thesis: There are three preliminary reasons why the question of rationality might make a difference in the context of Leibniz’s thought.

It’s important to note, though, that there are some constructions in which why is more superfluous. As Robert Lane Greene noted on the Johnson blog, sometimes why is used after reason without a following relative clause. (Mr. Greene calls it a complement clause.) He gives the example I’m leaving your father. The reason why is that he’s a drunk. The why here doesn’t really serve a syntactic function, since it’s not introducing a clause, though the Oxford English Dictionary calls this an elliptical construction. In essence, the why is serving as a placeholder for the full relative clause: I’m leaving your father. The reason why (I’m leaving him) is that he’s a drunk. It’s not strictly necessary to delete the why here, though it is generally colloquial and may not sound right in formal writing.

But this is by no means a blanket injunction against reason why. I think the rule forbidding reason why probably arose out of simple grammatical misanalysis of this relative construction, or perhaps from broadening a ban on elliptical reason why into a ban on all instances of reason why. Whatever the reason for the ban, it’s misguided and should be laid to rest. Reason why isn’t merely not ungrammatical or harmlessly redundant; it’s a legitimately correct and fully grammatical construction. Just because there are other options doesn’t mean one is right and the rest are wrong.

Who Edits the Editors?

Today is National Grammar Day, in case you hadn’t heard, and to celebrate I want to take a look at some of those who hold themselves up to be defenders of the English language: copy editors. A few weeks ago, the webcomic XKCD published this comic mocking the participants in a Wikipedia edit war over the title of Star Trek into Darkness. The question was whether “into” in the title should be capitalized. Normally, prepositions in titles are lowercase, but if there’s an implied colon after “Star Trek”, then “Into Darkness” is technically a subtitle, and the first word of a subtitle gets capitalized. As the comic noted, forty thousand words of argument back and forth had been written, but no consensus was in sight. (The discussion seems to be gone now.)

The Wikipedia discussion is an apt illustration of one of the perils of editing: long, drawn-out and seemingly irresolvable discussions about absolutely trivial things. Without prior knowledge about whether “into darkness” is a subtitle or part of the title, there’s no clear answer. It’s a good example of Parkinson’s law of triviality at work. Everyone wants to put in their two cents’ worth, but the debate will never end unless someone with authority simply makes an arbitrary but final decision one way or the other.

I wouldn’t have thought much else of that discussion if not for the fact that it was picked up by Nathan Heller in a column called “Copy-Editing the Culture” over at Slate. Someone cited one of my posts—“It’s just a joke. But no, seriously”—in the discussion in the comments, so I followed the link back and read the column. And what I found dismayed me.

The article begins (after a couple of paragraphs of self-indulgence) by claiming that “it is . . . entirely unclear what the title is trying to communicate.” This complaint is puzzling, since it seems fairly obvious what the title is supposed to mean, but the problems with the column become clearer as the reasoning becomes murkier: “Are there missing words—an implied verb, for example? The grammatical convention is to mark such elisions with a comma: Star Trek Going Into Darkness could become, conceivably, Star Trek, Into Darkness”. An implied verb? Marking such elisions with a comma? What on earth is he on about? I don’t see any reason why the title needs a verb, and I’ve never heard of marking elided verbs with a comma. Marking an elided “and” in headlines, perhaps, but that’s it.

[Update: It occurred to me what he probably meant, and I feel stupid for not seeing it. It’s covered under 6.49 in the 16th edition of Chicago. A comma may be used to signal the elision of a word or words easily understood from context, though what they don’t say is that it’s a repeated word or words, and that’s crucial. One example they give is In Illinois there are seventeen such schools; in Ohio, twenty; in Indiana, thirteen. The comma here indicates the elision of there are. The Star Trek, Into Darkness example doesn’t work because it’s a title with no other context. There aren’t any repeated words that are understood from context and are thus candidates for elision. I could say, “Star Wars is going into light; Star Trek, into darkness”, but “Star Trek, into Darkness” simply doesn’t make sense under any circumstances, which is probably why I didn’t get what Heller meant.]

The article continues to trek into darkness with ever more convoluted reasoning: “Or perhaps the film’s creators intend Star Trek to be understood as a verb—to Star Trek—turning the title into an imperative: ‘Star Trek into darkness!'” Yes, clearly that’s how it’s to be understood—as an imperative! I suppose Journey to the Center of the Earth is intended to be read the same way. But Heller keeps on digging: “Perhaps those two words [Star Trek] are meant to function individually [whatever that means]. . . . If trek is a verb—“We trek into darkness”—what, precisely, is going on with the apparent subject of the sentence, star? Why is it not plural, to match the verb form: Stars Trek Into Darkness? Or if trek is a noun—“His trek into darkness”—where is the article or pronoun that would give the title sense: A Star Trek Into Darkness? And what, for that matter, is a star trek?”

This is perhaps the stupidest passage about grammar that I’ve ever read. Star Trek is a noun-noun compound, not a noun and a verb, as is clear from their lack of grammatical agreement. A star trek is a trek among the stars. Titles don’t need articles—remember Journey to the Center of the Earth? (Yes, I know that it is sometimes translated as A Journey to the Center of the Earth, but the article is optional and doesn’t exist in the original French.)

I know that some of you are thinking, “It’s a joke! Lighten up!” Obviously this argument has already occurred in the comments, which is why my post was linked to. I’ll grant that it’s probably intended to be a joke, but if so it’s the lamest, most inept language-related joke I’ve ever read. It’s like a bookkeeper feigning confusion about the equation 2 + 2 = 4, asking, “Shouldn’t it be 2 + 2 = 22?” Not only does Heller’s piece revel in grammatical ineptitude, but it reinforces the stereotype of editors as small-minded and officious pedants.

I’ve worked as a copy editor and layout artist for over ten years, and I’ve worked with a lot of different people in that time. I’ve known some really great editors and some really bad ones, and I think that even the best of us tend to get hung up on trivialities like whether to capitalize into far more than we should. When I first saw the Slate column, I hoped that it would address some of those foibles, but instead it took a turn for the insipid and never looked back. I looked at a few more entries in the column, and they all seem to work about the same way, seizing on meaningless trivialities and trying to make them seem significant.

So I have a plea for you this Grammar Day: stop using grammar as the butt of lame jokes or as a tool for picking apart people or things that you don’t like. And if that is how you’re going to use it, at least try to make sure you know what you’re talking about first. You’re making the rest of us look bad.

Relative Pronoun Redux

A couple of weeks ago, Geoff Pullum wrote on Lingua Franca about the that/which rule, which he calls “a rule which will live in infamy”. (For my own previous posts on the subject, see here, here, and here.) He runs through the whole gamut of objections to the rule—that the rule is an invention, that it started as a suggestion and became canonized as grammatical law, that it has “an ugly clutch of exceptions”, that great writers (including E. B. White himself) have long used restrictive which, and that it’s really the commas that distinguish between restrictive and nonrestrictive clauses, as they do with other relative pronouns like who.

It’s a pretty thorough deconstruction of the rule, but in a subsequent Language Log post, he despairs of converting anyone, saying, “You can’t talk people out of their positions on this; they do not want to be confused with facts.” And sure enough, the commenters on his Lingua Franca post proved him right. Perhaps most maddening was this one from someone posting as losemygrip:

Just what the hell is wrong with trying to regularize English and make it a little more consistent? Sounds like a good thing to me. Just because there are inconsistent precedents doesn’t mean we can’t at least try to regularize things. I get so tired of people smugly proclaiming that others are being officious because they want things to make sense.

The desire to fix a problem with the language may seem noble, but in this case the desire stems from a fundamental misunderstanding of the grammar of relative pronouns, and the that/which rule, rather than regularizing the language and making it a little more consistent, actually introduces a rather significant irregularity and inconsistency. The real problem is that few if any grammarians realize that English has two separate systems of relativization: the wh words and that, and they work differently.

If we ignore the various prescriptions about relative pronouns, we find that the wh words (the pronouns who/whom/whose and which, the adverbs where, when, why, whither, and whence, and the where + preposition compounds) form a complete system on their own. The pronouns who and which distinguish between personhood or animacy—people and sometimes animals or other personified things get who, while everything else gets which. But both pronouns function restrictively and nonrestrictively, and so do most of the other wh relatives. (Why occurs almost exclusively as a restrictive relative adverb after reason.)

With all of these relative pronouns and adverbs, restrictiveness is indicated with commas in writing or a small pause in speech. There’s no need for a lexical or morphological distinction to show restrictiveness with who or where or any of the others—intonation or punctuation does it all. There are a few irregularities in the system—for instance, which has no genitive form and must use whose or of which, and who declines for cases while which does not—but on the whole it’s rather orderly.

That, on the other hand, is a system all by itself, and it’s rather restricted in its range. It only forms restrictive relative clauses, and then only in a narrow range of syntactic constructions. It can’t follow a preposition (the book of which I spoke rather than *the book of that I spoke) or the demonstrative that (they want that which they can’t have rather than *they want that that they can’t have), and it usually doesn’t occur after coordinating conjunctions. But it doesn’t make the same personhood distinction that who and which do, and it functions as a relative adverb sometimes. In short, the distribution of that is a subset of the distribution of the wh words. They are simply two different ways to make relative clauses, one of which is more constrained.

Proscribing which in its role as a restrictive relative where it overlaps with that doesn’t make the system more regular—it creates a rather strange hole in the middle of the wh relative paradigm and forces speakers to use a word from a completely different paradigm instead. It actually makes the system irregular. It’s a case of missing the forest for the trees. Grammarians have looked at the distribution of which and that, misunderstood it, and tried to fix it based on their misunderstanding. But if they’d step back and look at the system as a whole, they’d see that the problem is an imagined one. If you think the system doesn’t make sense, the solution isn’t to try to hammer it into something that does make sense; the solution is to figure out what kind of sense it makes. And it makes perfect sense as it is.

I’m sure, as Professor Pullum was, that I’m not going to make a lot of converts. I can practically hear copy editors’ responses: But following the rule doesn’t hurt anything! Some readers will write us angry letters if we don’t follow it! It decreases ambiguity! To the first I say, of course it hurts, in that it has a cost that we blithely ignore: every change a copy editor makes takes time, and that time costs money. Are we adding enough value to the works we edit to recoup that cost? I once saw a proof of a book wherein the proofreader had marked every single restrictive which—and there were four or five per page—to be changed to that. How much time did it take to mark all those whiches for two hundred or more pages? How much more time would it have taken for the typesetter to enter those corrections and then deal with all the reflowed text? I didn’t want to find out the answer—I stetted every last one of those changes. Furthermore, the rule hurts all those who don’t follow it and are therefore judged as being sub-par writers at best or idiots at worst, as Pullum discussed in his Lingua Franca post.

To the second response, I’ve said before that I don’t believe we should give so much power to the cranks. Why should they hold veto power for everyone else’s usage? If their displeasure is such a problem, give me some evidence that we should spend so much time and money pleasing them. Show me that the economic cost of not following the rule in print is greater than the cost of following it. But stop saying that we as a society need to cater to this group and assuming that this ends the discussion.

To the last response: No, it really doesn’t. Commas do all the work of disambiguation, as Stan Carey explains. The car which I drive is no more ambiguous than The man who came to dinner. They’re only ambiguous if you have no faith in the writer’s or editor’s ability to punctuate and thus assume that there should be a comma where there isn’t one. But requiring that in place of which doesn’t really solve this problem, because the same ambiguity exists for every other relative clause that doesn’t use that. Note that Bryan Garner allows either who or that with people; why not allow either which or that with things? Stop and ask yourself how you’re able to understand phrases like The house in which I live or The woman whose hair is brown without using a different word to mark that it’s a restrictive clause. And if the that/which rule really is an aid to understanding, give me some evidence. Show me the results of an eye-tracking study or fMRI or at least a well-designed reading comprehension test geared to show the understanding of relative clauses. But don’t insist on enforcing a language-wide change without some compelling evidence.

The problem with all the justifications for the rule is that they’re post hoc. Someone made a bad analysis of the English system of relative pronouns and proposed a rule to tidy up an imagined problem. Everything since then has been a rationalization to continue to support a flawed rule. Mark Liberman said it well on Language Log yesterday:

This is a canonical case of a self-appointed authority inventing a grammatical theory, observing that elite writers routinely violate the theory, and concluding not that the theory is wrong or incomplete, but that the writers are in error.

Unfortunately, this is often par for the course with prescriptive rules. The rule is taken a priori as correct and authoritative, and all evidence refuting the rule is ignored or waved away so as not to undermine it. Prescriptivism has come a long way in the last century, especially in the last decade or so as corpus tools have made research easy and data more accessible. But there’s still a long way to go.

Update: Mark Liberman has a new post on the that/which rule which includes links to many of the previous Language Log posts on the subject.

Hanged and Hung

The distinction between hanged and hung is one of the odder ones in the language. I remember learning in high school that people are hanged, pictures are hung. There was never any explanation of why it was so; it simply was. It was years before I learned the strange and complicated history of these two words.

English has a few pairs of related verbs that are differentiated by their transitivity: lay/lie, rise/raise, and sit/set. Transitive verbs take objects; intransitive ones don’t. In each of these pairs, the intransitive verb is strong, and the transitive verb is weak. Strong verbs inflect for the preterite (simple past) and past participle forms by means of a vowel change, such as sing–sang–sung. Weak verbs add the -(e)d suffix (or sometimes just a -t or nothing at all if the word already ends in -t). So lie–lay–lain is a strong verb, and lay–laid–laid is weak. Note that the subject of one of the intransitive verbs becomes the object when you use its transitive counterpart. The book lay on the floor but I laid the book on the floor.

Historically hang belonged with these pairs, and it ended up in its current state through the accidents of sound change and history. It was originally two separate verbs (the Oxford English Dictionary actually says it was three—two Old English verbs and one Old Norse verb—but I don’t want to go down that rabbit hole) that came to be pronounced identically in their present-tense forms. They still retained their own preterite and past participle forms, though, so at one point in Early Modern English hang–hung–hung existed alongside hang–hanged–hanged.

Once the two verbs started to collapse together, the distinction started to become lost too. Just look at how much trouble we have keeping lay and lie separate, and they overlap in only one form: the present tense of lay and the past tense of lie are both lay. With identical present tenses, the two hangs began to look like any other verb with a choice between strong and weak past forms, like dived/dove or sneaked/snuck. The transitive/intransitive distinction between the two effectively disappeared, and hung won out as the preterite and past participle form.

The weak transitive hanged didn’t completely vanish, though; it stuck around in legal writing, which tends to use a lot of archaisms. Because it was only used in legal writing in the sense of hanging someone to death (with the poor soul as the object of the verb), it picked up the new sense that we’re now familiar with, whether or not the verb is transitive. Similarly, hung is used for everything but people, whether or not the verb is intransitive.

Interestingly, German has mostly hung on to the distinction. Though the German verbs both merged in the present tense into hängen, the past forms are still separate: hängen–hing–gehangen for intransitive forms and hängen–hängte–gehängt for transitive. Germans would say the equivalent of I hanged the picture on the wall and The picture hung on the wall—none of this nonsense about only using hanged when it’s a person hanging by the neck until dead.

The surprising thing about the distinction in English is that it’s observed (at least in edited writing) so faithfully. Usually people aren’t so good at honoring fussy semantic distinctions, but here I think the collocates do a lot of the work of selecting one word or the other. Searching for collocates of both hanged and hung in COCA, we find the following words:

hanged:
himself
man
men
herself
themselves
murder
convicted
neck
effigy
burned

hung:
up
phone
air
wall
above
jury
walls
hair
ceiling
neck

The hanged words pretty clearly all involve hanging people, whether by suicide, as punishment for murder, or in effigy. (The collocations with burned were all about hanging and burning people or effigies.) The collocates for hung show no real pattern; it’s simply used for everything else. (The collocations with neck were not about hanging by the neck but about things being hung from or around the neck.)
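
For the curious, here’s roughly how a collocate search like this works under the hood. This is only a minimal sketch over a plain-text corpus, not COCA’s actual machinery; the corpus file name and the size of the context window are my own placeholder assumptions, and a real tool would rank collocates by a statistical association measure or filter by part of speech so that words like the and of don’t swamp the list.

```python
# A rough sketch (not COCA's actual machinery) of pulling collocates for
# "hanged" and "hung" out of a plain-text corpus. The file name and the
# size of the context window are placeholder assumptions.
import re
from collections import Counter

def collocates(path, target, window=4, top=10):
    """Count the words that appear within `window` words of `target`."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = re.findall(r"[a-z']+", line.lower())
            for i, token in enumerate(tokens):
                if token == target:
                    # Grab the words on either side of the target word.
                    context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                    counts.update(context)
    return counts.most_common(top)

if __name__ == "__main__":
    for word in ("hanged", "hung"):
        print(word, collocates("corpus.txt", word))
```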

So despite what I said about this being one of the odder distinctions in the language, it seems to work. (Though I’d like to know to what extent, if any, the distinction is an artifact of the copy editing process.) Hung is the general-use word; hanged is used when a few very specific and closely related contexts call for it.

Funner Grammar

As I said in the addendum to my last post, maybe I’m not so ready to abandon the technical definition of grammar. In a recent post on Copyediting, Andrea Altenburg criticized the word funner in an ad for Chuck E. Cheese as “improper grammar”, and my first reaction was “That’s not grammar!”

That’s not entirely accurate, of course, as Matt Gordon pointed out to me on Twitter. The objection to funner was originally grammatical, and the Copyediting post does make an appeal to grammar. The argument goes like this: fun is properly a noun, not an adjective, and as a noun, it can’t take comparative or superlative degrees—no funner or funnest.

This seems like a fairly reasonable argument—if a word isn’t an adjective, it can’t inflect like one—but it isn’t the real argument. First of all, it’s not really true that fun was originally a noun. As Ben Zimmer explains in “Dear Apple: Stop the Funnification”, the noun fun arose in the late seventeenth century and was labeled by Samuel Johnson in the mid-1700s “as ‘a low cant word’ of the criminal underworld.” But the earliest citation for fun is as a verb, fourteen years earlier.

As Merriam-Webster’s Dictionary of English Usage notes, “A couple [of usage commentators] who dislike it themselves still note how nouns have a way of turning into adjectives in English.” Indeed, this sort of functional shift—also called zero derivation or conversion by linguists because it changes a word’s part of speech without adding a prefix or suffix—is quite common in English. English lacks case endings and has little in the way of verbal endings, so it’s quite easy to change a word from one part of speech to another. The transformation of fun from a verb to a noun to an inflected adjective came slowly but surely.

As this great article explains, shifts in function or meaning usually happen in small steps. Once fun was established as a noun, you could say things like We had fun. This is unambiguously a noun—fun is the object of the verb have. But then you get constructions like The party was fun. This is structurally ambiguous—both nouns and adjectives can go in the slot after was.

This paves the way to analyze fun as an adjective. It then moved into attributive use, directly modifying a following noun, as in fun fair. Nouns can do this too, so once again the structure was ambiguous, but it was evidence that fun was moving further in the direction of becoming an adjective. In the twentieth century it started to be used in more unambiguously adjectival roles. MWDEU says that this accelerated after World War II, and Mark Davies’ COHA shows that it especially picked up in the last twenty years.
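
If you’re curious how a claim like that can be checked, here’s a minimal sketch of counting funner and funnest by decade in a corpus with dates attached. The tab-separated year-and-text file is my own placeholder format, not COHA’s actual format, and a fair comparison would also normalize by each decade’s total word count.

```python
# A minimal sketch of counting "funner"/"funnest" hits per decade in a dated
# corpus. Assumes a tab-separated file of "year<TAB>text" lines; the file name
# and format are placeholders, not COHA's actual download format.
import re
from collections import Counter

hits = Counter()
with open("dated_corpus.tsv", encoding="utf-8") as f:
    for line in f:
        year, _, text = line.partition("\t")
        if not year.strip().isdigit():
            continue
        decade = (int(year) // 10) * 10
        hits[decade] += len(re.findall(r"\bfunn(?:er|est)\b", text, re.IGNORECASE))

# Raw counts per decade; divide by each decade's word count for a fair comparison.
for decade in sorted(hits):
    print(decade, hits[decade])
```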

Once fun was firmly established as an adjective, the inflected forms funner and funnest followed naturally. There are only a handful of hits for either in COCA, which attests to the fact that they’re still fairly new and relatively colloquial. But let’s get back to Altenburg’s post.

She says that fun is defined as a noun and thus can’t be inflected for comparative or superlative forms, but then she admits that dictionaries also define fun as an adjective with the forms funner and funnest. But she waves away these definitions by saying, “However, dictionaries are starting to include more definitions for slang that are still not words to the true copyeditor.”

What this means is that she really isn’t objecting to funner on grammatical grounds (at least not in the technical sense); her argument simply reduces to an assertion that funner isn’t a word. But as Stan Carey so excellently argued, “‘Not a word’ is not an argument”. And even the grammatical objections are eroding; many people now simply assert that funner is wrong, even if they accept fun as an adjective, as Grammar Girl says here:

Yet, even people who accept that “fun” is an adjective are unlikely to embrace “funner” and “funnest.” It seems as if language mavens haven’t truly gotten over their irritation that “fun” has become an adjective, and they’ve decided to dig in their heels against “funner” and “funnest.”

It brings to mind the objection against sentential hopefully. Even though there’s nothing wrong with sentence adverbs or with hopefully per se, it was a new usage that drew the ire of the mavens. The grammatical argument against it was essentially a post hoc justification for a ban on a word they didn’t like.

The same thing has happened with funner. It’s perfectly grammatical in the sense that it’s a well-formed, meaningful word, but it’s fairly new and still highly informal and colloquial. (For the record, it’s not slang, either, but that’s a post for another day.) If you don’t want to use it, that’s your right, but stop saying that it’s not a word.

It’s All Grammar—So What?

It’s a frequent complaint among linguists that laypeople use the term grammar in such a loose and unsystematic way that it’s more or less useless. They say that it’s overly broad, encompassing many different types of rules, and that it allows people to confuse things as different as syntax and spelling. They insist that spelling, punctuation, and ideas such as style or formality are not grammar at all, that grammar is really just the rules of syntax and morphology that define the language.

Arnold Zwicky, for instance, has complained that grammar as it’s typically used refers to nothing more than a “grab-bag of linguistic peeve-triggers”. I think this is an overly negative view; yes, there are a lot of people who peeve about grammar, but I think that most people, when they talk about grammar, are thinking about how to say things well or correctly.

Some people take linguists’ insistence on the narrower, more technical meaning of grammar as a sign of hypocrisy. After all, they say, with something of a smirk, shouldn’t we just accept the usage of the majority? If almost everyone uses grammar in a broad and vague way, shouldn’t we consider that usage standard? Linguists counter that this really is an important distinction, though I think it’s fair to say that they have a personal interest here; they teach grammar in the technical sense and are dismayed when people misunderstand what they do.

I’ve complained about this myself, but I’m starting to wonder whether it’s really something to worry about. (Of course, I’m probably doubly a hypocrite, what with all the shirts I sell with the word grammar on them.) After all, we see similar splits between technical and popular terminology in a lot of other fields, and they seem to get by just fine.

Take the terms fruit and vegetable, for instance. In popular use, fruits are generally sweeter, while vegetables are more savory or bitter. And while most people have probably heard the argument that tomatoes are actually fruits, not vegetables, they might not realize that squash, eggplants, peppers, peas, green beans, nuts, and grains are fruits too, at least by the botanical definition. And vegetable doesn’t even have a botanical definition—it’s just any part of a plant (other than fruits or seeds) that’s edible. It’s not a natural class at all.

In a bit of editorializing, the Oxford English Dictionary adds this note after its first definition of grammar:

As above defined, grammar is a body of statements of fact—a ‘science’; but a large portion of it may be viewed as consisting of rules for practice, and so as forming an ‘art’. The old-fashioned definition of grammar as ‘the art of speaking and writing a language correctly’ is from the modern point of view in one respect too narrow, because it applies only to a portion of this branch of study; in another respect, it is too wide, and was so even from the older point of view, because many questions of ‘correctness’ in language were recognized as outside the province of grammar: e.g. the use of a word in a wrong sense, or a bad pronunciation or spelling, would not have been called a grammatical mistake. At the same time, it was and is customary, on grounds of convenience, for books professedly treating of grammar to include more or less information on points not strictly belonging to the subject.

There are a few points here to consider. The definition of grammar has not been solely limited to syntax and morphology for many years. Once it started branching out into notions of correctness, it made sense to treat grammar, usage, spelling, and pronunciation together. From there it’s a short leap to calling the whole collection grammar, since there isn’t really another handy label. And since few people are taught much in the way of syntax and morphology unless they’re majoring in linguistics, it’s really no surprise that the loose sense of grammar predominates. I’ll admit, however, that it’s still a little exasperating to see lists of grammar rules that everyone gets wrong that are just spelling rules or, at best, misused words.

The root of the problem is that laypeople use words in ways that are useful and meaningful to them, and these ways don’t always jibe with scientific facts. It’s the same thing with grammar; laypeople use it to refer to language rules in general, especially the ones they’re most conscious of, which tend to be the ones that are the most highly regulated—usage, spelling, and style. Again, issues of syntax, morphology, semantics, usage, spelling, and style don’t constitute a natural class, but it’s handy to have a word that refers to the aspects of language that most people are conscious of and concerned with.

I think there still is a problem, though, and it’s that most people generally have a pretty poor understanding of things like syntax, morphology, and semantics. Grammar isn’t taught much in schools anymore, so many people graduate from high school and even college without much of an understanding of grammar beyond spelling and mechanics. I got out of high school without knowing anything more advanced than prepositional phrases. My first grammar class in college was a bit of a shock, because I’d never even learned about things like the passive voice or dependent clauses before that point, so I have some sympathy for those people who think that grammar is mostly just spelling and punctuation with a few minor points of usage or syntax thrown in.

So what’s the solution? Well, maybe I’m just biased, but I think it’s to teach more grammar. I know this is easier said than done, but I think it’s important for people to have an understanding of how language works. A lot of people are naturally interested in or curious about language, and I think we do those students a disservice if all we teach them is never to use infer for imply and to avoid the passive voice. Grammar isn’t just a set of rules telling you what not to do; it’s also a fascinatingly complex and mostly subconscious system that governs the singular human gift of language. Maybe we just need to accept the broader sense of grammar and start teaching people all of what it is.

Addendum: I just came across a blog post criticizing the word funner as bad grammar, and my first reaction was “That’s not grammar!” It’s always easier to preach than to practice, but my reaction has me reconsidering my laissez-faire attitude. While it seems handy to have a catch-all term for language errors, regardless of what type they are, it also seems handy—probably more so—to distinguish between violations of the regulative rules and constitutive rules of language. But this leaves us right where we started.

The Data Is In, pt. 2

In the last post, I said that the debate over whether data is singular or plural is ultimately a question of how we know whether a word is singular or plural, or, more accurately, whether it is count or mass. To determine whether data is a count or a mass noun, we’ll need to answer a few questions. First—and this one may seem so obvious as to not need stating—does it have both singular and plural forms? Second, does it occur with cardinal numbers? Third, what kinds of grammatical agreement does it trigger?

Most attempts to settle the debate point to the etymology of the word, but this is an unreliable guide. Some words begin life as plurals but become reanalyzed as singulars or vice versa. For example, truce, bodice, and to some extent dice and pence were originally plural forms that have been made into singulars. As some of the posts I linked to last time pointed out, agenda was also a Latin plural, much like data, but it’s almost universally treated as a singular now, along with insignia, opera, and many others. On the flip side, cherries and peas were originally singular forms that were reanalyzed as plurals, giving rise to the new singular forms cherry and pea.

So obviously etymology alone cannot tell us what a word should mean or how it should work today, but then again, any attempt to say what a word ought to mean ultimately rests on one logical fallacy or another, because you can’t logically derive an ought from an is. Nevertheless, if you want to determine how a word really works, you need to look at real usage. Present usage matters most, but historical usage can also shed light on such problems.

Unfortunately for the “data is plural” crowd, both present and historical usage are far more complicated than most people realize. The earliest citation in the OED for either data or datum is from 1630, but it’s just a one-word quote, “Data.” The next citation is from 1645 for the plural count noun “datas” (!), followed by the more familiar “data” in 1646. The singular mass noun appeared in 1702, and the singular count noun “datum” didn’t appear until 1737, roughly a century later. Of course, you always have to take such dates with a grain of salt, because any of them could be antedated, but it’s clear that even from the beginning, data‘s grammatical number was in doubt. Some writers used it as a plural, some used it as a singular with the plural form “datas”, and apparently no one used its purported singular form “datum” for another hundred years.

It appears that historical English usage doesn’t help much in settling the matter, though it does make a few things clear. First, there has been considerable variation in the perceived number of data (mass, singular count, or plural count) for over 350 years. Second, the purported singular form, datum, was apparently absent from English for almost a hundred years and continues to be relatively rare today. In fact, in Mark Davies’ COCA, “data point” slightly outnumbers “datum”, and most of the occurrences of “datum” are not the traditional singular form of data but other specialized uses. This is the first strike against data as a plural; count nouns are supposed to have singular forms, though there are a handful of words known as pluralia tantum, which occur only in the plural. I’ll get to that later.

So data doesn’t really seem to have a singular form. At least you can still count data, right? Well, apparently not. Nearly all of the hits in COCA for “[mc*] data” (meaning a cardinal number followed by the word data) are for things like “two data sets” or “74 data points”. It seems that no one who uses data as a plural count noun ever bothers to count their data, or when they do, they revert to using “data” as a mass noun to modify a normal count noun like “points”. Strike two, and this is a big one. The Cambridge Grammar of the English Language gives use with cardinal numbers as the primary test of countability.
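
If you want to try the same kind of check outside COCA’s web interface, here’s a rough sketch. The corpus file name and the short list of number words are placeholder assumptions on my part, and COCA’s [mc*] tag covers far more than this toy pattern does.

```python
# A rough sketch of checking what follows "cardinal number + data" in a
# plain-text corpus: is data counted directly ("three data"), or does it
# modify another count noun ("three data points")? The file name and the
# short list of number words are placeholder assumptions.
import re
from collections import Counter

NUMBER = r"(?:\d+|one|two|three|four|five|six|seven|eight|nine|ten)"
PATTERN = re.compile(rf"\b{NUMBER}\s+data\b(?:\s+(\w+))?", re.IGNORECASE)

with open("corpus.txt", encoding="utf-8") as f:
    text = f.read()

followers = Counter()
for match in PATTERN.finditer(text):
    followers[(match.group(1) or "(nothing)").lower()] += 1

# If the top followers are count nouns like "points" or "sets", data is acting
# as an attributive modifier rather than as a plural count noun itself.
for word, count in followers.most_common(15):
    print(word, count)
```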

Data does better when it comes to grammatical agreement, though this is not as positive as it may seem. It’s easy enough to find constructions like as these few data show, but it’s just as easy to find constructions like there is very little data. And when the word fails the first two tests, the results here seem suspect. Aren’t people simply forcing the word data to behave like a plural count noun? As this wonderfully thorough post by Norman Gray points out (seriously, read the whole thing), “People who scrupulously write ‘data’ as a plural are frequently confused when it comes to more complicated sentences”, writing things like “What is HEP data? The data themselves…”. The urge to treat data as a singular mass noun—because that’s how it behaves—is so strong that it takes real effort to make it seem otherwise.

It seems that if data really is a plural noun, it’s a rather defective one. As I mentioned earlier, it’s possible that it’s some sort of plurale tantum, but even this conclusion is unsatisfying. Many pluralia tantum in English are words that refer to things made of two halves, like scissors or tweezers, but there are others like news or clothes. You can’t talk about one new or one clothe (though clothes was originally the plural of cloth). You also usually can’t talk about numbers of such things without using an additional counting word or paraphrasing. Thus we have news items or articles of clothing.

Similarly, you can talk about data points or points of data, but at best this undermines the idea that data is an ordinary plural count noun. But language is full of exceptions, right? Maybe data is just especially exceptional. After all, as Robert Lane Greene said in this post, “We have a strong urge to just have language behave, but regular readers of this column know that, as the original Johnson knew, it just won’t.”

I must disagree. The only thing that makes data exceptional is that people have gone to such great lengths to try to get it to act like a plural, but it just isn’t working. Its irregularity is entirely artificial, and there’s no purpose for it except a misguided loyalty to the word’s Latin roots. I say it’s time to stop the act and just let the word behave—as a mass noun.