Arrant Pedantry


Relative Pronoun Redux

A couple of weeks ago, Geoff Pullum wrote on Lingua Franca about the that/which rule, which he calls “a rule which will live in infamy”. (For my own previous posts on the subject, see here, here, and here.) He runs through the whole gamut of objections to the rule—that the rule is an invention, that it started as a suggestion and became canonized as grammatical law, that it has “an ugly clutch of exceptions”, that great writers (including E. B. White himself) have long used restrictive which, and that it’s really the commas that distinguish between restrictive and nonrestrictive clauses, as they do with other relative pronouns like who.

It’s a pretty thorough deconstruction of the rule, but in a subsequent Language Log post, he despairs of converting anyone, saying, “You can’t talk people out of their positions on this; they do not want to be confused with facts.” And sure enough, the commenters on his Lingua Franca post proved him right. Perhaps most maddening was this one from someone posting as losemygrip:

Just what the hell is wrong with trying to regularize English and make it a little more consistent? Sounds like a good thing to me. Just because there are inconsistent precedents doesn’t mean we can’t at least try to regularize things. I get so tired of people smugly proclaiming that others are being officious because they want things to make sense.

The desire to fix a problem with the language may seem noble, but in this case the desire stems from a fundamental misunderstanding of the grammar of relative pronouns, and the that/which rule, rather than regularizing the language and making it a little more consistent, actually introduces a rather significant irregularity and inconsistency. The real problem is that few if any grammarians realize that English has two separate systems of relativization: the wh words and that, and they work differently.

If we ignore the various prescriptions about relative pronouns, we find that the wh words (the pronouns who/whom/whose and which, the adverbs where, when, why, whither, and whence, and the where + preposition compounds) form a complete system on their own. The pronouns who and which mark a distinction of personhood or animacy—people and sometimes animals or other personified things get who, while everything else gets which. But both pronouns function restrictively and nonrestrictively, and so do most of the other wh relatives. (Why occurs almost exclusively as a restrictive relative adverb after reason.)

With all of these relative pronouns and adverbs, restrictiveness is indicated with commas in writing or a small pause in speech. There’s no need for a lexical or morphological distinction to show restrictiveness with who or where or any of the others—intonation or punctuation does it all. There are a few irregularities in the system—for instance, which has no genitive form and must use whose or of which, and who declines for cases while which does not—but on the whole it’s rather orderly.

That, on the other hand, is a system all by itself, and it’s rather restricted in its range. It only forms restrictive relative clauses, and then only in a narrow range of syntactic constructions. It can’t follow a preposition (the book of which I spoke rather than *the book of that I spoke) or the demonstrative that (they want that which they can’t have rather than *they want that that they can’t have), and it usually doesn’t occur after coordinating conjunctions. But it doesn’t make the same personhood distinction that who and which do, and it functions as a relative adverb sometimes. In short, the distribution of that is a subset of the distribution of the wh words. They are simply two different ways to make relative clauses, one of which is more constrained.
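The subset relationship described above can be sketched as a toy model. The context labels below are illustrative simplifications of my own, not a formal linguistic analysis:

```python
# Toy model of the two English relativization systems described above.
# The context labels are illustrative simplifications, not a formal
# linguistic inventory.

# Contexts in which a wh relativizer can introduce a relative clause.
WH_CONTEXTS = {
    "restrictive",           # the car which I drive
    "nonrestrictive",        # the car, which I drive,
    "after_preposition",     # the book of which I spoke
    "after_demonstrative",   # they want that which they can't have
}

# Contexts available to relativizer "that".
THAT_CONTEXTS = {
    "restrictive",           # the car that I drive
}

# The distribution of "that" is a proper subset of the wh distribution:
assert THAT_CONTEXTS < WH_CONTEXTS
```

Banning restrictive which, in this picture, punches a hole in the larger wh system rather than tidying anything up.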

Proscribing which in its role as a restrictive relative where it overlaps with that doesn’t make the system more regular—it creates a rather strange hole in the middle of the wh relative paradigm and forces speakers to use a word from a completely different paradigm instead. It actually makes the system irregular. It’s a case of missing the forest for the trees. Grammarians have looked at the distribution of which and that, misunderstood it, and tried to fix it based on their misunderstanding. But if they’d step back and look at the system as a whole, they’d see that the problem is an imagined one. If you think the system doesn’t make sense, the solution isn’t to try to hammer it into something that does make sense; the solution is to figure out what kind of sense it makes. And it makes perfect sense as it is.

I’m sure, as Professor Pullum was, that I’m not going to make a lot of converts. I can practically hear copy editors’ responses: But following the rule doesn’t hurt anything! Some readers will write us angry letters if we don’t follow it! It decreases ambiguity! To the first I say, of course it hurts, in that it has a cost that we blithely ignore: every change a copy editor makes takes time, and that time costs money. Are we adding enough value to the works we edit to recoup that cost? I once saw a proof of a book wherein the proofreader had marked every single restrictive which—and there were four or five per page—to be changed to that. How much time did it take to mark all those whiches for two hundred or more pages? How much more time would it have taken for the typesetter to enter those corrections and then deal with all the reflowed text? I didn’t want to find out the answer—I stetted every last one of those changes. Furthermore, the rule hurts all those who don’t follow it and are therefore judged as being sub-par writers at best or idiots at worst, as Pullum discussed in his Lingua Franca post.

To the second response, I’ve said before that I don’t believe we should give so much power to the cranks. Why should they hold veto power over everyone else’s usage? If their displeasure is such a problem, give me some evidence that we should spend so much time and money pleasing them. Show me that the economic cost of not following the rule in print is greater than the cost of following it. But stop saying that we as a society need to cater to this group and assuming that this ends the discussion.

To the last response: No, it really doesn’t. Commas do all the work of disambiguation, as Stan Carey explains. The car which I drive is no more ambiguous than The man who came to dinner. They’re only ambiguous if you have no faith in the writer’s or editor’s ability to punctuate and thus assume that there should be a comma where there isn’t one. But requiring that in place of which doesn’t really solve this problem, because the same ambiguity exists for every other relative clause that doesn’t use that. Note that Bryan Garner allows either who or that with people; why not allow either which or that with things? Stop and ask yourself how you’re able to understand phrases like The house in which I live or The woman whose hair is brown without using a different word to mark that it’s a restrictive clause. And if the that/which rule really is an aid to understanding, give me some evidence. Show me the results of an eye-tracking study or fMRI or at least a well-designed reading comprehension test geared to show the understanding of relative clauses. But don’t insist on enforcing a language-wide change without some compelling evidence.

The problem with all the justifications for the rule is that they’re post hoc. Someone made a bad analysis of the English system of relative pronouns and proposed a rule to tidy up an imagined problem. Everything since then has been a rationalization to continue to support a flawed rule. Mark Liberman said it well on Language Log yesterday:

This is a canonical case of a self-appointed authority inventing a grammatical theory, observing that elite writers routinely violate the theory, and concluding not that the theory is wrong or incomplete, but that the writers are in error.

Unfortunately, this is often par for the course with prescriptive rules. The rule is taken a priori as correct and authoritative, and all evidence refuting the rule is ignored or waved away so as not to undermine it. Prescriptivism has come a long way in the last century, especially in the last decade or so as corpus tools have made research easy and data more accessible. But there’s still a long way to go.

Update: Mark Liberman has a new post on the that/which rule which includes links to many of the previous Language Log posts on the subject.


Completion Successful

The other day I added some funds to my student card and saw a familiar message: “Your Deposit Completed Successfully!” I’ve seen the similar message “Completion successful” on gas pumps after I finish pumping gas. These messages seem perfectly ordinary at first glance, but the more I thought about them, the more I realized how odd they are. Though they’re intended as concise messages to let me know that everything worked the way it was supposed to, I had to wonder what it meant for a completion to be successful.

The first question is, what is it that’s being completed? Obviously it must be the transaction. But rather than describing the transaction as successful or unsuccessful, it describes the act of completing the transaction as such. So is it possible to separate the notions of completion and success? In my mind, the fact that the transaction is complete means that it was successful, and vice versa. An incomplete transaction would be unsuccessful. After all, if the transaction were incomplete or unsuccessful, it certainly wouldn’t give me a message like “Completion unsuccessful” or, worse yet, “Incompletion successful”.

So saying that the completion is successful is really just another way of saying that the transaction is complete. But as a consumer, I don’t really care that the abstract act of completing the transaction is successful—I just care that the transaction is complete. The message takes what I care about (the completion), nominalizes it, and reports on the status of the nominalization instead.

What I can’t figure out is why the messages would frame the status of the transaction in such an odd way. Perhaps it’s a case of what Geoffrey Pullum calls nerdview, which is when experts frame public language in a way that makes sense to them but that seems odd or nonsensical to laypeople. Perhaps from the perspective of the company processing the credit card transaction, there’s a difference between the completeness of the transaction and its success. I don’t know—I don’t work at a bank, and I don’t care enough to research how credit card transactions are processed. I just want to put some money on my student card so I can buy some donuts from the vending machine.
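As a sketch of the alternative, the message could report on the thing the customer cares about (the transaction) rather than on a nominalized completion. The function below is hypothetical, not drawn from any real payment system:

```python
# Hypothetical sketch: report the status of the transaction itself,
# not the "success" of an abstract act of completing it.

def status_message(transaction_complete: bool) -> str:
    """Return a customer-facing message for a finished attempt."""
    if transaction_complete:
        return "Your deposit is complete."
    return "Your deposit did not go through. Please try again."
```

The point is simply that the subject of the sentence matches what the user actually did, avoiding the nerdview framing.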


Who, That, and the Nature of Bad Rules

A couple of weeks ago the venerable John E. McIntyre blogged about a familiar prescriptive bugbear, the question of that versus who(m). It all started on the blog of the Society for the Promotion of Good Grammar, where a Professor Jacoby, a college English professor, wrote in to share his justification for the rule, which is that you should avoid using that with human referents because it depersonalizes them. He calls this justification “quite profound,” which is probably a good sign that it’s not. Mr. McIntyre, ever the reasonable fellow, tried to inject some facts into the conversation, but apparently to no avail.

What I find most interesting about the whole discussion, however, is not the argument over whether that can be used with human referents, but what the whole argument says about prescriptivism and the way we talk about language and rules. (Indeed, the subject has already been covered very well by Gabe Doyle at Motivated Grammar, who made some interesting discoveries about relative pronoun usage that may indicate some cognitive motivation.) Typically, the person putting forth the rule assumes a priori that the rule is valid, and thereafter it seems that no amount of evidence or argument can change their mind. The entire discussion at the SPOGG blog proceeds without any real attempts to address Mr. McIntyre’s points, and it ends with the SPOGG correspondent who originally kicked off the discussion sullenly taking his football and going home.

James Milroy, an emeritus professor of sociolinguistics at the University of Michigan, once wrote that all rationalizations for prescriptions are post hoc; that is, the rules are taken to be true, and the justifications come afterward and really only serve to give the rule the illusion of validity:

Indeed all prescriptive arguments about correctness that depend on intra-linguistic factors are post-hoc rationalizations. . . . But an intra-linguistic rationalization is not the reason why some usages are believed to be wrong. The reason is that it is simply common sense: everybody knows it, it is part of the culture to know it, and you are an outsider if you think otherwise: you are not a participant in the common culture, and so your views can be dismissed. To this extent, linguists who state that I seen it is not ungrammatical are placing themselves outside the common culture.¹

This may sound like a rather harsh description of prescriptivism, but I think there’s a lot of truth to it—especially the part about linguists unwittingly setting themselves outside of the culture. Linguists try to play the part of the boy who pointed out that the emperor has no clothes, but instead of breaking the illusion they are at best treated as suspect for not playing along. But the point linguists are trying to make isn’t that there’s no such thing as right or wrong in language (though there are some on the fringe who would make such claims)—they’re simply trying to point out that, quite frequently, the justifications are phony and attention to facts and evidence is mostly nonexistent. There are no real axioms or first principles from which prescriptive rules follow—at least, there don’t seem to be any that are consistently applied and followed to their logical conclusions. Instead the canon of prescriptions is a hodgepodge of style and usage opinions that have been passed down and are generally assumed to have the force of law. There are all kinds of unexamined assumptions packaged into prescriptions and their justifications, such as the following from Professor Jacoby:

  • Our society has a tendency to depersonalize people.
  • Depersonalizing people is bad.
  • Using that as a relative pronoun with human referents depersonalizes them.

There are probably more, but that covers the bases. Note that even if we agree that our society depersonalizes people and that this is a bad thing, it’s still quite a leap from this to the claim that that depersonalizes people. But, as Milroy argued, it’s not really about the justification. It’s about having a justification. You can go on until you’re blue in the face about the history of English relative pronoun usage (for instance, that demonstrative pronouns like that were the only option in Old English, and that this has changed several times over the last millennium and a half, and that it’s only recently that people have begun to claim that that with people is wrong) or about usage in other, related languages (such as German, which uses demonstrative pronouns as relative pronouns), but it won’t make any difference; at best, the person arguing for the rule will superficially soften their stance and make some bad analogies to fashion or ethics, saying that while it might not be a rule, it’s still a good guideline, especially for novices. After all, novices need rules that are more black and white—they need to use training wheels for a while before they can ride unaided. Too bad we also never stop to ask whether we’re actually providing novices with training wheels or just putting sticks in their spokes.

Meanwhile, prescriptivists frequently dismiss all evidence for one reason or another: It’s well established in the history of usage? Well, that just shows that people have always made mistakes. It’s even used by greats like Chaucer, Shakespeare, and other literary giants? Hey, even the greats make mistakes. Either that or they mastered the rules and thus know when it’s okay to break them. People today overwhelmingly break the rule? Well, that just shows how dire the situation is. You literally can’t win, because, as Geoffrey Pullum puts it, “nothing is relevant.”

So if most prescriptions are based on unexamined assumptions and post hoc rationalizations, where does that leave things? Do we throw it all out because it’s a charade? That seems rather extreme. There will always be rules, because that’s simply the nature of people. The question is, how do we establish which rules are valid, and how do we teach this to students and practice it as writers and editors? Honestly, I don’t know, but I know that it involves real research and a willingness to critically evaluate not only the rules but also the assumptions that underlie them. We have to stop having a knee-jerk reaction against linguistic methods and allow them to inform our understanding. And linguists need to learn that rules are not inherently bad. Indeed, as John Algeo put it, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense.”²

Notes

1. James Milroy, “Language Ideologies and the Consequences of Standardization,” Journal of Sociolinguistics 5, no. 4 (November 2001), 536.
2. “Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 (December 1969): 276.


Do You Agree That We Ask for Your Consent?

I just finished filing my federal taxes with H&R Block’s free e-filing (which I highly recommend, by the way), and at the end I encountered some rather confusing language. After submitting my return, I came to a page asking if I consented to let H&R Block use my information for marketing purposes. (I always wonder who explicitly consents to such things—who honestly says, “Yes, please try to sell me more of your tax-related products and services!”?) Unfortunately, I can’t get back to the page now, so I’ll have to reconstruct it from memory.

At the top it explained that they were requesting permission to use the information provided in my return to inform me of other stuff that I might be interested in purchasing from them. Then there was a paragraph saying something like “I, Jonathon, hereby consent to blah blah blah.” Next to this paragraph there was a check box. I took this to mean that by checking the box, I was allowing them to use my information. By leaving it unchecked, I was not. Pretty clear and straightforward so far.

Below this paragraph were two buttons, labelled “I Disagree” and “I Agree”, respectively. And here I paused for a little while, trying to figure out what exactly I was potentially agreeing or disagreeing with. Was I agreeing or disagreeing with the entire process of giving or not giving my consent? But the whole process was essentially an implicit question—can we use your information to try to sell you stuff?—and you can’t agree or disagree with a question, because it has no truth value to either confirm or deny. And anyway, if you could disagree with it, you’d just be agreeing to answer the question in the negative by refusing to answer it in the affirmative. I thought that perhaps I was reading it a little too literally, but I asked my wife what she thought about it, and she was similarly perplexed.

I finally figured out what they were really after when I moused over each button to see what appeared in my browser’s status bar. The disagree button had something about withholding consent or whatnot, so I decided that that was the option I wanted. In other words, it appears that the check box was entirely superfluous (though maybe it wasn’t—I don’t actually know what would have happened if I’d checked it and clicked “I Disagree” or left it unchecked and clicked “I Agree”), and the buttons were providing the wrong answers to the implicit question being asked. Of course, “I Agree” could have worked if it had not been answering an implicit question but rather a proposed course of action: “I agree to give my consent.” However, this does not work in the negative, producing the ungrammatical *I disagree to give my consent.
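As a sketch of how the semantics could be repaired, the responses could answer the implicit question directly, making both the checkbox and the ambiguous agree/disagree pair unnecessary. Everything below is hypothetical, not H&R Block’s actual interface logic:

```python
# Hypothetical sketch: label each button as a direct answer to the
# question being asked ("May we use your information?"), so the
# button alone determines the consent decision.

def consent_given(button: str) -> bool:
    """Map a button label directly to a consent decision."""
    labels = {
        "Yes, use my information": True,
        "No, do not use my information": False,
    }
    if button not in labels:
        raise ValueError(f"unrecognized button: {button!r}")
    return labels[button]
```

With labels like these there is nothing to “agree” or “disagree” with; each button simply answers the question.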

This problem wasn’t quite as troublesome as Geoffrey Pullum’s latest run-in with bad interfaces, but the basic problem is the same: the buttons don’t make a lick of sense by themselves because of fundamental breakdowns in semantics, and the user is left with no recourse but to take a stab at it and hope they got it right.


Editing Chicago

Those who have worked with me before may remember that I was once nicknamed “The Index to The Chicago Manual of Style” (or just “The Index” for short) because I always knew where to find everything that anyone needed to look up. I’ve always been a fan of the big orange book. It is so painstakingly thorough, so comprehensive, so detailed—what’s not to like? But I must admit that I was rather disappointed with the new chapter on grammar and usage in the fifteenth edition.

In theory it sounded like a great addition. However, when I received my copy and started flipping through it, I quickly realized that the new chapter was marginally helpful at best and outright incorrect at worst, though most of it settled comfortably on the middle ground of merely useless.

One passage in particular caught my attention and just about made my eyes bug out when I read it. For those of you who would like to follow along at home, it’s section 5.113:

Progressive conjugation and voice. If an inflected form of to be is joined with the verb’s present participle, a progressive conjugation is produced {the ox is pulling the cart}. The progressive conjugation is in active voice because the subject is performing the action, not being acted on.

Anyone who knows their grammar should know that a construction can be both progressive and passive; the two are not mutually exclusive. And anyone who knows how to spot a passive construction should realize that the section illustrates how wrong it is with the last three words, “being acted on.”

You see, while it is not technically a passive, but rather a pseudo-passive,* it shows that you can take an inflected form of be, in this case “is,” followed by a present participle, “being,” followed by a past participle, “acted.” Voilà! You have a passive progressive. I wrote the Chicago staff a nice e-mail saying that maybe I had misunderstood, but it seemed to me that there was a contradiction here. Here’s what they wrote back:

Yes, I think perhaps you are misunderstanding the point here. Section 5.113 seeks to prevent an inaccurate extension of 5.112, which states that “the passive voice is always formed by joining an inflected form of to be (or, in colloquial usage, to get) with the verb’s past participle.” In 5.113, CMS points out that phrases like “the subject is not being acted on,” which might look passive, are actually constructed with a present participle, rather than a past participle, and are active in voice. (Note that the subject—the word “subject”—is performing the action of not being; this is active, not passive.)

Thank you for writing


I wrote back to try to explain myself in more detail and to note that the staff member wasn’t analyzing the verb phrase as a whole. I even cited a web page from Purdue University’s Online Writing Lab. Notice the second example. Here’s their response:

Well, I’ve done my best to defend Mr. Garner’s take on the subject, but I’ll be happy to add your letter to our file of suggested corrections and additions to CMS. If you wish to explore this question further, you might take the matter up with experts at grammar Web sites and help pages. Meanwhile, please write us again if you have a question about Chicago style. –Staff

Apparently the creators of the Purdue University Online Writing Lab don’t count as experts at a grammar Web site. Well, I did my best to explain why Mr. Garner’s take on the subject was wrong. I just hope that someday the section gets fixed.
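The pattern at issue can be illustrated with a rough heuristic: an inflected form of be, then being, then a past participle. This toy regex handles only regular -ed participles plus a couple of irregular ones needed for the examples; it is a sketch of the pattern, not a real parser:

```python
import re

# Rough heuristic for the progressive passive: an inflected form of
# "be", then "being", then a past participle. Only regular "-ed"
# participles and a few irregulars are covered; this is a toy.
PROG_PASSIVE = re.compile(
    r"\b(am|is|are|was|were)\s+being\s+(\w+ed|done|seen|taken)\b",
    re.IGNORECASE,
)

def looks_progressive_passive(clause: str) -> bool:
    """Return True if the clause matches the be + being + participle pattern."""
    return bool(PROG_PASSIVE.search(clause))
```

By this test, “the subject is being acted on” contains exactly the progressive passive that section 5.113 says cannot exist, while “the ox is pulling the cart” does not.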
