Arrant Pedantry


Why Teach Grammar?

Today is National Grammar Day, and I’ve been thinking a lot lately about what grammar is and why we study it. Last week in the Atlantic, Michelle Navarre Cleary wrote that we should do away with diagramming sentences and other explicit grammar instruction. Her argument, in a nutshell, is that grammar instruction not only doesn’t help students write better but actually teaches them to hate writing.

It’s really no surprise—as an editor and a student of language, I’ve run into a lot of people who never learned the difference between a preposition and a participle and are insecure about their writing or their speech. I once had a friend who was apparently afraid to talk to me because she thought I was silently correcting everything she said. When I found out about it, I reassured her that I wasn’t; not only had I never noticed anything wrong with the way she talked, but I don’t worry about correcting people unless they’re paying me for it. But I worried that this was how people saw me: a know-it-all jerk who silently judged everyone else for their errors. I love language, and it saddened me to think that there are people who find it not fascinating but frustrating.

But given the state of grammar instruction in the United States today, it’s not hard to see why a lot of people feel this way. I learned hardly any sentence diagramming until I got to college, and my public school education in grammar effectively stopped in eighth or ninth grade when I learned what a prepositional phrase was. In high school, our grammar work consisted of taking sentences like “He went to the store” and changing them to “Bob went to the store” (because you can’t use he without an antecedent; never mind that such a sentence would not occur in isolation and would surely make sense in context).

Meanwhile, many students are marked down on their papers for supposed grammar mistakes (which are usually matters of spelling, punctuation, or style): don’t use contractions, don’t start a sentence with a conjunction, don’t use any form of the verb be, don’t write in the first person, don’t refer to yourself in the third person, don’t use the passive voice, and on and on. Of course most students are going to come out of writing class feeling insecure. They’re punished for failing to master rules that don’t make sense.

And it doesn’t help that there’s often a disconnect between what the rules say good writing is and what it actually is. Good writing breaks these rules all the time, and following all the rules does little if anything to make bad writing good. We know the usual justifications: students have to master the basics before they can become experts, and once they become experts, they’ll know when it’s okay to break the rules.

But these justifications presuppose that teaching students not to start a sentence with a conjunction or not to use the passive voice has something to do with good writing, when it simply doesn’t. I’ve said before that we don’t consider whether we’re giving students training wheels or just putting sticks in their spokes. Interestingly, Cleary uses a similar argument in her Atlantic piece: “Just as we teach children how to ride bikes by putting them on a bicycle, we need to teach students how to write grammatically by letting them write.”

I’m still not convinced, though, that learning grammar has much at all to do with learning to write. Having a PhD in linguistics doesn’t mean you know how to write well, and being an expert writer doesn’t mean you know anything about syntax and morphology beyond your own native intuition. And focusing on grammar instruction may distract from the more fundamental writing issues of rhetoric and composition. So why worry about grammar at all if it has nothing to do with good writing? Language Log’s Mark Liberman said it well:

We don’t put chemistry into the school curriculum because it will make students better cooks, or even because it might make them better doctors, much less because we need a relatively small number of professional chemists. We believe (I hope) that a basic understanding of atoms and molecules is knowledge that every citizen of the modern world should have.

It may seem like a weak defense in a world that increasingly focuses on marketable skills, but it’s maybe the best justification we have. Language is amazing; no other animal has the capacity for expression that we do. Language is so much more than a grab-bag of peeves and strictures to inflict on freshman writing students; it’s a fundamental part of who we are as a species. Shouldn’t we expect an educated person to know something about it?

So yes, I think we should teach grammar, not because it will help people write better, but simply because it’s interesting and worth knowing about. But we need to recognize that it doesn’t belong in the same class as writing or literature; though it certainly has connections to both, linguistics is a separate field and should be treated as such. And we need to teach grammar not as something to hate or even as something to learn as a means to an end, but as a fascinating and complex system to be discovered and explored for its own sake. In short, we need to teach grammar as something to love.


Relative Pronoun Redux

A couple of weeks ago, Geoff Pullum wrote on Lingua Franca about the that/which rule, which he calls “a rule which will live in infamy”. (For my own previous posts on the subject, see here, here, and here.) He runs through the whole gamut of objections to the rule—that the rule is an invention, that it started as a suggestion and became canonized as grammatical law, that it has “an ugly clutch of exceptions”, that great writers (including E. B. White himself) have long used restrictive which, and that it’s really the commas that distinguish between restrictive and nonrestrictive clauses, as they do with other relative pronouns like who.

It’s a pretty thorough deconstruction of the rule, but in a subsequent Language Log post, he despairs of converting anyone, saying, “You can’t talk people out of their positions on this; they do not want to be confused with facts.” And sure enough, the commenters on his Lingua Franca post proved him right. Perhaps most maddening was this one from someone posting as losemygrip:

Just what the hell is wrong with trying to regularize English and make it a little more consistent? Sounds like a good thing to me. Just because there are inconsistent precedents doesn’t mean we can’t at least try to regularize things. I get so tired of people smugly proclaiming that others are being officious because they want things to make sense.

The desire to fix a problem with the language may seem noble, but in this case the desire stems from a fundamental misunderstanding of the grammar of relative pronouns, and the that/which rule, rather than regularizing the language and making it a little more consistent, actually introduces a rather significant irregularity and inconsistency. The real problem is that few if any grammarians realize that English has two separate systems of relativization, the wh words and that, and that the two work differently.

If we ignore the various prescriptions about relative pronouns, we find that the wh words (the pronouns who/whom/whose and which, the adverbs where, when, why, whither, and whence, and the where + preposition compounds) form a complete system on their own. The pronouns who and which mark a distinction of personhood or animacy—people and sometimes animals or other personified things get who, while everything else gets which. But both pronouns function restrictively and nonrestrictively, and so do most of the other wh relatives. (Why occurs almost exclusively as a restrictive relative adverb after reason.)

With all of these relative pronouns and adverbs, the difference between restrictive and nonrestrictive clauses is marked with commas in writing or a slight pause in speech. There’s no need for a lexical or morphological distinction to show restrictiveness with who or where or any of the others—intonation or punctuation does it all. There are a few irregularities in the system—for instance, which has no genitive form and must use whose or of which, and who declines for case while which does not—but on the whole it’s rather orderly.

That, on the other hand, is a system all by itself, and it’s rather restricted in its range. It only forms restrictive relative clauses, and then only in a narrow range of syntactic constructions. It can’t follow a preposition (the book of which I spoke rather than *the book of that I spoke) or the demonstrative that (they want that which they can’t have rather than *they want that that they can’t have), and it usually doesn’t occur after coordinating conjunctions. But it doesn’t make the same personhood distinction that who and which do, and it functions as a relative adverb sometimes. In short, the distribution of that is a subset of the distribution of the wh words. They are simply two different ways to make relative clauses, one of which is more constrained.
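A few minimal pairs make the split concrete (the examples are mine, but the judgments follow directly from the description above): the book that I read and the book which I read are both well-formed restrictive clauses; the book, which I read in one sitting only works with which, because that can’t be nonrestrictive; the man that I met is fine because that ignores the personhood distinction that separates who from which; and the book of which I spoke has no counterpart *the book of that I spoke, because that can’t follow a preposition.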

Proscribing which in its role as a restrictive relative where it overlaps with that doesn’t make the system more regular—it creates a rather strange hole in the middle of the wh relative paradigm and forces speakers to use a word from a completely different paradigm instead. It actually makes the system irregular. It’s a case of missing the forest for the trees. Grammarians have looked at the distribution of which and that, misunderstood it, and tried to fix it based on their misunderstanding. But if they’d step back and look at the system as a whole, they’d see that the problem is an imagined one. If you think the system doesn’t make sense, the solution isn’t to try to hammer it into something that does make sense; the solution is to figure out what kind of sense it makes. And it makes perfect sense as it is.

I’m sure, as Professor Pullum was, that I’m not going to make a lot of converts. I can practically hear copy editors’ responses: But following the rule doesn’t hurt anything! Some readers will write us angry letters if we don’t follow it! It decreases ambiguity! To the first I say, of course it hurts, in that it has a cost that we blithely ignore: every change a copy editor makes takes time, and that time costs money. Are we adding enough value to the works we edit to recoup that cost? I once saw a proof of a book wherein the proofreader had marked every single restrictive which—and there were four or five per page—to be changed to that. How much time did it take to mark all those whiches over two hundred or more pages (call it a thousand changes)? How much more time would it have taken for the typesetter to enter those corrections and then deal with all the reflowed text? I didn’t want to find out the answer—I stetted every last one of those changes. Furthermore, the rule hurts all those who don’t follow it and are therefore judged as subpar writers at best or idiots at worst, as Pullum discussed in his Lingua Franca post.

To the second response, I’ve said before that I don’t believe we should give so much power to the cranks. Why should they hold veto power for everyone else’s usage? If their displeasure is such a problem, give me some evidence that we should spend so much time and money pleasing them. Show me that the economic cost of not following the rule in print is greater than the cost of following it. But stop saying that we as a society need to cater to this group and assuming that this ends the discussion.

To the last response: No, it really doesn’t. Commas do all the work of disambiguation, as Stan Carey explains. The car which I drive is no more ambiguous than The man who came to dinner. They’re only ambiguous if you have no faith in the writer’s or editor’s ability to punctuate and thus assume that there should be a comma where there isn’t one. But requiring that in place of which doesn’t really solve this problem, because the same ambiguity exists for every other relative clause that doesn’t use that. Note that Bryan Garner allows either who or that with people; why not allow either which or that with things? Stop and ask yourself how you’re able to understand phrases like The house in which I live or The woman whose hair is brown without using a different word to mark that it’s a restrictive clause. And if the that/which rule really is an aid to understanding, give me some evidence. Show me the results of an eye-tracking study or fMRI or at least a well-designed reading comprehension test geared to show the understanding of relative clauses. But don’t insist on enforcing a language-wide change without some compelling evidence.

The problem with all the justifications for the rule is that they’re post hoc. Someone made a bad analysis of the English system of relative pronouns and proposed a rule to tidy up an imagined problem. Everything since then has been a rationalization to continue to support a flawed rule. Mark Liberman said it well on Language Log yesterday:

This is a canonical case of a self-appointed authority inventing a grammatical theory, observing that elite writers routinely violate the theory, and concluding not that the theory is wrong or incomplete, but that the writers are in error.

Unfortunately, this is often par for the course with prescriptive rules. The rule is taken a priori as correct and authoritative, and all evidence refuting the rule is ignored or waved away so as not to undermine it. Prescriptivism has come a long way in the last century, especially in the last decade or so as corpus tools have made research easy and data more accessible. But there’s still a long way to go.

Update: Mark Liberman has a new post on the that/which rule which includes links to many of the previous Language Log posts on the subject.


What Descriptivism Is and Isn’t

A few weeks ago, the New Yorker published what is nominally a review of Henry Hitchings’ book The Language Wars (which I still have not read but have been meaning to) but which was really more of a thinly veiled attack on what its author, Joan Acocella, sees as the moral and intellectual failings of linguistic descriptivism. In what John McIntyre called “a bad week for Joan Acocella”, the whole mess was addressed multiple times by various bloggers and other writers.* I wanted to write about it at the time but was too busy, but then the New Yorker did me a favor by publishing a follow-up, “Inescapably, You’re Judged by Your Language”, which was equally off-base, so I figured that the door was still open.

I suspected from the first paragraph that Acocella’s article was headed for trouble, and the second paragraph quickly confirmed it. For starters, her brief description of the history and nature of English sounds like it’s based more on folklore than fact. A lot of people lived in Great Britain before the Anglo-Saxons arrived, and their linguistic contributions were effectively nil. But that’s relatively small stuff. The real problem is that she doesn’t really understand what descriptivism is, and she doesn’t understand that she doesn’t understand, so she spends the next five pages tilting at windmills.

Acocella says that descriptivists “felt that all we could legitimately do in discussing language was to say what the current practice was.” This statement is far too narrow, and not only because it completely leaves out historical linguistics. As a linguist, I think it’s odd to describe linguistics as merely saying what the current practice is, since it makes it sound as though all linguists study is usage. Do psycholinguists say what the current practice is when they do eye-tracking studies or other psychological experiments? Do phonologists or syntacticians say what the current practice is when they devise abstract systems of ordered rules to describe the phonological or syntactic system of a language? What about experts in translation or first-language acquisition or computational linguistics? Obviously there’s far more to linguistics than simply saying what the current practice is.

But when it does come to describing usage, we linguists love facts and complexity. We’re less interested in declaring what’s correct or incorrect than we are in uncovering all the nitty-gritty details. It is true, though, that many linguists are at least a little antipathetic to prescriptivism, but not without justification. Because we linguists tend to deal in facts, we take a rather dim view of claims about language that don’t appear to be based in fact, and, by extension, of the people who make those claims. And because many prescriptions make assertions that are based in faulty assumptions or spurious facts, some linguists become skeptical or even hostile to the whole enterprise.

But it’s important to note that this hostility is not actually descriptivism. It’s also, in my experience, not nearly as common as a lot of prescriptivists seem to assume. I think most linguists don’t really care about prescriptivism unless they’re dealing with an officious copyeditor on a manuscript. It’s true that some linguists do spend a fair amount of effort attacking prescriptivism in general, but again, this is not actually descriptivism; it’s simply anti-prescriptivism.

Some other linguists (and some prescriptivists) argue for a more empirical basis for prescriptions, but this isn’t actually descriptivism either. As Language Log’s Mark Liberman argued here, it’s just prescribing on the basis of evidence rather than personal taste, intuition, tradition, or peevery.

Of course, all of this is not to say that descriptivists don’t believe in rules, despite what the New Yorker writers think. Even the most anti-prescriptivist linguist still believes in rules, but not necessarily the kind that most people think of. Many of the rules that linguists talk about are rather abstract schematics that bear no resemblance to the rules that prescriptivists talk about. For example, here’s a rather simple one, the rule describing intervocalic alveolar flapping (in a nutshell, the process by which a word like latter comes to sound like ladder) in some dialects of English:

[figure: the rule for intervocalic alveolar flapping]
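In the rewrite notation of introductory phonology, a simplified version of the rule looks something like this (take it as a typical textbook formulation, not a precise analysis of any one dialect):

/t, d/ → [ɾ] / V́ ___ V

In plain English: the alveolar stops /t/ and /d/ are realized as the flap [ɾ] when they fall between a stressed vowel and a following unstressed vowel, which is exactly why latter comes to sound like ladder.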

Rules like these constitute the vast bulk of the language, though they’re largely subconscious and unseen, like a sort of linguistic dark matter. The entire canon of prescriptions (my advisor has identified at least 10,000 distinct prescriptive rules in various handbooks, though only a fraction of these are repeated) seems rather peripheral and inconsequential to most linguists, which is another reason why we get annoyed when prescriptivists insist on their importance or identify standard English with them. Despite what most people think, standard English is not really defined by prescriptive rules, which makes it somewhat disingenuous and ironic for prescriptivists to call us hypocrites for writing in standard English.

If there’s anything disingenuous about linguists’ belief in rules, it’s that we’re not always clear about what kinds of rules we’re talking about. It’s easy to say that we believe in the rules of standard English and good communication and whatnot, but we’re often pretty vague about just what exactly those rules are. But that’s probably a topic for another day.

*A roundup of some of the posts on the recent brouhaha:

“Cheap Shot”, “A Bad Week for Joan Acocella”, “Daddy, Are Prescriptivists Real?”, and “Unmourned: The Queen’s English Society” by John McIntyre

“Rules and Rules” and “A Half Century of Usage Denialism” by Mark Liberman

“Descriptivists as Hypocrites (Again)” by Jan Freeman

“Ignorant Blathering at The New Yorker” by Stephen Dodson, aka Languagehat

“Re: The Language Wars” and “False Fronts in the Language Wars” by Steven Pinker

“The New Yorker versus the Descriptivist Specter” by Ben Zimmer

“Speaking Truth about Power” by Nancy Friedman

“Sator Resartus” by Ben Yagoda

I’m sure there are others that I’ve missed. If you know of any more, feel free to make note of them in the comments.
