Arrant Pedantry

New Shirts, New Old Posts

Good news, everyone! I have a new T-shirt design inspired by that one movie featuring the popular interlocking brick system.

[Image: the "Editing Is Awesome" T-shirt design]

Head over to the Arrant Pedantry Store to take a look.

I’ve also moved a couple of posts over here from a now-defunct site. When I finished grad school a couple of years ago, my wife and I launched a new site for our freelance editing endeavors, and shortly thereafter I got a full-time job. Though the site is gone, I wanted to keep our blog posts (all two of them) online, so you can now find them here.

Why You Need an Editor (by me)

Accepting and Rejecting Changes in Microsoft Word (by my wife, Ruth)

Accepting and Rejecting Changes in Microsoft Word

As many of you probably know, editors usually use Microsoft Word’s Tracking Changes feature to mark their editing changes. The days of writing in red pen all over hard copies of documents are largely gone. What this means is that authors and editors can communicate by email, sending versions of the document back and forth until it’s complete. The author can view the editor’s changes and use a tool in Word to either accept or reject these changes. Whether you already know how to use Tracking Changes or not, I hope you’ll learn something in this tutorial. For the purposes of this post, let’s assume that you are the author and that you have just gotten a marked-up draft back from your editor.

Using Tracking Changes

First the basics. (If you know how to use Tracking Changes, go ahead and skip to the Time-Saving Trick heading.) When you open your document, you will notice colored changes—usually red—throughout your document. Anything added is underlined, anything deleted is struck through, and any formatting changes are noted with a dotted line out into the margin, where a little bubble informs you of the change made. Like this:

[Image: example of tracked changes, with insertions underlined, deletions struck through, and formatting changes noted in margin bubbles]

Let’s say you notice that a word has been added within your first paragraph, and you’re fine with the editor’s addition of that word. You’ll want to accept the change. There are a few ways to do this:

  • You could right-click the red, underlined word, which would bring up the menu shown below, after which you would click Accept Insertion.
    Note: The menu may also say “Accept Deletion” or “Accept Formatting Change,” depending on the type of change the editor made.

[Image: the right-click menu with the Accept Insertion and Reject Insertion options]

  • You could click the word and click the Accept button in the Review tab. Below is an image of the Review tab so you can see where the Accept button is. It is the little white page with a blue check mark on it, to the left of the white page with a red X on it (the Reject button). Both are near the right side of your screen.

[Image: the Review tab, with the Accept and Reject buttons near the right side]

  • You could click at the beginning of your document and click the Next button to jump to the first change in the document and then click the Accept button. Of these three methods, this one is the fastest.

[Image: the Next button in the Review tab]

Once you accept the change, the red underlined word turns black and loses its underline; it becomes a normal part of your document. If you disagreed with the addition of the word for some reason, you would click Reject instead of Accept; the steps are otherwise the same, but the inserted word would simply be removed instead of kept.

Time-Saving Trick

Which brings me to the best trick I have learned for using Tracking Changes. If you have a long document or a document with many changes, clicking Accept hundreds of times can get tedious and time-consuming. Here's what you can do instead.

Start at the beginning of your document. Click the Next button in the Review tab.

[Image: the Next button in the Review tab]

This will show you the first change tracked in your document. If it’s fine, click Next again. Do not click Accept! If it’s not a change you like, either reject the change using one of the three methods listed above or make a change of your own (your change will be tracked in a different color). Keep clicking Next after viewing each change that is acceptable to you. Because you’re jumping quickly through your document, this shouldn’t take long, and because you’re using the Next button, you’re not accidentally missing any changes. When you get to the end of the document using this method, you can feel confident that you have viewed all changes, that any changes that you didn’t like have been changed so that you’re happy with them, and that everything remaining is ready to be accepted.

If you have not made any changes of your own to the document (other than rejecting changes), click the tiny down arrow below the Accept button. Click Accept All Changes in Document (the bottom one in the drop-down menu). You’re done!

[Image: the Accept drop-down menu, with Accept All Changes in Document at the bottom]

If you made any changes to the document of your own that you still want your editor to review, click Show Markup, just to the left of the Accept button. Click Reviewers, which is at the bottom of the list. You should see both your name (or initials) and your editor’s. For the sake of this tutorial, let’s say I’m Ruth and my editor is Julia.

[Image: the Reviewers list under Show Markup]

Uncheck your name (in this case, Ruth). This will hide all the changes you’ve made for the moment.

Then click the tiny down arrow below the Accept button. Click Accept All Changes Shown. If you haven’t hidden any reviewers, this option will be grayed out.

[Image: the Accept drop-down menu, with Accept All Changes Shown]

This will accept all of your editor’s changes but none of yours, which is a good thing, because your editor can now review your changes and do what you just did with his or hers. You can then go back to Show Markup > Reviewers and check your name to show your changes again. Your document should look nice and clean, with only a few of your changes remaining for your editor to review.
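
For the technically curious, the reason Word can filter changes by reviewer is that every tracked insertion and deletion in the document's underlying XML carries its reviewer's name. Here is a minimal Python sketch (my own illustration, not part of Word or of this tutorial's workflow) that counts the tracked changes remaining in a .docx by author; the file name draft.docx is just a stand-in for your own document.

    import zipfile
    from collections import Counter
    import xml.etree.ElementTree as ET

    # WordprocessingML namespace used inside word/document.xml
    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def tracked_changes_by_author(path):
        """Count tracked insertions (w:ins) and deletions (w:del) per reviewer."""
        with zipfile.ZipFile(path) as docx:
            root = ET.fromstring(docx.read("word/document.xml"))
        counts = Counter()
        for kind, label in (("ins", "insertions"), ("del", "deletions")):
            for element in root.iter(W + kind):
                author = element.get(W + "author", "unknown")
                counts[(author, label)] += 1
        return counts

    # Example: see whose tracked changes are still waiting in the draft.
    for (author, label), n in sorted(tracked_changes_by_author("draft.docx").items()):
        print(f"{author}: {n} {label}")

After the steps above, the only name left in the output should be yours, since all of your editor's changes have already been accepted.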

Depending on the length of your document and the number of changes, this trick could save you hours of reviewing time. I hope it makes the editing process easier for you.

My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase and improve my sample size. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones'>Jones's), which>that, towards>toward, moving the word only, and increasing parallelism. These changes were not only numerically the most popular, but they were edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they're relatively recent inventions of usage writers. I've already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.
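
To make the arithmetic behind those rates concrete, here is a trivial Python sketch; it uses only the hypothetical towards tally from the example above, not my actual thesis counts.

    def edit_rate(times_changed, times_appeared):
        """Proportion of opportunities where editors actually made the change."""
        return times_changed / times_appeared if times_appeared else 0.0

    # The example above: towards appears ten times and is changed to toward eight times.
    print(f"towards > toward: {edit_rate(8, 10):.0%}")  # towards > toward: 80%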

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which are nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see them tearing down the rules that define it, setting it apart from educated but unedited usage, like that/which and toward/towards. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.

Why You Need an Editor

Every writer needs a good editor. It doesn’t matter how good you are, how many years of experience you have, or how meticulous you are; you simply can’t see all of your own mistakes. We all have a blind spot for our own typos and for the weaknesses in our arguments, because we know how the text should read.

Last year I started writing for Copyediting newsletter, and I’ve really appreciated having professional editors review my writing before it’s published. I’d gotten so used to blogging that it was a bit of a shock at first to see things come back covered with marks, but I quickly realized the value in having another pair or two of eyes to look things over. A good editor catches not only typos and other infelicities, but structural problems like poor transitions, unclear arguments, and weak conclusions. My pieces for the newsletter have been stronger for having been edited.

You may already be a great writer. You may even know your style manual of choice forwards and backwards. Or you could be someone with great ideas who needs some extra help translating those ideas into words. Whatever your level of expertise, you can still benefit from having a professional edit your writing. That’s what we’re here for.

Now at Visual Thesaurus

In case you haven’t seen it already, I have a new post up at Visual Thesaurus. It explores the history of toward and towards and specifically looks at copy editors’ role in driving towards out of use in edited American English. It’s only available to subscribers, but the subscription is only $19.95 a year. You get access to a lot of other great features and articles, including more to come from me.

I’ll keep writing here, of course, and I’ll try to get back to a more regular posting schedule now that my thesis is finished. Stay tuned.

Take My Commas—Please

Most editors are probably familiar with the rule that commas should be used to set off nonrestrictive appositives and that no commas should be used around restrictive appositives. (In Chicago 16, it’s under 6.23.) A restrictive appositive specifies which of a group of possible referents you’re talking about, and it’s thus integral to the sentence. A nonrestrictive appositive simply provides extra information about the thing you’re talking about. Thus you would write My wife, Ruth, (because I only have one wife) but My cousin Steve (because I have multiple cousins, and one is named Steve). The first tells you that my wife’s name is Ruth, and the latter tells you which of my cousins I’m talking about.

Most editors are probably also familiar with the claim that if you leave out the commas after a phrase like “my wife”, the implication is that you’re a polygamist. In one of my editing classes, we would take a few minutes at the start of each class to share bloopers with the rest of the class. One time my professor shared the dedication of a book, which read something like “To my wife Cindy”. Obviously the lack of a comma implies that he must be a polygamist! Isn’t that funny? Everyone had a good laugh.

Except me, that is. I was vaguely annoyed by this alleged blooper, which required a willful misreading of the dedication. There was no real ambiguity here—only an imagined one. If the author had actually meant to imply that he was a polygamist, he would have written something like “To my third wife, Cindy”, though of course he could still write this if he were a serial monogamist.

Usually I find this insistence on commas a little exasperating, but in one instance the other day, the commas were actually wrong. A proofreader had corrected a caption which read “his wife Arete” to “his wife, Arete,” which probably seemed like a safe change to make but which was wrong in this instance—the man referred to in the caption had three wives concurrently. I stetted the change, but it got me thinking about fact-checking and the extent to which it’s an editor’s job to split hairs.

This issue came up repeatedly during a project I worked on last year. It was a large book with a great deal of biographical information in it, and I frequently came across phrases like “Hans’s daughter Ingrid”. Did Hans have more than one daughter, or was she his only daughter? Should it be “Hans’s daughter, Ingrid,” or “Hans’s daughter Ingrid”? And how was I to know?

Pretty quickly I realized just how ridiculous the whole endeavor was. I had neither the time nor the resources to look up World War II–era German citizens in a genealogical database, and I wasn’t about to bombard the author with dozens of requests for him to track down the information either. Ultimately, it was all pretty irrelevant. It simply made no difference to the reader. I decided we were safe just leaving the commas out of such constructions.

And, honestly, I think it’s even safer to leave the commas out when referring to one’s spouse. Polygamy is such a rarity in our culture that it’s usually highlighted in the text, with wording such as “John and Janet, one of his three wives”. Assuming that “my wife Ruth” implies that I have more than one wife is a deliberate flouting of the cooperative principle of communication. This insistence on a narrow, prescribed meaning over the obvious, intended meaning is a problem with many prescriptive rules, but, once again, that’s a topic for another day.

Please note, however, that I’m not saying that anything goes or that you can punctuate however you want as long as the meaning’s clear. I’m simply saying that in cases where it’s a safe assumption that there’s just one possible referent, or when it doesn’t really matter, the commas can sometimes seem a little fussy and superfluous.

Grammar and Morality

Lately there’s been an article going around titled “The Real George Zimmerman’s Really Bad Grammar”, by Alexander Nazaryan. I’m a week late getting around to blogging about it, but at the risk of wading into a controversial topic with a possibly tasteless post, I wanted to take a closer look at some of the arguments and analyses made in the article.

The first thing that struck me about the article is the explicit moralization of grammar. At the end of the first paragraph, the author, a former English teacher, says that when he forced students to write notes of apology, he explained to them that “good grammar equaled a clean conscience.” (This guy must’ve been a joy to have as a teacher.)

But then the equivocation begins. Although Nazaryan admits that Zimmerman “has bigger concerns than the independent clause”, he nevertheless insists that some of Zimmerman’s errors “are both glaring and inexcusable”. Evidently, quitting one’s job and going into hiding for one’s own safety is no excuse for any degree of grammatical laxness.

Nazaryan’s grammatical analysis leaves something to be desired, too. He takes a quote from Zimmerman’s website—“The only thing necessary for the triumph of evil, is that good men do nothing”—and says, “Why does Zimmerman insert an absolutely needless comma between subject (granted, a complex one) and verb? I can’t speculate on that, but he seems to have treated ‘is that good men do nothing’ as a nonrestrictive clause that adds extra information to the sentence.” This sort of comma, inserted between a complex subject and its verb, used to be completely standard, but it fell out of use in edited writing in the last century or two. It’s still frequently found in unedited writing, however.

I’m not expecting Nazaryan to know the history of English punctuation conventions, but he should at least recognize that this is a thing that a lot of people do, and it’s not for the reason that he suspects. After all, in what sense could the entire predicate of a sentence be a “nonrestrictive clause that adds extra information”? He’s actually got it backwards, in a sense: it’s the complement clause of the subject—“necessary for the triumph of evil”—that’s being set off, albeit with a single, unpaired comma. (And I can’t resist poking fun at the fact that he says “I can’t speculate on that” and then immediately proceeds to speculate on it.)

Nazaryan does make some valid points—that Zimmerman may be overreaching in his prose at times, using words and constructions he hasn’t really mastered—but the whole exercise makes me uncomfortable. (Yes, I have mixed feelings about writing this post myself.) Picking grammatical nits when one man has been killed and another charged with second-degree murder is distasteful enough; equating good grammar with morality makes me squirm.

This is not to say that there is no value in editing, of course. This recent study found that editing contributes to the readers’ perception of the value and professionalism of a story. I did a small study of my own for a class a few years ago and found the same thing. A good edit improves the professional appearance of a story, which may make readers more likely to trust or believe it. However, this does not mean that readers will necessarily see an unedited story as a mark of guilt.

Nazaryan makes his thesis most explicit near the end, when he says, “The more I think about this, the more puzzling it becomes. Zimmerman is accused of being a careless vigilante who played fast and loose with the law; why would he want to give credence to that argument by playing fast and loose with the most basic laws of grammar?” I’m sorry, but who in their right minds—who other than Alexander Nazaryan, that is—believes that petty grammatical violations can be taken as a sign of lawless vigilantism?

But wait—there’s still an out. According to Nazaryan, all Zimmerman needs is a good copyeditor. Of course, the man has quit his job and is begging for donations to pay for his legal defense and living expenses, but I guess that’s irrelevant. Obviously he should’ve gotten his priorities straight and paid for a copyeditor first to obtain grammatical—and thereby moral—absolution.

Nazaryan squeezes in one last point at the end, and it’s maybe even more ridiculous than his identification of clean grammar with a clean conscience: “One of the aims of democracy is that citizens are able to articulate their rights in regard to other citizens and the state itself; when one is unable to do so, there is a sense of collective failure—at least for this former teacher.” You see, bad grammar doesn’t just indicate an unclean conscience; it threatens the very foundations of democracy.

I’m feeling a sense of failure too, but for entirely different reasons than Alexander Nazaryan.

More on That

As I said in my last post, I don’t think the distribution of that and which is adequately explained by the restrictive/nonrestrictive distinction. It’s true that nearly all thats are restrictive (with a few rare exceptions), but it’s not true that all restrictive relative pronouns are thats and that all whiches are nonrestrictive, even when you follow the traditional rule. In some cases that is strictly forbidden, and in other cases it is disfavored to varying degrees. Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in your analysis.

In researching the topic for this post, I’ve learned a couple of things: (1) I don’t know syntax as well as I should, and (2) the behavior of relatives in English, particularly that, is far more complex than most editors or pop grammarians realize. First of all, there’s apparently been a century-long argument over whether that is even a relative pronoun or actually some sort of relativizing conjunction or particle. (Some linguists seem to prefer the latter, but I won’t wade too deep into that debate.) Previous studies have looked at multiple factors to explain the variation in relativizers, including the animacy of the referent, the distance between the pronoun and its referent, the semantic role of the relative clause, and the syntactic role of the referent.

It’s often noted that that can’t follow a preposition and that it doesn’t have a genitive form of its own (it must use either whose or of which), but no usage guide I’ve seen ever makes mention of the fact that this pattern follows the accessibility hierarchy. That is, in a cross-linguistic analysis, linguists have found an order to the way in which relative clauses are formed. Some languages can only relativize subjects, others can do subjects and verbal objects, yet others can do subjects, verbal objects, and oblique objects (like the objects of prepositions), and so on. For any allowable position on the hierarchy, all positions to the left are also allowable. The hierarchy goes something like this:

subject ≥ direct object ≥ indirect object ≥ object of stranded preposition ≥ object of fronted preposition ≥ possessor noun phrase ≥ object of comparative particle

What is interesting is that that and the wh- relatives, who and which, occupy overlapping but different portions of the hierarchy. Who and which can relativize anything from subjects to possessors and possibly objects of comparative particles, though whose as the genitive form of which seems a little odd to some, and both sound odd if not outright ungrammatical with comparatives, as in The man than who I’m taller. But that can’t relativize objects of fronted prepositions or anything further down the scale.
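
One way to see the implicational nature of the hierarchy is to model it as an ordered list: anything that can relativize a given position can also relativize every position to its left. The toy Python sketch below is my own illustration of the claims in the preceding paragraphs, not an analysis drawn from the sources listed at the end of this post.

    # The accessibility hierarchy from the post, most to least accessible.
    HIERARCHY = [
        "subject",
        "direct object",
        "indirect object",
        "object of stranded preposition",
        "object of fronted preposition",
        "possessor noun phrase",
        "object of comparative particle",
    ]

    def reachable(lowest_position):
        """Everything up to and including the lowest position a relativizer can reach."""
        return HIERARCHY[:HIERARCHY.index(lowest_position) + 1]

    # Per the discussion above, "that" stops at objects of stranded prepositions,
    # while "which" and "who" extend at least to possessor noun phrases.
    print(reachable("object of stranded preposition"))
    print(reachable("possessor noun phrase"))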

Strangely, though, there are things that that can do that who and which can’t. That can sometimes function as a sort of relative adverb, equivalent to the relative adverbs why, where, or when or to which with a preposition. That is, you can say The day that we met, The day when we met, or The day on which we met, but not The day which we met. And which can relativize whole clauses (though some sticklers consider this ungrammatical), while that cannot, as in This author uses restrictive “which,” which bothers me a lot.

So what explains the differences between that and which or who? Well, as I mentioned above, some linguists consider that not a pronoun but a complementizer or conjunction (perhaps a highly pronominal one), making it more akin to the complementizer that, as in He said that relativizers were confusing. And some linguists have also proposed different syntactic structures for restrictive and nonrestrictive clauses, which could account for the limitation of that to restrictive clauses. If that is not a true pronoun but a complementizer, then that could account for its strange distribution. It can’t appear in nonrestrictive clauses, because they require a full pronoun like which or who, and it can’t appear after prepositions, because those constructions similarly require a pronoun. But it can function as a relative adverb, which a regular relative pronoun can’t do.

As I argued in my previous post, it seems that which and that do not occupy separate parts of a single paradigm but are part of two different paradigms that overlap. The differences between them can be characterized in a few different ways, but for some reason, grammarians have seized on the restrictive/nonrestrictive distinction and have written off the rest as idiosyncratic exceptions to the rule or as common errors (when they’ve addressed those points at all).

The proposal to disallow which in restrictive relative clauses, except in the cases where that is ungrammatical—sometimes called Fowler’s rule, though that’s not entirely accurate—is based on the rather trivial observation that all thats are restrictive and that all nonrestrictives are which. It then assumes that the converse is true (or should be) and tries to force all restrictives to be that and all whiches to be nonrestrictive (except for all those pesky exceptions, of course).

Garner calls Fowler’s rule “nothing short of brilliant,”[1] but I must disagree. It’s based on a rather facile analysis followed by some terrible logical leaps. And insisting on following a rule based on bad linguistic analysis is not only not helpful to the reader, it’s a waste of editors’ time. As my last post shows, editors have obviously worked very hard to put the rule into practice, but this is not evidence of its utility, let alone its brilliance. But a linguistic analysis that could account for all of the various differences between the two systems of relativization in English? Now that just might be brilliant.

Sources

Herbert F. W. Stahlke, “Which That,” Language 52, no. 3 (Sept. 1976): 584–610
Johan Van Der Auwera, “Relative That: A Centennial Dispute,” Journal of Linguistics 21, no. 1 (March 1985): 149–79
Gregory R. Guy and Robert Bayley, “On the Choice of Relative Pronouns in English,” American Speech 70, no. 2 (Summer 1995): 148–62
Nigel Fabb, “The Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 26, no. 1 (March 1990): 57–77
Robert D. Borsley, “More on the Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 28, no. 1 (March 1992): 139–48

[1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”

Which Hunting

I meant to blog about this several weeks ago, when the topic came up in my corpus linguistics class with Mark Davies, but I didn’t have time then. And I know the that/which distinction has been done to death, but I thought this was an interesting look at the issue that I hadn’t seen before.

For one of our projects in the corpus class, we were instructed to choose a prescriptive rule and then examine it using corpus data, determining whether the rule was followed in actual usage and whether it varied over time, among genres, or between the American and British dialects. One of my classmates (and former coworkers) chose the that/which rule for her project, and I found the results enlightening.

She searched for the sequences “[noun] that [verb]” and “[noun] which [verb],” which aren’t perfect—they obviously won’t find every relative clause, and they’ll pull in a few non-relatives—but the results serve as a rough measurement of their relative frequencies. What she found is that before about the 1920s, the two were used with nearly equal frequency. That is, the distinction did not exist. After that, though, which takes a dive and that surges. The following chart shows the trends according to Mark Davies’ Corpus of Historical American English and his Google Books N-grams interface.

It’s interesting that although the two corpora show the same trend, Google Books lags a few decades behind. I think this is a result of the different style guides used in different genres. Perhaps style guides in certain genres picked up the rule first, from whence it disseminated to other style guides. And when we break out the genres in COHA, we see that newspapers and magazines lead the plunge, with fiction and nonfiction books following a few decades later, though use of which is apparently in a general decline the entire time. (NB: The data from the first decade or two in COHA often seems wonky; I think the word counts are low enough in those years that strange things can skew the numbers.)

[Chart: proportion of "which" by genre in COHA]
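
If you want to try a rough version of that search on your own text, here is a small Python sketch using NLTK’s part-of-speech tagger. It is my own approximation of the “[noun] that [verb]” versus “[noun] which [verb]” queries described above, not the actual COHA or Google Books queries, and it assumes the NLTK tokenizer and tagger data have already been downloaded.

    import nltk

    def relativizer_counts(text):
        """Roughly count noun + that/which + verb sequences as a crude proxy
        for restrictive relative clauses."""
        counts = {"that": 0, "which": 0}
        for sentence in nltk.sent_tokenize(text):
            tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
            for (_, tag1), (word2, _), (_, tag3) in zip(tagged, tagged[1:], tagged[2:]):
                if tag1.startswith("NN") and word2.lower() in counts and tag3.startswith("VB"):
                    counts[word2.lower()] += 1
        return counts

    sample = ("The rule that bothers editors most is fairly recent. "
              "The variation which existed before the 1920s has largely disappeared.")
    counts = relativizer_counts(sample)
    total = sum(counts.values()) or 1
    print(counts, f"proportion of which: {counts['which'] / total:.0%}")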

The strange thing about this rule is that so many people not only take it so seriously but slander those who disagree, as I mentioned in this post. Bryan Garner, for instance, solemnly declares—without any evidence at all—that those who don’t follow the rule “probably don’t write very well,” while those who follow it “just might.”[1] (This elicited an enormous eye roll from me.) But Garner later tacitly acknowledges that the rule is an invention—not by the Fowler brothers, as some claim, but by earlier grammarians. If the rule did not exist two hundred years ago and was not consistently enforced until the 1920s or later, how did anyone before that time ever manage to write well?

I do say enforced, because most writers do not consistently follow it. In my research for my thesis, I’ve found that changing “which” to “that” is the single most frequent usage change that copy editors make. If so many writers either don’t know the rule or can’t apply it consistently, it stands to reason that most readers don’t know it either and thus won’t notice the difference. Some editors and grammarians might take this as a challenge to better educate the populace on the alleged usefulness of the rule, but I take it as evidence that it’s just not useful. And anyway, as Stan Carey already noted, it’s the commas that do the real work here, not the relative pronouns. (If you’ve already read his post, you might want to go and check it out again. He’s added some updates and new links to the end.)

And as I noted in my previous post on relatives, we don’t observe a restrictive/nonrestrictive distinction with who(m) or, for that matter, with relative adverbs like where or when, so at the least we can say it’s not a very robust distinction in the language and certainly not necessary for comprehension. As with so many other useful distinctions, its usefulness is taken to be self-evident, but the evidence of its usefulness is less than compelling. It seems more likely that it’s one of those random things that sometimes gets grammaticalized, like gender or evidentiality. (Though it’s not fully grammaticalized, because it’s not obligatory and is not a part of the natural grammar of the language, but is a rule that has to be learned later.)

Even if we just look at that and which, we find a lot of exceptions to the rule. You can’t use that as the object of a preposition, even when it’s restrictive. You can’t use it after a demonstrative that, as in “Is there a clear distinction between that which comes naturally and that which is forced, even when what’s forced looks like the real thing?” (I saw this example in COCA and couldn’t resist.) And Garner even notes “the exceptional which”, which is often used restrictively when the relative clause is somewhat removed from its noun.[2] And furthermore, restrictive which is frequently used in conjoined relative clauses, such as “Eisner still has a huge chunk of stock options—about 8.7 million shares’ worth—that he can’t exercise yet and which still presumably increase in value over the next decade,” to borrow an example from Garner.[3]

Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in its formulation. I’ll explain what I think is going on with that and which in a later post.

[1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”
[2] S.v. “Remote Relatives. B. The Exceptional which.”
[3] S.v. “which. D. And which; but which.”

The Value of Prescriptivism

Last week I asked rather skeptically whether prescriptivism had moral worth. John McIntyre was interested by my question and musing in the last paragraph, and he took up the question (quite admirably, as always) and responded with his own thoughts on prescriptivism. What I see in his post is neither a coherent principle nor an innately moral argument, as Hart argued, but rather a set of sometimes-contradictory principles mixed with personal taste—and I think that’s okay.

Even Hart’s coherent principle is far from coherent when you break it down. The “clarity, precision, subtlety, nuance, and poetic richness” that he touts are really a bundle of conflicting goals. Clear wording may come at the expense of precision, subtlety, and nuance. Subtlety may not be very clear or precise. And so on. And even if these are all worthy goals, there may be many more that are missing.

McIntyre notes several more goals for practical prescriptivists like editors, including effectiveness, respect for an author’s voice, consistency with a set house style, and consideration of reader reactions, which is a quagmire in its own right. As McIntyre notes, some readers may have fits when they see sentence-disjunct “hopefully”, while other readers may find workarounds like “it is to be hoped that” to be stilted.

Of course, any appeal to the preferences of the reader (which is, in a way, more of a construct than a real entity) still requires decision making: which readers are you appealing to? Many of those who give usage advice seem to defer to the sticklers and pedants, even when it can be shown that they’re pretty clearly wrong or at least holding to outdated and somewhat silly notions. Grammar Girl, for example, guides readers through the arguments for and against “hopefully”, repeatedly saying that she hopes it becomes acceptable someday (note how carefully she avoids using “hopefully” herself, even though she claims to support it) but ultimately shies away from the usage, saying that you should avoid it for now because it’s not acceptable yet. (I’ll write about the strange reasoning presented here some other time.)

But whether or not you give in to the pedants and cranks who write angry letters to lecture you on split infinitives and stranded prepositions, it’s still clear that there’s value in considering the reader’s wishes while writing and editing. The author wants to communicate something to an audience; the audience presumably wants to receive that communication. It’s in both parties’ best interests if that communication goes off without a hitch, which is where prescriptivism can come in.

As McIntyre already said, this doesn’t give you an instant answer to every question, but it can give you some methods of gauging roughly how acceptable certain words or constructions are. Ben Yagoda provides his own “somewhat arbitrary metric” for deciding when to fight for a traditional meaning and when to let it go. But the key word here is “arbitrary”; there is no absolute truth in usage, no clear, authoritative source to which you can appeal to solve these questions.

Nevertheless, I believe the prescriptive motivation—the desire to make our language as good as it can be—is, at its core, a healthy one. It leads us to strive for clear and effective communication. It leads us to seek out good language to use as a model. And it slows language change and helps to ensure that writing will be more understandable to audiences that are removed spatially and temporally. But when you try to turn this into a coherent principle to instruct writers on individual points of usage, like transpire or aggravate or enormity, well, then you start running into trouble, because that approach favors fiat over reason and evidence. But I think that an interest in clear and effective language, tempered with a healthy dose of facts and an acknowledgement that the real truth is often messy, can be a boon to all involved.