Arrant Pedantry


My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase and improve my sample size. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.
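For the curious, the tabulation itself was simple bookkeeping: for each usage item, count how many times the targeted form appeared and how many of those instances an editor changed, then divide to get an edit rate. Here’s a minimal sketch of that arithmetic in Python, with made-up tallies standing in for my actual data:

```python
# A minimal sketch of the edit-rate bookkeeping, with hypothetical tallies
# standing in for the real data. "occurrences" is how many times a targeted
# form appeared in the manuscripts; "changed" is how many of those
# instances an editor altered.
tallies = {
    "towards > toward": {"occurrences": 10, "changed": 8},
    "which > that": {"occurrences": 25, "changed": 19},
    "Jones' > Jones's": {"occurrences": 5, "changed": 4},
}

for item, t in tallies.items():
    rate = t["changed"] / t["occurrences"]
    print(f"{item}: {t['changed']}/{t['occurrences']} = {rate:.0%}")
```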

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones’>Jones’s), which>that, towards>toward, moving only, and increasing parallelism. These changes were not only numerically the most popular, but they were edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they’re relatively recent inventions of usage writers. I’ve already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which are nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see them tearing down the rules that define it, setting it apart from educated but unedited usage, like that/which and toward/towards. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.


Now at Visual Thesaurus

In case you haven’t seen it already, I have a new post up at Visual Thesaurus. It explores the history of toward and towards and specifically looks at copy editors’ role in driving towards out of use in edited American English. It’s only available to subscribers, but the subscription is only $19.95 a year. You get access to a lot of other great features and articles, including more to come from me.

I’ll keep writing here, of course, and I’ll try to get back to a more regular posting schedule now that my thesis is finished. Stay tuned.


Take My Commas—Please

Most editors are probably familiar with the rule that commas should be used to set off nonrestrictive appositives and that no commas should be used around restrictive appositives. (In Chicago 16, it’s under 6.23.) A restrictive appositive specifies which of a group of possible referents you’re talking about, and it’s thus integral to the sentence. A nonrestrictive appositive simply provides extra information about the thing you’re talking about. Thus you would write My wife, Ruth, (because I only have one wife) but My cousin Steve (because I have multiple cousins, and one is named Steve). The first tells you that my wife’s name is Ruth, and the latter tells you which of my cousins I’m talking about.

Most editors are probably also familiar with the claim that if you leave out the comma after a phrase like “my wife”, the implication is that you’re a polygamist. In one of my editing classes, we would take a few minutes at the start of each class to share bloopers with the rest of the class. One time my professor shared the dedication of a book, which read something like “To my wife Cindy”. Obviously the lack of a comma implies that he must be a polygamist! Isn’t that funny? Everyone had a good laugh.

Except me, that is. I was vaguely annoyed by this alleged blooper, which required a willful misreading of the dedication. There was no real ambiguity here—only an imagined one. If the author had actually meant to imply that he was a polygamist, he would have written something like “To my third wife, Cindy”, though of course he could still write this if he were a serial monogamist.

Usually I find this insistence on commas a little exasperating, but in one instance the other day, the commas were actually wrong. A proofreader had corrected a caption which read “his wife Arete” to “his wife, Arete,” which probably seemed like a safe change to make but which was wrong in this instance—the man referred to in the caption had three wives concurrently. I stetted the change, but it got me thinking about fact-checking and the extent to which it’s an editor’s job to split hairs.

This issue came up repeatedly during a project I worked on last year. It was a large book with a great deal of biographical information in it, and I frequently came across phrases like “Hans’s daughter Ingrid”. Did Hans have more than one daughter, or was she his only daughter? Should it be “Hans’s daughter, Ingrid,” or “Hans’s daughter Ingrid”? And how was I to know?

Pretty quickly I realized just how ridiculous the whole endeavor was. I had neither the time nor the resources to look up World War II–era German citizens in a genealogical database, and I wasn’t about to bombard the author with dozens of requests for him to track down the information either. Ultimately, it was all pretty irrelevant. It simply made no difference to the reader. I decided we were safe just leaving the commas out of such constructions.

And, honestly, I think it’s even safer to leave the commas out when referring to one’s spouse. Polygamy is such a rarity in our culture that it’s usually highlighted in the text, with wording such as “John and Janet, one of his three wives”. Assuming that “my wife Ruth” implies that I have more than one wife is a deliberate flouting of the cooperative principle of communication. This insistence on a narrow, prescribed meaning over the obvious, intended meaning is a problem with many prescriptive rules, but, once again, that’s a topic for another day.

Please note, however, that I’m not saying that anything goes or that you can punctuate however you want as long as the meaning’s clear. But in cases where it’s a safe assumption that there’s just one possible referent, or when it doesn’t really matter, the commas can sometimes seem a little fussy and superfluous.


Grammar and Morality

Lately there’s been an article going around titled “The Real George Zimmerman’s Really Bad Grammar”, by Alexander Nazaryan. I’m a week late getting around to blogging about it, but at the risk of wading into a controversial topic with a possibly tasteless post, I wanted to take a closer look at some of the arguments and analyses made in the article.

The first thing that struck me about the article is the explicit moralization of grammar. At the end of the first paragraph, the author, a former English teacher, says that when he forced students to write notes of apology, he explained to them that “good grammar equaled a clean conscience.” (This guy must’ve been a joy to have as a teacher.)

But then the equivocation begins. Although Nazaryan admits that Zimmerman “has bigger concerns than the independent clause”, he nevertheless insists that some of Zimmerman’s errors “are both glaring and inexcusable”. Evidently, quitting one’s job and going into hiding for one’s own safety is no excuse for any degree of grammatical laxness.

Nazaryan’s grammatical analysis leaves something to be desired, too. He takes a quote from Zimmerman’s website—“The only thing necessary for the triumph of evil, is that good men do nothing”—and says, “Why does Zimmerman insert an absolutely needless comma between subject (granted, a complex one) and verb? I can’t speculate on that, but he seems to have treated ‘is that good men do nothing’ as a nonrestrictive clause that adds extra information to the sentence.” This sort of comma, inserted between a complex subject and its verb, used to be completely standard, but it fell out of use in edited writing in the last century or two. It’s still frequently found in unedited writing, however.

I’m not expecting Nazaryan to know the history of English punctuation conventions, but he should at least recognize that this is a thing that a lot of people do, and it’s not for the reason that he suspects. After all, in what sense could the entire predicate of a sentence be a “nonrestrictive clause that adds extra information”? He’s actually got it backwards, in a sense: it’s the complement of the subject—“necessary for the triumph of evil”—that’s being set off, albeit with a single, unpaired comma. (And I can’t resist poking fun at the fact that he says “I can’t speculate on that” and then immediately proceeds to speculate on it.)

Nazaryan does make some valid points—that Zimmerman may be overreaching in his prose at times, using words and constructions he hasn’t really mastered—but the whole exercise makes me uncomfortable. (Yes, I have mixed feelings about writing this post myself.) Picking grammatical nits when one man has been killed and another charged with second-degree murder is distasteful enough; equating good grammar with morality makes me squirm.

This is not to say that there is no value in editing, of course. This recent study found that editing contributes to the readers’ perception of the value and professionalism of a story. I did a small study of my own for a class a few years ago and found the same thing. A good edit improves the professional appearance of a story, which may make readers more likely to trust or believe it. However, this does not mean that readers will necessarily see an unedited story as a mark of guilt.

Nazaryan makes his thesis most explicit near the end, when he says, “The more I think about this, the more puzzling it becomes. Zimmerman is accused of being a careless vigilante who played fast and loose with the law; why would he want to give credence to that argument by playing fast and loose with the most basic laws of grammar?” I’m sorry, but who in their right minds—who other than Alexander Nazaryan, that is—believes that petty grammatical violations can be taken as a sign of lawless vigilantism?

But wait—there’s still an out. According to Nazaryan, all Zimmerman needs is a good copyeditor. Of course, the man has quit his job and is begging for donations to pay for his legal defense and living expenses, but I guess that’s irrelevant. Obviously he should’ve gotten his priorities straight and paid for a copyeditor first to obtain grammatical—and thereby moral—absolution.

Nazaryan squeezes in one last point at the end, and it’s maybe even more ridiculous than his identification of clean grammar with a clean conscience: “One of the aims of democracy is that citizens are able to articulate their rights in regard to other citizens and the state itself; when one is unable to do so, there is a sense of collective failure—at least for this former teacher.” You see, bad grammar doesn’t just indicate an unclean conscience; it threatens the very foundations of democracy.

I’m feeling a sense of failure too, but for entirely different reasons than Alexander Nazaryan.


More on That

As I said in my last post, I don’t think the distribution of that and which is adequately explained by the restrictive/nonrestrictive distinction. It’s true that nearly all thats are restrictive (with a few rare exceptions), but it’s not true that all restrictive relative pronouns are thats and that all whiches are nonrestrictive, even when you follow the traditional rule. In some cases that is strictly forbidden, and in other cases it is disfavored to varying degrees. Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in your analysis.

In researching the topic for this post, I’ve learned a couple of things: (1) I don’t know syntax as well as I should, and (2) the behavior of relatives in English, particularly that, is far more complex than most editors or pop grammarians realize. First of all, there’s apparently been a century-long argument over whether that is even a relative pronoun or actually some sort of relativizing conjunction or particle. (Some linguists seem to prefer the latter, but I won’t wade too deep into that debate.) Previous studies have looked at multiple factors to explain the variation in relativizers, including the animacy of the referent, the distance between the pronoun and its referent, the semantic role of the relative clause, and the syntactic role of the referent.

It’s often noted that that can’t follow a preposition and that it doesn’t have a genitive form of its own (it must use either whose or of which), but no usage guide I’ve seen ever makes mention of the fact that this pattern follows the accessibility hierarchy. That is, in a cross-linguistic analysis, linguists have found an order to the way in which relative clauses are formed. Some languages can only relativize subjects, others can do subjects and verbal objects, yet others can do subjects, verbal objects, and oblique objects (like the objects of prepositions), and so on. For any allowable position on the hierarchy, all positions to the left are also allowable. The hierarchy goes something like this:

subject ≥ direct object ≥ indirect object ≥ object of stranded preposition ≥ object of fronted preposition ≥ possessor noun phrase ≥ object of comparative particle

What is interesting is that that and the wh- relatives, who and which, occupy overlapping but different portions of the hierarchy. Who and which can relativize anything from subjects to possessors and possibly objects of comparative particles, though whose as the genitive form of which seems a little odd to some, and both sound odd if not outright ungrammatical with comparatives, as in The man than who I’m taller. But that can’t relativize objects of fronted prepositions or anything further down the scale.
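To make that overlap concrete, here’s a small sketch that encodes the hierarchy and an approximate rightmost position for each relativizer. The cutoffs are my own simplification of the generalizations above (and they ignore the relative-adverb and clausal uses discussed next), not a settled analysis:

```python
# The noun phrase accessibility hierarchy, left to right. The implicational
# generalization: if a relativizer can occupy a position, it can also occupy
# every position to its left.
HIERARCHY = [
    "subject",
    "direct object",
    "indirect object",
    "object of stranded preposition",
    "object of fronted preposition",
    "possessor noun phrase",
    "object of comparative particle",
]

# Approximate rightmost positions, simplified for illustration. "That" can't
# relativize objects of fronted prepositions or anything further down;
# "which" and "who" reach possessors (via "whose"/"of which"), with
# comparatives being marginal at best.
RIGHTMOST = {
    "that": "object of stranded preposition",
    "which": "possessor noun phrase",
    "who": "possessor noun phrase",
}

def can_relativize(relativizer, position):
    """True if the relativizer's span extends at least as far as position."""
    return HIERARCHY.index(position) <= HIERARCHY.index(RIGHTMOST[relativizer])

print(can_relativize("that", "object of stranded preposition"))  # True: "the man that I spoke to"
print(can_relativize("that", "object of fronted preposition"))   # False: *"the man to that I spoke"
print(can_relativize("which", "object of fronted preposition"))  # True: "the chair on which I sat"
```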

Strangely, though, there are things that that can do that who and which can’t. That can sometimes function as a sort of relative adverb, equivalent to the relative adverbs why, where, or when or to which with a preposition. That is, you can say The day that we met, The day when we met, or The day on which we met, but not The day which we met. And which can relativize whole clauses (though some sticklers consider this ungrammatical), while that cannot, as in This author uses restrictive “which,” which bothers me a lot.

So what explains the differences between that and which or who? Well, as I mentioned above, some linguists consider that not a pronoun but a complementizer or conjunction (perhaps a highly pronominal one), making it more akin to the complementizer that, as in He said that relativizers were confusing. And some linguists have also proposed different syntactic structures for restrictive and nonrestrictive clauses, which could account for the limitation of that to restrictive clauses. If that is not a true pronoun but a complementizer, then that could account for its strange distribution. It can’t appear in nonrestrictive clauses, because they require a full pronoun like which or who, and it can’t appear after prepositions, because those constructions similarly require a pronoun. But it can function as a relative adverb, which a regular relative pronoun can’t do.

As I argued in my previous post, it seems that which and that do not occupy separate parts of a single paradigm but are part of two different paradigms that overlap. The differences between them can be characterized in a few different ways, but for some reason, grammarians have seized on the restrictive/nonrestrictive distinction and have written off the rest as idiosyncratic exceptions to the rule or as common errors (when they’ve addressed those points at all).

The proposal to disallow which in restrictive relative clauses, except in the cases where that is ungrammatical—sometimes called Fowler’s rule, though that’s not entirely accurate—is based on the rather trivial observation that all thats are restrictive and that all nonrestrictives are which. It then assumes that the converse is true (or should be) and tries to force all restrictives to be that and all whiches to be nonrestrictive (except for all those pesky exceptions, of course).

Garner calls Fowler’s rule “nothing short of brilliant,”[1] but I must disagree. It’s based on a rather facile analysis followed by some terrible logical leaps. And insisting on following a rule based on bad linguistic analysis is not only not helpful to the reader, it’s a waste of editors’ time. As my last post shows, editors have obviously worked very hard to put the rule into practice, but this is not evidence of its utility, let alone its brilliance. But a linguistic analysis that could account for all of the various differences between the two systems of relativization in English? Now that just might be brilliant.

Sources

Herbert F. W. Stahlke, “Which That,” Language 52, no. 3 (Sept. 1976): 584–610
Johan Van Der Auwera, “Relative That: A Centennial Dispute,” Journal of Linguistics 21, no. 1 (March 1985): 149–79
Gregory R. Guy and Robert Bayley, “On the Choice of Relative Pronouns in English,” American Speech 70, no. 2 (Summer 1995): 148–62
Nigel Fabb, “The Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 26, no. 1 (March 1990): 57–77
Robert D. Borsley, “More on the Difference between English Restrictive and Nonrestrictive Relative Clauses,” Journal of Linguistics 28, no. 1 (March 1992): 139–48

[1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”


Which Hunting

I meant to blog about this several weeks ago, when the topic came up in the corpus linguistics class I took from Mark Davies, but I didn’t have time then. And I know the that/which distinction has been done to death, but I thought this was an interesting look at the issue that I hadn’t seen before.

For one of our projects in the corpus class, we were instructed to choose a prescriptive rule and then examine it using corpus data, determining whether the rule was followed in actual usage and whether it varied over time, among genres, or between the American and British dialects. One of my classmates (and former coworkers) chose the that/which rule for her project, and I found the results enlightening.

She searched for the sequences “[noun] that [verb]” and “[noun] which [verb],” which aren’t perfect—they obviously won’t find every relative clause, and they’ll pull in a few non-relatives—but the results serve as a rough measurement of their relative frequencies. What she found is that before about the 1920s, the two were used with nearly equal frequency. That is, the distinction did not exist. After that, though, which takes a dive and that surges. The following chart shows the trends according to Mark Davies’ Corpus of Historical American English and his Google Books N-grams interface.

It’s interesting that although the two corpora show the same trend, Google Books lags a few decades behind. I think this is a result of the different style guides used in different genres. Perhaps style guides in certain genres picked up the rule first, from whence it disseminated to other style guides. And when we break out the genres in COHA, we see that newspapers and magazines lead the plunge, with fiction and nonfiction books following a few decades later, though use of which is apparently in a general decline the entire time. (NB: The data from the first decade or two in COHA often seems wonky; I think the word counts are low enough in those years that strange things can skew the numbers.)

[Chart: Proportion of “which” by genre in COHA]
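For anyone curious what a query like that looks like mechanically, here’s a rough sketch using NLTK’s off-the-shelf tokenizer and part-of-speech tagger. It’s only a stand-in (her project used Davies’ COHA and Google Books interfaces, and the toy text below is mine), but it shows the shape of the “[noun] that/which [verb]” search:

```python
# A rough sketch of the "[noun] that/which [verb]" search using NLTK's
# POS tagger. This stands in for the corpus interfaces actually used;
# the sample text is invented.
from collections import Counter

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = (
    "The committee rejected the proposal that was submitted last week. "
    "They preferred a plan which was cheaper to implement."
)

tagged = nltk.pos_tag(nltk.word_tokenize(text))

counts = Counter()
for (w1, t1), (w2, _), (w3, t3) in zip(tagged, tagged[1:], tagged[2:]):
    # a noun, then "that" or "which", then a verb
    if t1.startswith("NN") and w2.lower() in ("that", "which") and t3.startswith("VB"):
        counts[w2.lower()] += 1

print(counts)  # rough relative frequencies of the two patterns
```

Run over a corpus sliced by decade rather than a toy string, counts like these are what produce the trend lines in the chart above.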

The strange thing about this rule is that so many people not only take it so seriously but slander those who disagree, as I mentioned in this post. Bryan Garner, for instance, solemnly declares—without any evidence at all—that those who don’t follow the rule “probably don’t write very well,” while those who follow it “just might.”[1] (This elicited an enormous eye roll from me.) But Garner later tacitly acknowledges that the rule is an invention—not by the Fowler brothers, as some claim, but by earlier grammarians. If the rule did not exist two hundred years ago and was not consistently enforced until the 1920s or later, how did anyone before that time ever manage to write well?

I do say enforced, because most writers do not consistently follow it. In my research for my thesis, I’ve found that changing “which” to “that” is the single most frequent usage change that copy editors make. If so many writers either don’t know the rule or can’t apply it consistently, it stands to reason that most readers don’t know it either and thus won’t notice the difference. Some editors and grammarians might take this as a challenge to better educate the populace on the alleged usefulness of the rule, but I take it as evidence that it’s just not useful. And anyway, as Stan Carey already noted, it’s the commas that do the real work here, not the relative pronouns. (If you’ve already read his post, you might want to go and check it out again. He’s added some updates and new links to the end.)

And as I noted in my previous post on relatives, we don’t observe a restrictive/nonrestrictive distinction with who(m) or, for that matter, with relative adverbs like where or when, so at the least we can say it’s not a very robust distinction in the language and certainly not necessary for comprehension. As with so many other useful distinctions, its usefulness is taken to be self-evident, but the evidence of its usefulness is less than compelling. It seems more likely that it’s one of those random things that sometimes gets grammaticalized, like gender or evidentiality. (Though it’s not fully grammaticalized, because it’s not obligatory and is not a part of the natural grammar of the language, but is a rule that has to be learned later.)

Even if we just look at that and which, we find a lot of exceptions to the rule. You can’t use that as the object of a preposition, even when it’s restrictive. You can’t use it after a demonstrative that, as in “Is there a clear distinction between that which comes naturally and that which is forced, even when what’s forced looks like the real thing?” (I saw this example in COCA and couldn’t resist.) And Garner even notes “the exceptional which”, which is often used restrictively when the relative clause is somewhat removed from its noun.[2] And furthermore, restrictive which is frequently used in conjoined relative clauses, such as “Eisner still has a huge chunk of stock options—about 8.7 million shares’ worth—that he can’t exercise yet and which still presumably increase in value over the next decade,” to borrow an example from Garner.[3]

Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in its formulation. I’ll explain what I think is going on with that and which in a later post.

[1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”
[2] S.v. “Remote Relatives. B. The Exceptional which.”
[3] S.v. “which. D. And which; but which.”


The Value of Prescriptivism

Last week I asked rather skeptically whether prescriptivism had moral worth. John McIntyre was interested by my question and musing in the last paragraph, and he took up the question (quite admirably, as always) and responded with his own thoughts on prescriptivism. What I see in his post is neither a coherent principle nor an innately moral argument, as Hart argued, but rather a set of sometimes-contradictory principles mixed with personal taste—and I think that’s okay.

Even Hart’s coherent principle is far from coherent when you break it down. The “clarity, precision, subtlety, nuance, and poetic richness” that he touts are really a bundle of conflicting goals. Clear wording may come at the expense of precision, subtlety, and nuance. Subtlety may not be very clear or precise. And so on. And even if these are all worthy goals, there may be many more that are missing.

McIntyre notes several more goals for practical prescriptivists like editors, including effectiveness, respect for an author’s voice, consistency with a set house style, and consideration of reader reactions, which is a quagmire in its own right. As McIntyre notes, some readers may have fits when they see sentence-disjunct “hopefully”, while other readers may find workarounds like “it is to be hoped that” to be stilted.

Of course, any appeal to the preferences of the reader (which is, in a way, more of a construct than a real entity) still requires decision making: which readers are you appealing to? Many of those who give usage advice seem to defer to the sticklers and pedants, even when it can be shown that they’re pretty clearly wrong or at least holding to outdated and somewhat silly notions. Grammar Girl, for example, guides readers through the arguments for and against “hopefully”, repeatedly saying that she hopes it becomes acceptable someday (note how carefully she avoids using “hopefully” herself, even though she claims to support it) but ultimately shies away from the usage, saying that you should avoid it for now because it’s not acceptable yet. (I’ll write about the strange reasoning presented here some other time.)

But whether or not you give in to the pedants and cranks who write angry letters to lecture you on split infinitives and stranded prepositions, it’s still clear that there’s value in considering the reader’s wishes while writing and editing. The author wants to communicate something to an audience; the audience presumably wants to receive that communication. It’s in both parties’ best interests if that communication goes off without a hitch, which is where prescriptivism can come in.

As McIntyre already said, this doesn’t give you an instant answer to every question, but it can give you some methods of gauging roughly how acceptable certain words or constructions are. Ben Yagoda provides his own “somewhat arbitrary metric” for deciding when to fight for a traditional meaning and when to let it go. But the key word here is “arbitrary”; there is no absolute truth in usage, no clear, authoritative source to which you can appeal to solve these questions.

Nevertheless, I believe the prescriptive motivation—the desire to make our language as good as it can be—is, at its core, a healthy one. It leads us to strive for clear and effective communication. It leads us to seek out good language to use as a model. And it slows language change and helps to ensure that writing will be more understandable to audiences that are removed spatially and temporally. But when you try to turn this into a coherent principle to instruct writers on individual points of usage, like transpire or aggravate or enormity, well, then you start running into trouble, because that approach favors fiat over reason and evidence. But I think that an interest in clear and effective language, tempered with a healthy dose of facts and an acknowledgement that the real truth is often messy, can be a boon to all involved.


Numbers and Hyphens

Recently I got a letter from my phone company informing me that my area code will be switching to 10-digit dialing sometime next year. Several times the letter mentioned that we will have to start dialing “10-digits.” It was very consistent—every time the numeral 10 was followed by the noun “digits,” there was a hyphen between them.

Now, I’ve tried to mellow over the last few years and take a more descriptivist stance on a lot of things, but I’m still pretty prescriptivist when it comes to spelling and style. Hyphens have a few different purposes, one of which is to join compound modifiers, and that purpose was not being served here.

Unfortunately, this is one of those things that most people aren’t really taught in school anymore, and even a lot of editors struggle with hyphens. It seems that some people see hyphens between numerals and whatever words follow them and generalize this to mean that there should always be hyphens after numerals.

But this isn’t the case, because as I said before, hyphens serve a purpose. The stress patterns and intonation of “10 digit(s)” are different in “You have to dial 10 digits” and “You have to dial 10-digit numbers,” because one is a compound and the other is not. The hyphen helps indicate this in writing, and if there’s a hyphen when there doesn’t need to be one, the reader may be primed to expect another word, thinking that “10-digits” is a compound that modifies something, only to find that that’s the end of the phrase.

Of course, one may argue that in compounds like this, the noun is always singular (“10-digit dialing,” not “10-digits dialing”), thus preventing any ambiguity or misreading. While technically true, some readers—like me—may still experience a slight mental hiccup when they realize that it’s not a compound but simply a numeral modifying a noun.

The solution is to learn when hyphens are actually needed. Of course, not all style guides agree on all points, but any decent style guide will at least cover the basics. And if all else fails, trust your ear—if you’re saying it like a compound, use a hyphen. If you’re saying it like two separate words, don’t use one. And if you’re writing or editing anything for publication, you really should know this already.
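If you wanted a mechanical first pass at catching errors like the one in that letter, the singular-noun observation above suggests a crude heuristic: a numeral hyphenated directly to a plural noun is suspect. Here’s a sketch (the sample sentences are my own, and a real check would need human review of every flag):

```python
# A crude check for suspicious numeral-hyphen compounds, based on the
# observation that the noun in a true compound modifier is singular
# ("10-digit dialing"). A numeral hyphenated directly to a word ending
# in s ("10-digits") is worth a second look. Expect false positives
# (words that merely end in s), so this is a prompt for an editor,
# not an automatic fix.
import re

SUSPECT = re.compile(r"\b(\d+)-([A-Za-z]+s)\b")

samples = [
    "You will have to start dialing 10-digits.",  # flagged: plural after the hyphen
    "10-digit dialing begins next year.",         # fine: true compound modifier
    "The survey takes 20-minutes to complete.",   # flagged
]

for line in samples:
    for match in SUSPECT.finditer(line):
        print(f"Check hyphen: {match.group(0)!r} in: {line}")
```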


How I Became a Descriptivist

Believe it or not, I wasn’t always the grammar free-love hippie that I am now. I actually used to be known as quite a grammar nazi. This was back in my early days as an editor (during my first year or two of college) when I was learning lots of rules about grammar and usage and style, but before I had gotten into my major classes in English language, which introduced me to a much more descriptivist approach.

It was a gradual progression, starting with my class in modern American usage. Our textbook was Merriam-Webster’s Dictionary of English Usage, which is a fantastic resource for anyone interested in editing or the English language in general. The class opened my eyes to the complexities of usage issues and made me realize that few issues are as black-and-white as most prescriptivists would have you believe. And this was in a class in the editing minor of all places.

My classes in the English language major did even more to change my opinions about prescriptivism and descriptivism. Classes in Old English and the history of the English language showed me that although the language has changed dramatically over the centuries, it has never fallen into a state of chaos and decay. There has been clear, beautiful, compelling writing in every stage of the language (well, as long as there have been literate Anglo-Saxons, anyway).

But I think the final straw was annoyance with a lot of my fellow editors. Almost none of them seemed interested in doing anything other than following the strictures laid out in style guides and usage manuals (Merriam-Webster’s Dictionary of English Usage was somehow never consulted). And far too often, the changes they made did nothing to improve the clarity, readability, or accuracy of the text. Without any depth of knowledge about the issues, they were left without the ability to make informed judgments about what should be changed.

In fact, I would say that you can’t be a truly great editor unless you learn to approach things from a descriptivist perspective. And in the end, you’re still deciding how the text should be instead of simply talking about how it is, so you haven’t fully left prescriptivism behind. But it will be an informed prescriptivism, based on facts about current and historical usage, with a healthy dose of skepticism towards the rhetoric coming from the more fundamentalist prescriptivists.

And best of all, you’ll find that the sky won’t fall and the language won’t rapidly devolve into caveman grunts just because you stopped correcting all the instances of figurative over to more than. Everybody wins.


Source Checking

In my current job making day planners, I get to read a lot of quotes. I don’t know who decided that day planners needed cheesy motivational and inspirational quotes in the first place, but that’s just the way it’s done.

One of my tasks is to compile databases of quotes and to make sure everything is accurate. The first part is easy. We’ve got a couple dozen books of quotations in the office, and if for some reason we want a little variety, there are countless sites on the internet that compile all kinds of motivational quotes.

Unfortunately, virtually all of our sources are unreliable. All but a few websites are completely untrustworthy; there are no standards, no editing, and no source citations. Most people seem to think that a vague description of who the person is (“actor,” “business executive,” and so forth) should suffice.

But surely edited and published books would be reliable, right? Not usually. Only one or two of the books in our office have real source citations so that we could track down the original if we wanted. Most just name an author, and sometimes they even screw that up—I’ve seen a quote by Will Durant attributed to Aristotle (it was in a book in which he discussed certain of Aristotle’s ideas) and another quote attributed to Marlene vos Savant. (For those of you who don’t know, it should be Marilyn vos Savant.) I can’t even figure out how an editorial error like that happens. Then there’s a quote from Jonathan Westover that pops up from time to time.

You begin to realize pretty quickly just how low the standards are for this genre of publishing. Most people don’t care about the accuracy of their inspiration—it’s the warm fuzzy feeling that matters. So things like research and thorough copy editing go out the window. It’s probably largely a waste of my time too. I doubt any of our customers would’ve spotted the errors above, but I feel like a fraud if I don’t try to catch as many of them as possible.

I’m beginning to realize that there are probably dozens of apocryphal, misattributed, or otherwise problematic quotes that I’m missing, though, simply because I don’t have the resources to track everything down. Googling for quotes seldom turns up anything of real use. And anyway, I wouldn’t be surprised if most of our books are sourced entirely from the internet or from other unsourced collections of quotations. It might be an interesting study in stemmatics if it weren’t such an inane subject. Though sometimes I wonder if there are real origins for these incorrect quotes or if it’s just bad sources all the way down.
