Arrant Pedantry


My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase my sample size and improve its quality. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones’>Jones’s), which>that, towards>toward, moving only, and increasing parallelism. These changes were not only numerically the most popular, but they were edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they’re relatively recent inventions of usage writers. I’ve already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.
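To make the arithmetic behind those rates concrete, here is a minimal sketch of the kind of tally involved. It is my own toy example, not code from the thesis; only the towards figures echo the example above, and the which/that counts are invented purely for illustration.

```python
# A toy illustration of the edit-rate arithmetic described above.
tallies = {
    # rule: (times the targeted form appeared, times an editor changed it)
    "towards -> toward": (10, 8),              # the example from the text
    "which -> that (restrictive)": (25, 20),   # hypothetical counts
}

for rule, (opportunities, changes) in tallies.items():
    rate = changes / opportunities
    print(f"{rule}: changed {changes} of {opportunities} times ({rate:.0%})")
```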

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which are nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see linguists tearing down the rules, such as that/which and toward/towards, that define it and set it apart from educated but unedited usage. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.


Names, Spelling, and Style

A couple of weeks ago, I had a conversation with Mededitor on Twitter about name spelling and style. It started with a tweet from Grammar Girl linking to an old post of hers on whether you need a comma before “Jr.” She notes that most style guides now leave out the commas. Mededitor opined that the owners of the names, not editors, should get to decide whether or not to use commas. In this follow-up post, Grammar Girl seems to come to the same conclusion:

However, Chicago also states that writers should make a reasonable effort to spell a name the way a person spells it himself or herself, and I presume that also applies to punctuation. In other words, you’re free to insist on the comma before “Jr.” in your own name.

I can see the appeal in this argument, but I have to disagree. As I argued on Twitter and in a comment on that second post, catering to authors’ preferences for commas around “Jr.” creates inconsistency in the text. And it wouldn’t just be authors themselves that we’d have to cater to; what about people mentioned or cited in the text? Should editors spend time tracking down every Jr. or III whose name appears in writing to ask whether they prefer to have their suffixes set off with commas?

Doing so could take enormous amounts of time, and in the end there’s no benefit to the reader (and possibly a detriment in the form of distracting inconsistency), only to some authors’ egos. Further, we’d have to create a style anyway and apply it to all those who had no preference or whose preferences could not be identified. Why pick an arbitrary style for some names and not others? Either the preference matters or it doesn’t. And if it doesn’t matter, that’s what a style choice is for: to save us from wasting our time making countless minor decisions.

But I have a further reason for not wishing to defer to authors’ preferences. As I argued in that same comment, punctuation is not the same thing as spelling. There’s one right way to spell my name: Jonathon Owen. If you write my name Jonathan Owens, you’ve spelled it wrong. There’s no principled reason for spelling it one way or another; that’s just the way it is. But punctuation marks aren’t really part of someone’s name; they’re merely stylistic elements between or around the parts of people’s names to separate them, abbreviate them, or join them.

Punctuation around or in names, however, is often principled, though the principles of punctuation are prone to change over time. “Jr.” was traditionally set off by commas not because the commas were officially part of anyone’s name, but because it was considered parenthetic. As punctuation has become more streamlined, the requirement to set off this particular parenthetic with commas has been dropped by most style guides. And to be blunt, I think the desire of some authors to hang on to the commas is driven mostly by a desire to stick with whatever style they grew up with. It’s not much different from some people’s resistance to switching to one space between sentences.

In the course of the conversation with Mededitor, another point came up: periods after middle initials that don’t stand for anything. Some people insist that you shouldn’t use a period in those cases, because the period signals that the letter is an abbreviation, but The Chicago Manual of Style recommends using a period in all cases regardless. Again, it’s difficult for editors and proofreaders to check and enforce proper punctuation after an initial, and the result is a style that looks inconsistent to the readers. And again, individuals’ preferences are not always clear. Even one of the most famous individuals with only a middle initial, Harry S. Truman, wrote his name inconsistently, as the Harry S. Truman Library points out.

Yes, it’s true that editors can add a list of names to their style sheets to save some time, but checking every single name with an initial against a style sheet—and then looking them up if they’re not on the sheet—still takes time. And what’s the result? Names that occasionally look like they’re simply missing a period after the initial, because the reader will generally have no idea that there’s a reason behind the omission. The result is an error in most readers’ eyes, except for those few in the know.

The fundamental problem with making exceptions to general rules is that readers often have no idea that there are principled reasons behind the exceptions. If they see an apparent inconsistency and can’t quickly figure out a reason for it, then they’ve been needlessly distracted. Does the supposed good done by catering to some individuals’ preference for commas or periods around their names outweigh the harm done by presenting readers with the appearance of sloppiness?

I don’t think it does, and this is why I agree with Chicago. I think it’s best—both for editors and for readers—to pick a rule and stick with it.

Update: Mededitor posted a response here, and I want to respond and clarify some points I made here. In that post he says, “I argue for the traditional rule, namely: ‘Make a reasonable attempt to accommodate the conventions by which people spell their own names.’” I want to make it clear that I’m also arguing for the traditional rule. I’m not saying that editors should not worry about the spelling of names. I simply disagree that commas and periods should be considered spelling.

With the exception of apostrophes and hyphens, punctuation is a matter of style, not spelling. The comma in Salt Lake City, Utah is not part of the spelling of the place name; it simply separates the two elements of the name, just as the now-deprecated comma before “Jr.” separates it from the given and family names. Note that the commas disappear if you use one element by itself, and other commas can appear in other contexts, such as when a name is inverted: “Jonathon Owen” becomes “Owen, Jonathon” in an index. This comma is also not part of the spelling of my name; it’s just a piece of punctuation. It’s a style choice.

And those style choices vary and change over time. In the UK, it’s standard practice to omit periods from abbreviations. Thus I’d be Jonathon R Owen in British style. The period in American style is not an element of my middle name that appears when it’s shortened—it’s a style choice that communicates something about my name. But the important thing is that it’s a choice. You can’t choose how to spell my name (though plenty of people have told me that I spell it wrong). But you can choose how to punctuate it to fit a given style.


Take My Commas—Please

Most editors are probably familiar with the rule that commas should be used to set off nonrestrictive appositives and that no commas should be used around restrictive appositives. (In Chicago 16, it’s under 6.23.) A restrictive appositive specifies which of a group of possible referents you’re talking about, and it’s thus integral to the sentence. A nonrestrictive appositive simply provides extra information about the thing you’re talking about. Thus you would write My wife, Ruth, (because I only have one wife) but My cousin Steve (because I have multiple cousins, and one is named Steve). The first tells you that my wife’s name is Ruth, and the second tells you which of my cousins I’m talking about.

Most editors are probably also familiar with the claim that if you leave out the commas after a phrase like “my wife”, the implication is that you’re a polygamist. In one of my editing classes, we would take a few minutes at the start of each class to share bloopers with the rest of the class. One time my professor shared the dedication of a book, which read something like “To my wife Cindy”. Obviously the lack of a comma implies that he must be a polygamist! Isn’t that funny? Everyone had a good laugh.

Except me, that is. I was vaguely annoyed by this alleged blooper, which required a willful misreading of the dedication. There was no real ambiguity here—only an imagined one. If the author had actually meant to imply that he was a polygamist, he would have written something like “To my third wife, Cindy”, though of course he could still write this if he were a serial monogamist.

Usually I find this insistence on commas a little exasperating, but in one instance the other day, the commas were actually wrong. A proofreader had corrected a caption which read “his wife Arete” to “his wife, Arete,” which probably seemed like a safe change to make but which was wrong in this instance—the man referred to in the caption had three wives concurrently. I stetted the change, but it got me thinking about fact-checking and the extent to which it’s an editor’s job to split hairs.

This issue came up repeatedly during a project I worked on last year. It was a large book with a great deal of biographical information in it, and I frequently came across phrases like “Hans’s daughter Ingrid”. Did Hans have more than one daughter, or was she his only daughter? Should it be “Hans’s daughter, Ingrid,” or “Hans’s daughter Ingrid”? And how was I to know?

Pretty quickly I realized just how ridiculous the whole endeavor was. I had neither the time nor the resources to look up World War II–era German citizens in a genealogical database, and I wasn’t about to bombard the author with dozens of requests for him to track down the information either. Ultimately, it was all pretty irrelevant. It simply made no difference to the reader. I decided we were safe just leaving the commas out of such constructions.

And, honestly, I think it’s even safer to leave the commas out when referring to one’s spouse. Polygamy is such a rarity in our culture that it’s usually highlighted in the text, with wording such as “John and Janet, one of his three wives”. Assuming that “my wife Ruth” implies that I have more than one wife is a deliberate flouting of the cooperative principle of communication. This insistence on a narrow, prescribed meaning over the obvious, intended meaning is a problem with many prescriptive rules, but, once again, that’s a topic for another day.

Please note, however, that I’m not saying that anything goes or that you can punctuate however you want as long as the meaning’s clear. I’m simply saying that in cases where it’s a safe assumption that there’s just one possible referent, or where it doesn’t really matter, the commas can sometimes seem a little fussy and superfluous.


Grammar and Morality

Lately there’s been an article going around titled “The Real George Zimmerman’s Really Bad Grammar”, by Alexander Nazaryan. I’m a week late getting around to blogging about it, but at the risk of wading into a controversial topic with a possibly tasteless post, I wanted to take a closer look at some of the arguments and analyses made in the article.

The first thing that struck me about the article is the explicit moralization of grammar. At the end of the first paragraph, the author, a former English teacher, says that when he forced students to write notes of apology, he explained to them that “good grammar equaled a clean conscience.” (This guy must’ve been a joy to have as a teacher.)

But then the equivocation begins. Although Nazaryan admits that Zimmerman “has bigger concerns than the independent clause”, he nevertheless insists that some of Zimmerman’s errors “are both glaring and inexcusable”. Evidently, quitting one’s job and going into hiding for one’s own safety is no excuse for any degree of grammatical laxness.

Nazaryan’s grammatical analysis leaves something to be desired, too. He takes a quote from Zimmerman’s website—“The only thing necessary for the triumph of evil, is that good men do nothing”—and says, “Why does Zimmerman insert an absolutely needless comma between subject (granted, a complex one) and verb? I can’t speculate on that, but he seems to have treated ‘is that good men do nothing’ as a nonrestrictive clause that adds extra information to the sentence.” This sort of comma, inserted between a complex subject and its verb, used to be completely standard, but it fell out of use in edited writing in the last century or two. It’s still frequently found in unedited writing, however.

I’m not expecting Nazaryan to know the history of English punctuation conventions, but he should at least recognize that this is a thing that a lot of people do, and it’s not for the reason that he suspects. After all, in what sense could the entire predicate of a sentence be a “nonrestrictive clause that adds extra information”? He’s actually got it backwards, in a sense: it’s the complement clause of the subject—“necessary for the triumph of evil”—that’s being set off, albeit with a single, unpaired comma. (And I can’t resist poking fun at the fact that he says “I can’t speculate on that” and then immediately proceeds to speculate on it.)

Nazaryan does make some valid points—that Zimmerman may be overreaching in his prose at times, using words and constructions he hasn’t really mastered—but the whole exercise makes me uncomfortable. (Yes, I have mixed feelings about writing this post myself.) Picking grammatical nits when one man has been killed and another charged with second-degree murder is distasteful enough; equating good grammar with morality makes me squirm.

This is not to say that there is no value in editing, of course. This recent study found that editing contributes to readers’ perception of the value and professionalism of a story. I did a small study of my own for a class a few years ago and found the same thing. A good edit improves the professional appearance of a story, which may make readers more likely to trust or believe it. However, this does not mean that readers will necessarily see an unedited story as a mark of guilt.

Nazaryan makes his thesis most explicit near the end, when he says, “The more I think about this, the more puzzling it becomes. Zimmerman is accused of being a careless vigilante who played fast and loose with the law; why would he want to give credence to that argument by playing fast and loose with the most basic laws of grammar?” I’m sorry, but who in their right minds—who other than Alexander Nazaryan, that is—believes that petty grammatical violations can be taken as a sign of lawless vigilantism?

But wait—there’s still an out. According to Nazaryan, all Zimmerman needs is a good copyeditor. Of course, the man has quit his job and is begging for donations to pay for his legal defense and living expenses, but I guess that’s irrelevant. Obviously he should’ve gotten his priorities straight and paid for a copyeditor first to obtain grammatical—and thereby moral—absolution.

Nazaryan squeezes in one last point at the end, and it’s maybe even more ridiculous than his identification of clean grammar with a clean conscience: “One of the aims of democracy is that citizens are able to articulate their rights in regard to other citizens and the state itself; when one is unable to do so, there is a sense of collective failure—at least for this former teacher.” You see, bad grammar doesn’t just indicate an unclean conscience; it threatens the very foundations of democracy.

I’m feeling a sense of failure too, but for entirely different reasons than Alexander Nazaryan.


Which Hunting

I meant to blog about this several weeks ago, when the topic came up in my corpus linguistics class from Mark Davies, but I didn’t have time then. And I know the that/which distinction has been done to death, but I thought this was an interesting look at the issue that I hadn’t seen before.

For one of our projects in the corpus class, we were instructed to choose a prescriptive rule and then examine it using corpus data, determining whether the rule was followed in actual usage and whether it varied over time, among genres, or between the American and British dialects. One of my classmates (and former coworkers) chose the that/which rule for her project, and I found the results enlightening.

She searched for the sequences “[noun] that [verb]” and “[noun] which [verb],” which aren’t perfect—they obviously won’t find every relative clause, and they’ll pull in a few non-relatives—but the results serve as a rough measurement of their relative frequencies. What she found is that before about the 1920s, the two were used with nearly equal frequency. That is, the distinction did not exist. After that, though, which takes a dive and that surges. The following chart shows the trends according to Mark Davies’ Corpus of Historical American English and his Google Books N-grams interface.

It’s interesting that although the two corpora show the same trend, Google Books lags a few decades behind. I think this is a result of the different style guides used in different genres. Perhaps style guides in certain genres picked up the rule first, from whence it disseminated to other style guides. And when we break out the genres in COHA, we see that newspapers and magazines lead the plunge, with fiction and nonfiction books following a few decades later, though use of which is apparently in a general decline the entire time. (NB: The data from the first decade or two in COHA often seems wonky; I think the word counts are low enough in those years that strange things can skew the numbers.)

[Chart: proportion of "which" by genre in COHA]
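Her searches were run in Mark Davies’s corpus interfaces rather than in code, but for anyone curious about the mechanics, here is a rough approximation of that kind of pattern counting over plain text. It is my own sketch, it skips part-of-speech tagging entirely (so it is even cruder than the “[noun] that [verb]” queries described above), and the sample sentences are invented.

```python
import re
from collections import Counter

def relative_counts(text):
    """Count '<word> that <word>' vs. '<word> which <word>' sequences as a
    very rough proxy for relative-clause frequency (no POS tagging)."""
    pattern = re.compile(r"\b\w+\s+(that|which)\s+\w+", re.IGNORECASE)
    return Counter(m.group(1).lower() for m in pattern.finditer(text))

sample = ("The rule which governs commas is old. "
          "A rule that survives is useful.")
print(relative_counts(sample))  # Counter({'which': 1, 'that': 1})
```

Run over decade-sized slices of a corpus, counts like these are what produce the proportions in the charts.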

The strange thing about this rule is that so many people not only take it so seriously but slander those who disagree, as I mentioned in this post. Bryan Garner, for instance, solemnly declares—without any evidence at all—that those who don’t follow the rule “probably don’t write very well,” while those who follow it “just might.”[1] (This elicited an enormous eye roll from me.) But Garner later tacitly acknowledges that the rule is an invention—not by the Fowler brothers, as some claim, but by earlier grammarians. If the rule did not exist two hundred years ago and was not consistently enforced until the 1920s or later, how did anyone before that time ever manage to write well?

I do say enforced, because most writers do not consistently follow it. In my research for my thesis, I’ve found that changing “which” to “that” is the single most frequent usage change that copy editors make. If so many writers either don’t know the rule or can’t apply it consistently, it stands to reason that most readers don’t know it either and thus won’t notice the difference. Some editors and grammarians might take this as a challenge to better educate the populace on the alleged usefulness of the rule, but I take it as evidence that it’s just not useful. And anyway, as Stan Carey already noted, it’s the commas that do the real work here, not the relative pronouns. (If you’ve already read his post, you might want to go and check it out again. He’s added some updates and new links to the end.)

And as I noted in my previous post on relatives, we don’t observe a restrictive/nonrestrictive distinction with who(m) or, for that matter, with relative adverbs like where or when, so at the least we can say it’s not a very robust distinction in the language and certainly not necessary for comprehension. As with so many other useful distinctions, its usefulness is taken to be self-evident, but the evidence of its usefulness is less than compelling. It seems more likely that it’s one of those random things that sometimes gets grammaticalized, like gender or evidentiality. (Though it’s not fully grammaticalized, because it’s not obligatory and is not a part of the natural grammar of the language, but is a rule that has to be learned later.)

Even if we just look at that and which, we find a lot of exceptions to the rule. You can’t use that as the object of a preposition, even when it’s restrictive. You can’t use it after a demonstrative that, as in “Is there a clear distinction between that which comes naturally and that which is forced, even when what’s forced looks like the real thing?” (I saw this example in COCA and couldn’t resist.) And Garner even notes “the exceptional which”, which is often used restrictively when the relative clause is somewhat removed from its noun.[2] And furthermore, restrictive which is frequently used in conjoined relative clauses, such as “Eisner still has a huge chunk of stock options—about 8.7 million shares’ worth—that he can’t exercise yet and which still presumably increase in value over the next decade,” to borrow an example from Garner.[3]

Something that linguistics has taught me is that when your rule is riddled with exceptions and wrinkles, it’s usually a sign that you’ve missed something important in its formulation. I’ll explain what I think is going on with that and which in a later post.

[1] Garner’s Modern American Usage, 3rd ed., s.v. “that. A. And which.”
[2] S.v. “Remote Relatives. B. The Exceptional which.”
[3] S.v. “which. D. And which; but which.”


Not Surprising, This Sounds Awkward

The other day at work I came across a strange construction: an author had used “not surprising” as a sentence adverb, as in “Not surprising, the data show that. . . .” I assumed it was simply an error, so I changed it to “not surprisingly” and went on. But then I saw the same construction again. And again. And then I saw a similar construction (“Quite possible, yada yada yada”) within a quotation within the article, at which point I really started to feel weirded out.

I checked the source of the quote, and it turned out that it was actually a grammatically normal “Quite possibly” that the author of the article I was editing had accidentally changed (or intentionally fixed?). My suspicion was that the author was extending the pseudo-rule against the sentence adverb more importantly and was thus avoiding sentence adverbs more generally.

This particular article is for inclusion in a sociology book, so I thought that perhaps there was a broader rule against sentence adverbs in the APA style guide. I didn’t find any such rule there, but I did find something interesting when I searched for the string “. Not surprising,” in the Corpus of Contemporary American English: it turned up sixteen relevant hits. All the hits appeared to occur in social science or journalistic works, ranging from the New York Times to the PBS NewsHour to the journal Military History. A similar search for the string “. Not surprisingly,” returned over 1,200 hits. (I did not bother to sort through these to determine their relevancy.)
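The actual searches were done in COCA itself, but the same kind of string counting is easy to sketch over plain text. This is just my own illustration, with invented sample sentences:

```python
import re

def sentence_opener_counts(text):
    """Count sentence-initial 'Not surprising,' vs. 'Not surprisingly,',
    roughly mirroring the ". Not surprising," string searches described above."""
    boundary = r"(?:^|[.!?]\s+)"
    return {
        "Not surprising,": len(re.findall(boundary + r"Not surprising,", text)),
        "Not surprisingly,": len(re.findall(boundary + r"Not surprisingly,", text)),
    }

sample = ("Not surprisingly, the results held up. "
          "Not surprising, the data show the same pattern.")
print(sentence_opener_counts(sample))
# {'Not surprising,': 1, 'Not surprisingly,': 1}
```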

I’m not quite sure what’s going on here. As I said above, the only explanation I can come up with is that someone has extended the rule against more importantly or perhaps other sentence adverbs like hopefully that don’t modify anything in the sentence. Not that the sentence adjective version modifies anything either, of course, but that’s a different issue.

If anyone has any alternative explanation for or justification of this construction, I’d be interested to hear it. It still strikes me as a rather awkward bit of English.


Logography

This is a subject I’ve wanted to write about for quite some time, and the recent movie WALL-E has reminded me of it once again. The issue is this: some people seem to think that logos are the ultimate guide to the orthography of some names.

Now, Bill Walsh has already covered this topic on his site, The Slot, but it’s worth covering again. I’ve seen a couple of different websites that pointed out that WALL-E is either spelled with or “promoted with” an interpunct, and I got involved in a forum discussion where people were wondering whether the dot should be rendered as a hyphen or an asterisk (once again, someone explained that it’s an interpunct).

Something about this strikes me as silly. Did I miss the memo when it was announced that graphic designers are the arbiters of proper orthography? And why is it that some people kowtow to certain logos and not others? After all, as Bill Walsh points out, nobody insists that the proper spelling of Macy’s is actually macy*s, so why do we worry about whether it’s WALL-E or WALL*E or WALL·E? (Then again, I see Wal*Mart plenty often. Perhaps there’s some research grant money to be had in studying the sociolinguistics of brand name orthography.)

A while back, I thought this issue mostly cropped up with tech companies (particularly internet companies, like Yahoo and eBay), but then I started seeing the aforementioned Wal*Mart as well as car names like SATURN (remind me again what that stands for) and Mazda6 (now we have to match the italics too? What next, colors and fonts?). I don’t know if this is just an example of the recency illusion, but it does seem like a lot of people nowadays don’t really know how to properly represent brand names.

And anyway, getting back to WALL-E, how do we even know that that’s an interpunct? The Wikipedia article doesn’t cite a source for this fact, and it’s not easy to tell from the logo whether it’s an interpunct, a bullet point, or just a dot. When a novelty font uses a decorative punctuation mark, it might be impossible to say what character that mark is supposed to correspond to. It might not correspond to anything at all, as with the stars in Macy’s and Wal-Mart. As Walsh notes, the five-sided star used in those logos is not the same thing as an asterisk.
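For what it’s worth, the candidate characters really are distinct code points, whatever a novelty font makes them look like. Here is a small illustration of my own (nothing to do with Pixar or Macy’s) using Python’s unicodedata module:

```python
import unicodedata

# Official Unicode names for the characters people reach for when transcribing
# these logos; note that a five-pointed star is not an asterisk.
for ch in ["\u00B7", "\u2022", "\u2605", "*", "-"]:
    print(f"U+{ord(ch):04X}", repr(ch), unicodedata.name(ch))

# U+00B7 '·' MIDDLE DOT
# U+2022 '•' BULLET
# U+2605 '★' BLACK STAR
# U+002A '*' ASTERISK
# U+002D '-' HYPHEN-MINUS
```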

I really see no good reason to forsake good judgement and slavishly copy the styling of logos, especially since it’s not always possible to do so. After all, the purpose of a logo is to be eye-catching and recognizable, not to conform to the principles of good spelling, punctuation, and capitalization. I say let logos be logos and text be text. It’s the job of editors to use common sense and good judgement in helping text to conform to reasonable standards. It’s not our job to mindlessly reproduce what we see.


Numbers and Hyphens

Recently I got a letter from my phone company informing me that my area code will be switching to 10-digit dialing sometime next year. Several times the letter mentioned that we will have to start dialing “10-digits.” It was very consistent—every time the numeral 10 was followed by the noun “digits,” there was a hyphen between them.

Now, I’ve tried to mellow over the last few years and take a more descriptivist stance on a lot of things, but I’m still pretty prescriptivist when it comes to spelling and style. Hyphens have a few different purposes, one of which is to join compound modifiers, and that purpose was not being served here.

Unfortunately, this is one of those things that most people aren’t really taught in school anymore, and even a lot of editors struggle with hyphens. It seems that some people see hyphens between numerals and whatever words follow them and generalize this to mean that there should always be hyphens after numerals.

But this isn’t the case, because as I said before, hyphens serve a purpose. The stress patterns and intonation of “10 digit(s)” are different in “You have to dial 10 digits” and “You have to dial 10-digit numbers,” because one is a compound and the other is not. The hyphen helps indicate this in writing, and if there’s a hyphen when there doesn’t need to be one, the reader may be primed to expect another word, thinking that “10-digits” is a compound that modifies something, only to find that that’s the end of the phrase.
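Put another way, a hyphenated numeral compound should normally be followed by the noun it modifies. Here is a small heuristic sketch of my own (not from any style guide) that would flag the phone company’s construction:

```python
import re

# Flag "numeral-hyphen-plural-noun" tokens that end a phrase (followed by
# punctuation or the end of the string) instead of modifying a following
# noun, e.g. "dial 10-digits." A rough heuristic, not a real grammar checker.
suspect = re.compile(r"\b\d+-[A-Za-z]+s\b(?=\s*(?:[.,;:!?]|$))")

text = "You will have to dial 10-digits. Ten-digit dialing begins next year."
for match in suspect.finditer(text):
    print("possible unneeded hyphen:", match.group())
# possible unneeded hyphen: 10-digits
```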

Of course, one may argue that in compounds like this, the noun is always singular (“10-digit dialing,” not “10-digits dialing”), thus preventing any ambiguity or misreading. While this is technically true, some readers—like me—may still experience a slight mental hiccup when they realize that it’s not a compound but simply a numeral modifying a noun.

The solution is to learn when hyphens are actually needed. Of course, not all style guides agree on all points, but any decent style guide will at least cover the basics. And if all else fails, trust your ear—if you’re saying it like a compound, use a hyphen. If you’re saying it like two separate words, don’t use one. And if you’re writing or editing anything for publication, you really should know this already.


How I Became a Descriptivist

Believe it or not, I wasn’t always the grammar free-love hippie that I am now. I actually used to be known as quite a grammar nazi. This was back in my early days as an editor (during my first year or two of college) when I was learning lots of rules about grammar and usage and style, but before I had gotten into my major classes in English language, which introduced me to a much more descriptivist approach.

It was a gradual progression, starting with my class in modern American usage. Our textbook was Merriam-Webster’s Dictionary of English Usage, which is a fantastic resource for anyone interested in editing or the English language in general. The class opened my eyes to the complexities of usage issues and made me realize that few issues are as black-and-white as most prescriptivists would have you believe. And this was in a class in the editing minor of all places.

My classes in the English language major did even more to change my opinions about prescriptivism and descriptivism. Classes in Old English and the history of the English language showed me that although the language has changed dramatically over the centuries, it has never fallen into a state of chaos and decay. There has been clear, beautiful, compelling writing in every stage of the language (well, as long as there have been literate Anglo-Saxons, anyway).

But I think the final straw was annoyance with a lot of my fellow editors. Almost none of them seemed interested in doing anything other than following the strictures laid out in style guides and usage manuals (Merriam-Webster’s Dictionary of English Usage was somehow exempt from reference). And far too often, the changes they made did nothing to improve the clarity, readability, or accuracy of the text. Without any depth of knowledge about the issues, they were left without the ability to make informed judgements about what should be changed.

In fact, I would say that you can’t be a truly great editor unless you learn to approach things from a descriptivist perspective. And in the end, you’re still deciding how the text should be instead of simply talking about how it is, so you haven’t fully left prescriptivism behind. But it will be an informed prescriptivism, based on facts about current and historical usage, with a healthy dose of skepticism towards the rhetoric coming from the more fundamentalist prescriptivists.

And best of all, you’ll find that the sky won’t fall and the language won’t rapidly devolve into caveman grunts just because you stopped correcting all the instances of figurative over to more than. Everybody wins.


Source Checking

In my current job making day planners, I get to read a lot of quotes. I don’t know who decided that day planners needed cheesy motivational and inspirational quotes in the first place, but that’s just the way it’s done.

One of my tasks is to compile databases of quotes and to make sure everything is accurate. The first part is easy. We’ve got a couple dozen books of quotations in the office, and if for some reason we want a little variety, there are countless sites on the internet that compile all kinds of motivational quotes.

Unfortunately, virtually all of our sources are unreliable. All but a few websites are completely untrustworthy; there are no standards, no editing, and no source citations. Most people seem to think that a vague description of who the person is (“actor,” “business executive,” and so forth) should suffice.

But surely edited and published books would be reliable, right? Not usually. Only one or two of the books in our office have real source citations so that we could track down the original if we wanted. Most just name an author, and sometimes they even screw that up—I’ve seen a quote by Will Durant attributed to Aristotle (it was in a book in which he discussed certain of Aristotle’s ideas) and another quote attributed to Marlene vos Savant. (For those of you who don’t know, it should be Marilyn vos Savant.) I can’t even figure out how an editorial error like that happens. Then there’s a quote from Jonathan Westover that pops up from time to time.

You begin to realize pretty quickly just how low the standards are for this genre of publishing. Most people don’t care about the accuracy of their inspiration—it’s the warm fuzzy feeling that matters. So things like research and thorough copy editing go out the window. It’s probably largely a waste of my time too. I doubt any of our customers would’ve spotted the errors above, but I feel like a fraud if I don’t try to catch as many of them as possible.

I’m beginning to realize that there are probably dozens of apocryphal, misattributed, or otherwise problematic quotes that I’m missing, though, simply because I don’t have the resources to track everything down. Googling for quotes seldom turns up anything of real use. And anyway, I wouldn’t be surprised if most of our books are sourced entirely from the internet or from other unsourced collections of quotations. It might be an interesting study in stemmatics if it weren’t such an inane subject. Though sometimes I wonder if there are real origins for these incorrect quotes or if it’s just bad sources all the way down.
