Arrant Pedantry


Prescriptivism and Language Change

Recently, John McIntyre posted a video to the Baltimore Sun’s Facebook page in which he defended the unetymological use of decimate. When he shared it to his own Facebook page, a lively discussion ensued, including this comment:

Putting aside all the straw men, the ad absurdums, the ad hominems and the just plain sillies, answer me two questions:
1. Why are we so determined that decimate, having once changed its meaning to a significant portion of the population, must be used to mean obliterate and must never be allowed to change again?
2. Is your defence of the status quo on the word not at odds with your determination that it is a living language?
3. If the word were to have been invented yesterday, do you really think “destroy” is the best meaning for it?
…three questions!

Putting aside all the straw men in these questions themselves, let’s get at what he’s really asking, which is, “If decimate changed once before from ‘reduce by one-tenth’ to ‘reduce drastically’, why can’t it change again to the better, more etymological meaning?”

I’ve seen variations on this question pop up multiple times over the last few years when traditional rules have been challenged or debunked. It seems that the notions that language changes and that such change is normal have become accepted by many people, but some of those people then turn around and ask, “So if language changes, why can’t we change it in the way I want?” For example, some may recognize that the that/which distinction is an invention that’s being forced on the language, but they may believe that this is a good change that increases clarity.

On the surface, this seems like a reasonable question. If language is arbitrary and changeable, why can’t we all just decide to change it in a positive way? After all, this is essentially the rationale behind the movements that advocate bias-free or plain language. But whereas those movements are motivated by social or cognitive science and have measurable benefits, this argument in favor of old prescriptive rules is just a case of motivated reasoning.

The bias-free and plain language movements are based on the premises that people deserve to be treated equally and that language should be accessible to its audience. Arguing that decimate really should mean “reduce by one-tenth” is based on a desire to hang on to rules that one was taught in one’s youth. It’s an entirely post hoc rationale, because it’s only employed to defend bad rules, not to determine the best meaning for or use of every word. For example, if we really thought that narrower etymological senses were always better, shouldn’t we insist that cupboard only be used to refer to a board on which one places cups?

This argument is based in part on a misunderstanding of what the descriptivist/prescriptivist debate is all about. Nobody is insisting that decimate must mean “obliterate”, only observing that it is used in the broader sense far more often than the narrower etymological sense. Likewise, no one is insisting that the word must never be allowed to change again, only noting that it is unlikely that the “destroy one-tenth” sense will ever be the dominant sense. Arguing against a particular prescription is not the same as making the opposite prescription.

But perhaps more importantly, this argument is based on a fundamental misunderstanding of how language change works. As Allan Metcalf said in a recent Lingua Franca post, “It seems a basic principle of language that if an expression is widely used, that must be because it is widely useful. People wouldn’t use a word if they didn’t find it useful.” And as Jan Freeman has said, “we don’t especially need a term that means ‘kill one in 10.’” That is, the “destroy one-tenth” sense is not dominant precisely because it is not useful.

The language changed when people began using the word in a more useful way, or to put it more accurately, people changed the language by using the word in a more useful way. You can try to persuade them to change back by arguing that the narrow meaning is better, but this argument hasn’t gotten much traction in the 250 years since people started complaining about the broader sense. (The broader sense, unsurprisingly, dates back to the mid-1600s, meaning that English speakers were using it for a full two centuries before someone decided to be bothered by it.)

But even if you succeed, all you’ll really accomplish is driving decimate out of use altogether. Just remember that death is also a kind of change.


15% Off Plus Free Shipping

I should have posted this sooner, but better late than never. Spreadshirt, the home of the Arrant Pedantry Store, currently has a promotion for 15% off plus free shipping, and it ends tonight. If you’ve been thinking of getting one of the new We Can Even! shirts for that special person in your life for Christmas, now would be the perfect time.


Just use the code 2016OMG at checkout.


Whence Did They Come?

In a recent episode of Slate’s Lexicon Valley podcast, John McWhorter discussed the history of English personal pronouns. Why don’t we use ye or thee and thou anymore? What’s the deal with using they as a gender-neutral singular pronoun? And where do they and she come from?

The first half, on the loss of ye and the original second-person singular pronoun thou, is interesting, but the second half, on the origins of she and they, missed the mark, in my opinion.

I recommend listening to the whole thing, but here’s the short version. The pronouns she and they/them/their(s) are new to the language, relatively speaking. This is what the personal pronoun paradigm looked like in Old English:

Case         Masculine   Neuter   Feminine   Plural
Nominative   hē          hit      hēo        hīe
Accusative   hine        hit      hīe        hīe
Dative       him         him      hire       him
Genitive     his         his      hire       heora

There was some variation in some forms in different dialects and sometimes even within a single dialect, but this table captures the basic forms. (Note that the vowels here basically have classical values, so hē would be pronounced somewhat like hey, hire would be something like hee-reh, and so on. A macron or acute accent just indicates that a vowel is longer.)

One thing that’s surprising is how recognizable many of them are. We can easily see he, him, and his in the singular masculine forms (though hine, along with all the other accusative forms, has been lost), it (which has lost its h) in the singular neuter forms, and her in the singular feminine forms. The real oddballs here are the singular feminine nominative, hēo, and the third-person plural forms. They look nothing like their modern forms.

These changes started when the case system began to disappear at the end of the Old English period. Hē, hēo, and hīe began to merge together, which would have led to a lot of confusion. But during the Middle English period (roughly 1100 to 1500 AD), some new pronouns appeared, and then things started settling down into the paradigms we know now: he/him/his, it/it/its, she/her/her, and they/them/their. (Note that the original dative and genitive forms for it were identical to those for he, but it wasn’t until Early Modern English that these were replaced by it and its, respectively.)

The origin of they/them/their is fairly uncontroversial: these were apparently borrowed from Old Norse–speaking settlers, who invaded during the Old English period and captured large parts of eastern and northern England, forming what is known as the Danelaw. These Old Norse speakers gave us quite a lot of words, including anger, bag, egg, get, leg, and sky.

The Old Norse words for they/them/their looked like this:

Case         Masculine   Neuter   Feminine
Nominative   þeir        þau      þær
Accusative   þá          þau      þær
Dative       þeim        þeim     þeim
Genitive     þeirra      þeirra   þeirra

If you look at the masculine column, you’ll notice the similarity to the current they/them/their paradigm. (Note that the letter that looks like a cross between a b and a p is a thorn, which stood for the sounds now represented by th in English.)

Many Norse borrowings lost their final r, and unstressed final vowels began to be dropped in Middle English, which would yield þei/þeim/þeir. (As with the Old English pronouns, the accusative form was lost.) It seems like a pretty straightforward case of borrowing. The English third-person pronouns began to merge together as the result of some regular sound changes, but the influx of Norse speakers provided us an alternative for the plural forms.

But not so fast, McWhorter says. Borrowing nouns, verbs, and the like is pretty common, but borrowing pronouns, especially personal pronouns, is pretty rare. So he proposes an alternative origin for they/them/their: the Old English demonstrative pronouns—that is, words like this and these (though in Old English, the demonstratives functioned as definite articles too). Since hē/hēo/hīe were becoming ambiguous, McWhorter argues, English speakers turned to the next best thing: a set of words meaning essentially “that one” or “those ones”. Here’s what the plural demonstrative pronouns in Old English looked like:

Case         Plural
Nominative   þā
Accusative   þā
Dative       þǣm/þām
Genitive     þāra/þǣra

(Old English had a common plural form rather than separate plural forms for the masculine, neuter, and feminine genders.)

There’s some basis for this kind of change from a demonstrative to a personal pronoun: third-person pronouns in many languages come from demonstratives, and the third-person plural pronouns in Old Norse actually come from demonstratives themselves. That explains why they look similar to the Old English demonstratives: they all start with þ, and the dative and genitive forms end in -m and -r, just like them/their and the Old Norse forms do.

But notice that the vowels are different. Instead of ei in the nominative, dative, and genitive forms, we have ā or ǣ. This may not seem like a big deal, but generally speaking, vowel changes don’t just randomly affect a few words at a time; they usually affect every word with that sound. There has to be some way to explain the change from ā to ei/ey.

And to make matters worse, we know that ā (/ɑː/ in the International Phonetic Alphabet) raised to /ɔː/ (the vowel in court or caught if you don’t rhyme it with cot) during Middle English and eventually raised to /oʊ/ (the vowel in coat) during the Great Vowel Shift. In a nutshell, if English speakers had started using þā as the third-person plural pronoun in the nominative case, we’d be saying tho rather than they today.
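To make that line of reasoning concrete, here’s a minimal sketch in Python. It’s a toy illustration under the assumptions above, not a real sound-change engine: the spelling substitutions are just shorthand for the developments the previous paragraphs describe.

```python
# Toy illustration: if "they" had descended from Old English þā, the regular
# development of long ā would have carried it to a modern "tho". The
# substitutions below are illustrative shorthand, not actual phonology.

# Development of OE long ā described above, in IPA
LONG_A_PATH = ["ɑː", "ɔː", "oʊ"]  # Old English > Middle English > after the Great Vowel Shift

def hypothetical_modern_reflex(oe_form: str) -> str:
    """Spell out what an OE form with þ and long ā would look like today."""
    # þ spells the th-sound; long ā ends up as the vowel of "coat", spelled o
    return oe_form.replace("þ", "th").replace("ā", "o")

print(" > ".join(LONG_A_PATH))           # ɑː > ɔː > oʊ
print(hypothetical_modern_reflex("þā"))  # tho, not they
```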

But the biggest problem is that the historical evidence just doesn’t support the idea that they originates from þā. The first recorded instance of they, according to The Oxford English Dictionary, is in a twelfth-century manuscript known as the Ormulum, written by a monk known only as Orm. Orm is the Old Norse word for worm, serpent, or dragon, and the manuscript is written in an East Midlands dialect, which means that it came from the Danelaw, the area once controlled by Norse speakers.

In the Ormulum we find forms like þeȝȝ and þeȝȝre for they and their, respectively. (The letter ȝ, known as yogh, could represent a variety of sounds, but in this case it represents /i/ or /j/.) Other early forms of they include þei, þai, and thei.

The spread of these new forms was gradual, moving from the areas of heaviest Old Norse influence throughout the rest of the English-speaking British Isles. The early-fifteenth-century Hengwrt Chaucer, a manuscript of The Canterbury Tales, usually has they as the subject but retains her for genitives (from the Old English plural genitive form hiera or heora) and em for objects (from the Old English plural dative him). The ’em that we use today as a reduced form of them probably traces back to this, making it the last vestige of the original Old English third-person plural pronouns.

So to make a long story short, we have new pronouns that look like Old Norse pronouns that arose in an Old Norse–influenced area and then spread out from there. McWhorter’s argument boils down to “borrowing personal pronouns is rare, so it must not have happened”, and then he ignores or hand-waves away any problems with this theory. The idea that these pronouns instead come from the Old English þā just doesn’t appear to be supported either phonologically or historically.

This isn’t even an area of controversy. When I tweeted about McWhorter’s podcast, Merriam-Webster lexicographer Kory Stamper was surprised, responding, “I…didn’t realize there was an argument about the ety of ‘they’? I mean, all the etymologists I know agree it’s Old Norse.” Borrowing pronouns may be rare, but in this case all the signs point to yes.

For a more controversial etymology, though, you’ll have to wait until a later date, when I wade into the murky etymology of she.


Stupidity on Singular They

A few weeks ago, the National Review published a singularly stupid article on singular they. It’s wrong from literally the first sentence, in which the author, Josh Gelernter, says that “this week, the 127-year-old American Dialect Society voted the plural pronoun ‘they,’ used as a singular pronoun, their Word of the Year.” The vote didn’t happen that week; it’s a piece of old news that had recently gone viral again. The American Dialect Society announced its word of the year, as it typically does, at the beginning of the year. Unfortunately, this is a good indication of the quality of the author’s research throughout the rest of the article.

After calling those who use singular they stupid and criticizing the ADS for failing to correct them (which is a fairly serious misunderstanding of the purpose of the ADS and the entire field of linguistics in general), Gelernter says that we already have a gender-neutral third-person pronoun, and it’s he. He cites “the dictionary of record”, Webster’s Second International, for support. His choice of dictionary is telling. For those not familiar with it, Webster’s Second, or W2, was published in 1934 and has been out of print for decades.

The only reason someone would choose it over Webster’s Third, published in 1961, is as a reaction to the perception that W3 was overly permissive. When it was first published, it was widely criticized for its more descriptive stance, which did away with some of the more judgmental usage labels. Even W3 is out of date and has been replaced with the new online Unabridged; W2 is the dictionary of record only for someone who refuses to accept any of the linguistic change or social progress of the last century.

Gelernter notes that W2’s first definition for man is “a member of the human race”, while the “male human being” sense “is the second-given, secondary definition.” Here it would have helped Gelernter to read the front matter of his dictionary. Unlike some other dictionaries, Merriam-Webster arranges entries not in order of primary or central meanings to more peripheral meanings but in order of historical attestation. Man was most likely originally gender-neutral, while the original word for a male human being was wer (which survives only in the word werewolf). Over time, though, wer fell out of use, and man began pulling double duty.[1]

So just because an entry is listed first in a Merriam-Webster dictionary does not mean it’s the primary definition, and just because a word originally meant one thing (and still does mean that thing to some extent) does not mean we must continue to use it that way.

Interestingly, Gelernter admits that the language lost some precision when the plural you pushed out the singular thou as a second-person pronoun, though, bizarrely, he says that it was for good reason, because you had caught on as a more polite form of address. The use of you as a singular pronoun started as a way to be polite and evolved into an obsession with social status, in which thou was eventually relegated to inferiors before finally dropping out of use.

The resurgence of singular they in the twentieth century was driven by a different sort of social force: an acknowledgement that the so-called gender-neutral he is not really gender-neutral. Research has shown that gender-neutral uses of he and man cause readers to think primarily of males, even when context makes it clear that the person could be of either gender. (Here’s just one example.) They send the message that men are the default and women are other. Embracing gender-neutral language, whether it’s he or she or they or some other solution, is about correcting that imbalance by acknowledging that women are people too.

And in case you still think that singular they is just some sort of newfangled politically correct usage, you should know that it has been in use since the 1300s and has been used by literary greats from Chaucer to Shakespeare to Orwell.[2] For centuries, nobody batted an eye at singular they, until grammarians started to proscribe it in favor of generic he in the eighteenth and nineteenth centuries. Embracing singular they doesn’t break English grammar; it merely embraces something that’s been part of English grammar for seven centuries.

At the end, we get to the real heart of Gelernter’s article: ranting about new gender-neutral job titles in the armed forces. Gelernter seems to think that changing to gender-neutral titles will somehow make the members of our armed forces suddenly forget how to do their jobs. This isn’t really about grammar; it’s about imagining that it’s a burden to think about the ways in which language affects people, that it’s a burden to treat women with the same respect as men.

But ultimately, it doesn’t matter what Josh Gelernter thinks about singular they or about gender-neutral language in general. Society will continue to march on, just as language has continued to march on in the eight decades since his beloved Webster’s Second was published. But remember that we have a choice in deciding how language will march on. We can use our language to reflect outdated and harmful stereotypes, or we can use it to treat others with the respect they deserve. I know which one I choose.

Notes

1. The Online Etymology Dictionary notes that a similar thing happened with the Latin vir (cognate with wer) and homo. Vir fell out of use as homo took over the sense of “male human”.
2. I once wrote that Orwell didn’t actually use singular they; it turns out that the quote attributed to him in Merriam-Webster’s Dictionary of English Usage was wrong, but he really did use it.


New Shirt Design: We Can Even!

I’m pleased to announce a new T-shirt design in my shop: We Can Even! It’s a classic design updated for this modern era of being unable to even.


And through October 25, you can get free shipping on all orders in the Arrant Pedantry Store when you use the coupon code JUST4YOU at checkout.


Book Review: What the F

Disclosure: I received a free advance review copy of this book from the publisher, Basic Books.

I was a little nervous when I was asked to review Benjamin K. Bergen’s new book, What the F: What Swearing Reveals About Our Language, Our Brains, and Ourselves. Unlike many of my linguist and editor friends, I’m not much of a swearer. I was raised in a fairly conservative religious household, and I can count the number of times I swore as a child on one hand with some fingers left over. Even now I swear pretty rarely. When someone asked me if I’d like to contribute to the group blog Strong Language (tagline: a sweary blog about swearing), I politely declined simply because I wouldn’t have much to add.

But even for someone with as clean a mouth as mine, What the F is a fascinating read. Bergen starts by looking at the different realms swear words come from, like religion, sex, bodily effluvia, and disparaged groups. Most swear words in most cultures probably fall into one of these categories, but the categories are weighted differently from culture to culture. For example, in French-speaking Quebec, some of the most offensive words are religious terms, even though most Quebecois nowadays are not very religious. Japanese, on the other hand, is said to lack dedicated swear words, but it still has ways to express the same ideas.

Bergen then dives into what makes a swear word a swear word, exploring concepts like sound symbolism to see whether there’s something innately sweary about certain words. In English, at least, there are some strong tendencies—our swear words tend to be monosyllabic and end with a consonant, especially consonants lower on the sonority hierarchy, like stops, affricates, and fricatives. That is, a word ending in k sounds swearier than a word ending in m. But this doesn’t necessarily hold across other languages, and it doesn’t offer a complete explanation for why English swear words are what they are. There are certainly other words that fit the pattern but aren’t swears. To a large extent it’s simply arbitrary.

Similarly, gestures like flipping the bird are largely arbitrary too, despite what appears to be some striking iconicity. But rude gestures vary widely, so that a gesture that seems harmless to Americans, like a thumbs-up or an A-OK, can be just as offensive as the bird in other countries. Even swearing in sign language isn’t as symbolic or iconic as you might think; signs for the f-word are quite different in American and British Sign Language, though the connection between signifier and signified is perhaps a little less arbitrary than in spoken language. Swear words are swear words because convention says they are. If you hear people use a certain word a certain way, you figure out pretty quickly what it means.

Some of the most fascinating parts of the book, though, come from what swearing tells us about how the brain works. Most students of linguistics probably know that some stroke victims can still swear fluently even if their other language abilities are severely impaired, which tells us that swearing uses different mental circuitry from regular language—swearing taps into much more primal neural hardware in the basal ganglia. On the flip side, Tourette’s syndrome, which involves dysfunction of the basal ganglia, can cause an overwhelming urge to swear. Some deaf people with Tourette’s feel the same urge, but the swearing comes out via their hands rather than their mouths. And the fact that the brain reacts to prevent us from accidentally saying swear words shows that we have a built-in censor monitoring our speech as it’s produced.

In a later chapter, Bergen debunks a paper by a team from where else but the School of Family Life at my alma mater, Brigham Young University, that purported to show that exposure to swearing actually harms children. Although there’s evidence that slurs can harm children, and verbal abuse in general can be harmful, there’s actually no evidence that exposure to swearing causes children harm. And Bergen ends with a thoughtful chapter titled “The Paradox of Profanity”, which argues that profanity gets much of its power from our attempts to suppress it. The less frequently we hear a swear word, the more shocking it is when we do hear it.

Throughout the book, Bergen maintains a nice balance between academic and approachable. The book is backed up by copious notes, but the writing is engaging and often funny, as when a footnote on the “various other manifestations” of the chicken gesture (“bent elbows moving up and down to depict chicken wings”) led to this Arrested Development clip.

Come for the swears; stay for a fascinating exploration of language and humanity.

What the F: What Swearing Reveals About Our Language, Our Brains, and Ourselves is available now at Amazon and other booksellers.


To Boldly Split Infinitives

Today is the fiftieth anniversary of the first airing of Star Trek, so I thought it was a good opportunity to talk about split infinitives. (So did Merriam-Webster, which beat me to the punch.) If you’re unfamiliar with split infinitives, or if you’ve thankfully managed to forget what they are since your high school days, a split infinitive is what you get when you put some sort of modifier between the to and the infinitive verb itself (that is, a verb that is not inflected for tense, like be or go). For many years, the construction was considered verboten.

Kirk’s opening monologue on the show famously featured the split infinitive “to boldly go”, and it’s hard to imagine the phrase working so well without it. “To go boldly” and “boldly to go” both sound terribly clunky, partly because they ruin the rhythm of the phrase. “To BOLDly GO” is a nice iambic dimeter, meaning that it has two metrical feet, each consisting of an unstressed syllable followed by a stressed syllable—duh-DUN duh-DUN. “BOLDly to GO” is a trochee followed by an iamb, meaning that we have a stressed syllable, two unstressed syllables, and then another stressed syllable—DUN-duh duh-DUN. “To GO BOLDly” is the reverse, an iamb followed by a trochee, leading to a stress clash in the middle where the two stresses butt up against each other and then ending on a weaker unstressed syllable. Blech.
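For anyone who wants to see those three scansions side by side, here’s a tiny sketch in Python; the stress markings are simply the ones worked out above, with “x” for an unstressed syllable and “/” for a stressed one.

```python
# The three adverb placements and their stress patterns, as scanned above
# ("x" = unstressed syllable, "/" = stressed syllable).
patterns = {
    "to BOLD-ly GO": ["x", "/", "x", "/"],  # iamb + iamb: duh-DUN duh-DUN
    "BOLD-ly to GO": ["/", "x", "x", "/"],  # trochee + iamb: DUN-duh duh-DUN
    "to GO BOLD-ly": ["x", "/", "/", "x"],  # iamb + trochee: stress clash, weak ending
}

for phrase, beats in patterns.items():
    print(f"{phrase:<14} {' '.join(beats)}")
```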

But the root of the alleged problem with split infinitives concerns not meter but syntax. The question is where it’s syntactically permissible to put a modifier in a to-infinitive phrase. Normally, an adverb would go just in front of the verb it modifies, as in She boldly goes or He will boldly go. Things were a little different when the verb was an infinitive form preceded by to. In this case the adverb often went in front of the to, not in front of the verb itself.

As Merriam-Webster’s post notes, split infinitives date back at least to the fourteenth century, though they were not as common back then and were often used in different ways than they are today. But they mostly fell out of use in the sixteenth century and then roared back to life in the eighteenth century, only to be condemned by usage commentators in the nineteenth and twentieth centuries. (Incidentally, this illustrates a common pattern of prescriptivist complaints: a new usage arises, or perhaps it has existed for literally millennia; it goes unnoticed for decades or even centuries; someone finally notices it and decides they don’t like it, often because they don’t understand it; and suddenly everyone starts decrying this terrible new thing that’s ruining English.)

It’s not particularly clear, though, why people thought that this particular thing was ruining English. The older boldly to go was replaced by the resurgent to boldly go. It’s often claimed that people objected to split infinitives on the basis of analogy with Latin (Merriam-Webster’s post repeats this claim). In Latin, an infinitive is a single word, like ire, and it can’t be split. Ergo, since you can’t split infinitives in Latin, you shouldn’t be able to split them in English either. The problem with this theory is that there’s no evidence to support it. Here’s the earliest recorded criticism of the split infinitive, according to Wikipedia:

The practice of separating the prefix of the infinitive mode from the verb, by the intervention of an adverb, is not unfrequent among uneducated persons. . . . I am not conscious, that any rule has been heretofore given in relation to this point. . . . The practice, however, of not separating the particle from its verb, is so general and uniform among good authors, and the exceptions are so rare, that the rule which I am about to propose will, I believe, prove to be as accurate as most rules, and may be found beneficial to inexperienced writers. It is this :—The particle, TO, which comes before the verb in the infinitive mode, must not be separated from it by the intervention of an adverb or any other word or phrase; but the adverb should immediately precede the particle, or immediately follow the verb.

No mention of Latin or of the supposed unsplittability of infinitives. In fact, the only real argument is that uneducated people split infinitives, while good authors didn’t. Some modern usage commentators have used this purported Latin origin of the rule as the basis of a straw-man argument: Latin couldn’t split infinitives, but English isn’t Latin, so the rule isn’t valid. Unfortunately, Merriam-Webster’s post does the same thing:

The rule against splitting the infinitive comes, as do many of our more irrational rules, from a desire to more rigidly adhere (or, if you prefer, “to adhere more rigidly”) to the structure of Latin. As in Old English, Latin infinitives are written as single words: there are no split infinitives, because a single word is difficult to split. Some linguistic commenters have pointed out that English isn’t splitting its infinitives, since the word to is not actually a part of the infinitive, but merely an appurtenance of it.

The problem with this argument (aside from the fact that the rule wasn’t based on Latin) is that modern English infinitives—not just Old English infinitives—are only one word too and can’t be split either. The infinitive in to boldly go is just go, and go certainly can’t be split. So this line of argument misses the point: the question isn’t whether the infinitive verb, which is a single word, can be split in half, but whether an adverb can be placed between to and the verb. As Merriam-Webster’s Dictionary of English Usage notes, the term split infinitive is a misnomer, since it’s not really the infinitive but the construction containing an infinitive that’s being split.

But in recent years I’ve seen some people take this terminological argument even further, saying that split infinitives don’t even exist because English infinitives can’t be split. I think this is silly. Of course they exist. It used to be that people would say boldly to go; then they started saying to boldly go instead. It doesn’t matter what you call the phenomenon of moving the adverb so that it’s snug up against the verb—it’s still a phenomenon. As Arnold Zwicky likes to say, “Labels are not definitions.” Just because the name doesn’t accurately describe the phenomenon doesn’t mean it doesn’t exist. We could call this phenomenon Steve, and it wouldn’t change what it is.

At this point, the most noteworthy thing about the split infinitive is that there are still some people who think there’s something wrong with it. The original objection was that it was wrong because uneducated people used it and good writers didn’t, but that hasn’t been true in decades. Most usage commentators have long since given up their objections to it, and some even point out that avoiding a split infinitive can cause awkwardness or even ambiguity. In his book The Sense of Style, Steven Pinker gives the example The board voted immediately to approve the casino. Which word does immediately modify—voted or approve?

But this hasn’t stopped The Economist from maintaining its opposition to split infinitives. Its style guide says, “Happy the man who has never been told that it is wrong to split an infinitive: the ban is pointless. Unfortunately, to see it broken is so annoying to so many people that you should observe it.”

I call BS on this. Most usage commentators have moved on, and I suspect that most laypeople either don’t know or don’t care what a split infinitive is. I don’t think I know a single copy editor who’s bothered by them. If you’ve been worrying about splitting infinitives since your high school English teacher beat the fear of them into you, it’s time to let it go. If they’re good enough for Star Trek, they’re good enough for you too.

But just for fun, let’s do a little poll:

Do you find split infinitives annoying?




Book Review: The Subversive Copy Editor

Disclosure: I received a free copy of this book from the University of Chicago Press.

I have a terrible editor confession:[1] until now, I had not read Carol Fisher Saller’s book The Subversive Copy Editor. I also have to take back what I said about But Can I Start a Sentence with “But”?, because this is the best book on editing I’ve ever read.

The book, now in its second edition, has been revised and expanded with new chapters. In the introduction, Saller explains just what she means by “subversive”—rather than sneaking errors into print to sabotage the writer, she aims to subvert the stereotype of the editor locked in an eternal struggle with the writer or so bound by pointless rules that they can’t see the forest of the copy for the trees of supposed errors.

I find Saller’s views on editing absolutely refreshing. I’ve never been a fan of the idea that editors and authors are mortal enemies locked in an eternal struggle. Authors want to share their ideas, and readers, we hope, want to read them; editors help facilitate the exchange. Shouldn’t we all be on the same side?

Saller starts with a few important reminders—copy editors aren’t the boss, and the copy doesn’t belong to us—before diving into some practical advice on how to establish good author-editor relations. It all starts with an introductory phone call or email, which is the editor’s chance to establish their carefulness, transparency, and flexibility. If you show the author from the beginning that you’re on their side, the project should get off to a good start.

And to maintain good relations throughout a project, it’s important to keep showing that you’re careful, transparent, and flexible. Don’t bombard the author with too many queries about things that they don’t know or care about, like arbitrary points of style. Just make a decision, explain it succinctly if you feel the need, and move on. And don’t lecture or condescend in your queries either. Saller recommends reading through all of your queries again once you get to the end of a project, because sometimes you’ll read a query you wrote days ago and realize that you unintentionally came across as a bit of a jerk.

Too many editors mechanically apply a style without stopping to ask themselves whether they’re making the manuscript better or merely making it different. Sometimes a manuscript won’t perfectly conform to Chicago or whatever style you may be using, but that can be okay as long as it’s consistent and not wrong. (If you’re editing for an academic journal or other publication with a rigid style, of course, that’s a different story.) But there’s no reason to spend hours and hours changing an entire book manuscript from one arbitrary but valid style to another equally arbitrary but valid style. Not only have you wasted time and probably irritated the author, but there’s a good chance that you’ve missed something, introduced errors, or both. Rather than “What’s the rule?” Saller suggests asking, “What is helpful?” or “What makes sense?”

And Saller doesn’t have much patience for editors who get “hung up on phantom issues and personal bugaboos,” who feel compelled to “ferret out every last which and change it to that”[2]—if you’re still relying on your high school English teacher’s lectures on grammar, you need to get with the times. Get some good (current!) reference books. Learn to look things up online.

I also appreciated the advice on how to manage difficult projects. When faced with a seemingly insurmountable task, Saller recommends a few simple steps: automate, delegate, reevaluate, and accept your fate. See if you can find a macro or other software tool to save you from having to grind through long, repetitive tasks. Delegate things to an intern if possible. (Sorry, interns!) Ask yourself whether you really need to do what you think needs to be done. And if all else fails, simply knuckle down and get through it.

There’s also a chapter to help writers navigate the copyediting process, along with chapters on learning to use your word processor better, managing deadlines, working as a freelancer, and more. And throughout it all Saller provides sensible, practical advice. Some of my favorite bits come from a chapter called “The Zen of Copyediting,” which aims to help editors let go of the things that don’t really matter. When faced with an apathetic author, one of Saller’s colleagues tells herself, “You can’t care about the book more than the author.” Saller herself dares to suggest that “some of our ‘standards’ are just time-consuming habits that don’t really make a difference to the reader.” And finally, one of Saller’s former mentors liked to say, “Remember—it’s only a book.”

Whether you’re a seasoned editor or a novice just breaking into the field, The Subversive Copy Editor provides sage advice on just about every aspect of the job. It should be a part of every editor’s library.

The Subversive Copy Editor is available now at Amazon and other booksellers.

Notes

1. You can choose to read that either as a terrible confession for an editor or as the confession of a terrible editor.
2. I saw this happen once on a proofread. Remarkably, I don’t think the author used a single relative that in the entire book. The proofreader hunted down every last restrictive which and changed it to that—and missed a lot of real errors in the process. And changing that many whiches to thats surely would have wreaked havoc with the copyfitting.


Whoa There

Recently, the freelance writer and film critic Eric Snider tweeted this:

A few days later, a friend linked to this discussion thread on Goodreads started by sci-fi/fantasy author Lois McMaster Bujold. In it, Bujold asked readers to help her with what she called “distributed proofreading” (I’ll just note in passing that the idea of crowdsourcing your proofreading makes my skin crawl), and one reader helpfully pointed out that Bujold had misspelled “whoa” as “woah”. Bujold responded that whoa and woah mean different things, so it was not a misspelling: “‘Whoa!’ is a command meaning ‘Stop!’ ‘Woah!’ is an exclamation of astonishment, rendered phonetically. The original meaning stands.”

I’ve never read anything by Bujold, so I have no idea whether she was being a little tongue in cheek or whether she was simply mistaken. Woah doesn’t appear in Merriam-Webster’s Collegiate Dictionary, The American Heritage Dictionary, or The Random House Dictionary. According to all of these, there is just one word, whoa, that can be used as a command (often to a horse) to stop or as an exclamation of surprise.

The Oxford English Dictionary, of course, paints a more complicated picture. Whoa dates to the 1800s and is a variant of an earlier who (pronounced the same as whoa, not the same as the interrogative pronoun who), which dates to the 1400s. Who, in turn, is a variant of an earlier ho, which was borrowed from Old French in the 1300s. Some of the spellings recorded in the OED for these three related words are whoa, whow, whoo, whoe, hoo, and hoe. A search for woah leads to the entry woa, which is listed as a variant of whoa, with the forms woa and woah. Dinosaur Comics author Ryan North seems to prefer the hybrid form “whoah”:

[Dinosaur Comics panel: “whoah”]

But despite this variation, a search for whoa and woah in the Google Books Ngrams Viewer shows that whoa has been the overwhelmingly more popular form for at least the last two hundred years.

But a search in the BYU GloWbE Corpus, which includes unedited material from the web, shows that whoa occurs at a rate of 2.02 per million words in blogs and woah at a rate of 0.8 per million—not neck and neck, but much closer than we see in the edited material in Google Books. This means that an awful lot of people misspell whoa, and those misspellings are generally edited out of published writing. (Though Bujold’s books are apparently an exception; maybe she talked a copyeditor into letting her keep woah.)
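To put those rates in perspective, here’s a quick back-of-the-envelope sketch. The per-million figures are the ones quoted above; the corpus size is a made-up round number, used only to show what such rates imply in raw counts.

```python
# Comparing the GloWbE blog rates quoted above. The per-million rates come
# from the post; the 100-million-word corpus size is hypothetical.
WHOA_PER_MILLION = 2.02
WOAH_PER_MILLION = 0.8

print(f"whoa : woah ratio = {WHOA_PER_MILLION / WOAH_PER_MILLION:.2f}")  # ~2.53

words = 100_000_000  # hypothetical blog corpus size
whoa_hits = WHOA_PER_MILLION * words / 1_000_000  # 202
woah_hits = WOAH_PER_MILLION * words / 1_000_000  # 80
print(f"expected hits: {whoa_hits:.0f} whoa, {woah_hits:.0f} woah")
# woah would account for over a quarter of the tokens: common enough that
# editors run into it (and correct it) all the time.
```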

It seems obvious where people are getting the woah spelling: yeah is spelled very similarly, with a semivowel, two vowels, and a silent h. And if you’re like many Americans and don’t distinguish between wh and w—that is, you pronounce which and witch identically—then it’s not obvious where the h goes.

But as Eric Snider noted, many people don’t seem to know how to spell yeah either. The OED says that yeah is a casual pronunciation of yes that originated in the US around 1900. The entry for yeh says much the same thing: “colloq. or dial. var. of yes n.1 or yea v. ” The earliest citation dates to 1920. The entry for yah, interestingly, says that it’s a representation of German or Dutch speech (in both of these languages, the word for “yes” is ja), and the earliest citation dates to 1863. A citation from the London Daily News in 1905 reads, “America..has two substitutes for ‘yes.’ One of them is ‘yep’ and the other is ‘yah.’” I have to wonder if German and Dutch influenced the rise and spread of yeah in American English.

But regardless of its ultimate origin, yeah arose in speech, and so it’s no surprise that people came up with different ways to spell this new word. Still, most people have settled on yeah in edited writing, even though yah and ya are common in unedited writing. I even had a friend who used yeay, and I was never quite sure if this was supposed to be pronounced like yeah or yay or somewhere in between the two. (Interestingly, yay, which arose as a variant of yea, is not found in Merriam-Webster’s Collegiate, though it is in American Heritage and the OED. And surprisingly, it dates to only 1963.)

I don’t expect the situation to change anytime soon. The more unedited writing people read, the more forms like woah and yah will look normal. Editors may continue to correct them in published writing when we get the chance, but people will go on merrily spelling them any way they please.



Book Review: But Can I Start a Sentence with “But”?


Disclosure: I received a free copy of this book from the University of Chicago Press.

I have to admit that I was a little skeptical when I heard that the University of Chicago Press was putting out a collection of questions and answers from the popular Chicago Style Q&A. What’s the point of having it in book form when the online Q&A is freely available and easily searchable? And yet I have to admit that this charming little gift book is one of the best books on editing I’ve ever read.

If you’re not familiar with the Chicago Style Q&A, it’s a place where anyone can submit a question to the staff in the manuscript editing department at the University of Chicago Press. Selected questions and answers are then posted monthly. I don’t read the Q&A regularly, but when you search Chicago’s website, answers from the Q&A appear in the results. It’s a great repository of answers to questions that aren’t necessarily covered in the manual itself.

Because the book is simply a compilation of questions and answers, the organization is necessarily somewhat loose, though the book’s editors have grouped them into topics such as Possessives and Attributes, How Do You Cite . . . ?, and, one of my favorites, Things That Freak Us Out. If you read the Q&A, you may have noticed that the editors have developed a bit of a snarky voice. Maybe it’s a result of staring at pages and pages of text all day or of dealing with recalcitrant authors. Or maybe the editors have just been asked one too many times about something that could have been found in the manual if the person asking had just looked. Whatever the reason, it makes reading the answers a lot of fun.

For example, when someone asks if an abbreviation that ends with a period should be followed by another period at the end of a sentence, the editors respond, “Seriously, have you ever seen two periods in a row like that in print? If we told you to put two periods, would you do it? Would you set your hair on fire if CMOS said you should?” Or when someone asks innocently enough, “Can I use the first person?”, they answer, “Evidently.” And when someone asks why it’s so hard to find things in the manual, they write, “It must just be one of those things. If only there were a search box, or an index . . .” And when a US Marine threatened to deploy a detail of marines to invade Chicago’s offices and impose the outdated two-spaces-after-a-sentence rule, they reply, “As a US Marine, you’re probably an expert at something, but I’m afraid it’s not this.” The editors at Chicago clearly suffer no fools.

But in between the bits of dry wit and singeing snark are some truly thoughtful remarks on the craft of editing. For instance, when someone says that they don’t think it’s helpful to write out “graphics interchange format” in full the first time when referring to GIFs, the editors simply respond, “You never have to do anything that isn’t helpful. If a style guide says you do, you need a better style guide.” Or when someone asks if you always need commas after introductory phrases like “in the summer of 1812”, they answer, “Rejoice: everyone is correct. Higher authorities are not interested in legislating commas to this degree. Peace.”

Even at a thousand pages or more, The Chicago Manual of Style can’t provide answers to everything, nor should it. Editing that relies on a list of black-and-white edicts tends to be mechanical and to miss the forest of the text for the trees of commas and hyphens. If you want to be a good editor, you have to learn how to use your head. As the editors say, “Make your choice with a view to minimizing inconsistencies, and record them in your style sheet.” There’s not always one right answer. Sometimes you just have to pick one and stick with it.

But perhaps my favorite answer is the last one in the book:

Q. My library shelves are full. I need to make some difficult decisions to make space for new arrivals. Is there any reason to keep my CMOS 14th and 15th editions?
A. What a question. If you had more children, would you give away your firstborn? Find a board and build another shelf.

Here’s my bookcase of editing and language books at home. Obviously it’s time for me to build another shelf.

[Photo: my bookcase of editing and language books]

But Can I Start a Sentence with “But”? Advice from the Chicago Style Q&A is available now. You can buy it from Amazon or your favorite bookseller.
