Arrant Pedantry

Book Review: Word by Word

Word by Word: The Secret Life of Dictionaries, by Kory Stamper

Disclosure: I received a free advance review copy of this book from the publisher, Pantheon Books. I also consider Kory Stamper a friend.

A lot of work goes into making a book, from the initial writing and development to editing, copyediting, design and layout, proofreading, and printing. Orders of magnitude more work go into making a dictionary, yet few of us give much thought to how dictionaries actually come into being. Most people probably don’t think about the fact that there are multiple dictionaries. We always refer to it as the dictionary, as if it were a monolithic entity.

In Word by Word, Merriam-Webster editor Kory Stamper shows us the inner workings of dictionary making, from gathering citations to defining to writing pronunciations to researching etymologies. In doing so, she also takes us through the history of lexicography and the history of the English language itself.

If you’ve read other popular books on lexicography, like The Lexicographer’s Dilemma by Jack Lynch, you’re probably already familiar with some of the broad outlines of Word by Word—where dictionaries come from, how words get in them, and so on. But Stamper presents even familiar ideas in a fresh way and with wit and charm. If you’re familiar with her blog, Harmless Drudgery, you know she’s a gifted writer. (And if you’re not familiar with it, you should remedy that as soon as possible.)

In discussing the influence of French and Latin on English, for example, she writes, “Blending grammatical systems from two languages on different branches of the Indo-European language tree is a bit like mixing orange juice and milk: you can do it, but it’s going to be nasty.” And in describing the ability of lexicographers to focus on the same dry task day in and day out, she says that “project timelines in lexicography are traditionally so long that they could reasonably be measured in geologic epochs.”

Stamper also deftly teaches us about lexicography by taking us through her own experience of learning the craft, from the job interview in which she gushed about medieval Icelandic family sagas to the day-to-day grind of sifting through citations to the much more politically fraught side of dictionary writing, like changing the definitions for marriage or nude (one of the senses was defined as the color of white skin).

But the real joy of Stamper’s book isn’t the romp through the history of lexicography or the English language or even the self-deprecating jokes about lexicographers’ antisocial ways. It’s the way in which Stamper makes stories about words into stories about us.

In one chapter, she looks into the mind of peevers by examining the impulse to fix English and explaining why so many of the rules we cherish are wrong:

The fact is that many of the things that are presented to us as rules are really just the of-the-moment preferences of people who have had the opportunity to get their opinions published and whose opinions end up being reinforced and repeated down the ages as Truth.

Real language is messy, and it doesn’t fit neatly into the categories of right and wrong that we’re taught. Learning this “is a betrayal”, she says, but it’s one that lexicographers have to get over if they’re going to write good dictionaries.

In the chapter “Irregardless”, she explores some of the social factors that shape our speech—race and ethnicity, geography, social class—to explain how she became one of the world’s foremost irregardless apologists when she started answering emails from correspondents who want the word removed from the dictionary. Though she initially shared her correspondents’ hatred of the word, an objective look at its use helped her appreciate it in all its nuanced, nonstandard glory. But—just like anyone else—she still has her own hangups and peeves, like when her teenage daughter started saying “I’m done my homework.”

In another chapter, she relates how she discovered that the word bitch had no stylistic label warning dictionary users that the word is vulgar or offensive, and she dives not only into the word’s history but also into modern efforts to reclaim the slur and the effects the word can have on those who hear it—anger, shame, embarrassment—even when it’s not directed at them.

And in my favorite chapter, she takes a look at the arcane art of etymology. “If logophiles want to be lexicographers when they grow up,” she writes, “then lexicographers want to be etymologists.” (I’ve always wanted to be an etymologist, but I don’t know nearly enough dead languages. Plus, there are basically zero job openings for etymologists.) Stamper relates the time when she brought some Finnish candy into the office, and Merriam-Webster’s etymologist asked her—in Finnish—if she spoke Finnish. She said—also in Finnish—that she spoke a little and asked if he did too. He replied—again, in Finnish—that he didn’t speak Finnish. This is the sort of logophilia that I can only dream of.

Stamper explodes some common etymological myths—no, posh and golf and the f word don’t originate from acronyms—before turning a critical eye on Noah Webster himself. The man may have been the founder of American lexicography, but his etymologies were crap. Webster was motivated by the belief that all languages descend from Hebrew, and so he tried to connect every word to a Hebrew root. But tracing a word’s history requires poring over old documents (often in one of those aforementioned dead languages) and painstakingly following it through the twists and turns of sound changes and semantic shifts.

Stamper ends the book with some thoughts on the present state and future of lexicography. The internet has enabled dictionaries to expand far beyond the limitations of print books—you no longer have to worry about things like line breaks or page counts—but it also pushes lexicographers to work faster even as it completely upends the business side of things.

It’s not clear what the future holds for lexicography, but I’m glad that Kory Stamper has given us a peek behind the curtain. Word by Word is a heartfelt, funny, and ultimately human look at where words come from, how they’re defined, and what they say about us.

Word by Word: The Secret Life of Dictionaries is available now at Amazon and other booksellers.

Whence Did They Come?

In a recent episode of Slate’s Lexicon Valley podcast, John McWhorter discussed the history of English personal pronouns. Why don’t we use ye or thee and thou anymore? What’s the deal with using they as a gender-neutral singular pronoun? And where do they and she come from?

The first half, on the loss of ye and the original second-person singular pronoun thou, is interesting, but the second half, on the origins of she and they, missed the mark, in my opinion.

I recommend listening to the whole thing, but here’s the short version. The pronouns she and they/them/their(s) are new to the language, relatively speaking. This is what the personal pronoun paradigm looked like in Old English:

Case         Masculine   Neuter   Feminine   Plural
Nominative   hē          hit      hēo        hīe
Accusative   hine        hit      hīe        hīe
Dative       him         him      hire       him
Genitive     his         his      hire       heora

There was some variation in certain forms across dialects and sometimes even within a single dialect, but this table captures the basic forms. (Note that the vowels here basically have classical values, so hē would be pronounced somewhat like hey, hire would be something like hee-reh, and so on. A macron or acute accent just indicates that a vowel is longer.)

One thing that’s surprising is how recognizable many of them are. We can easily see he, him, and his in the singular masculine forms (though hine, along with all the other accusative forms, has been lost), it (which has lost its h) in the singular neuter forms, and her in the singular feminine forms. The real oddballs here are the singular feminine nominative form, hēo, and the third-person plural forms. They look nothing like their modern forms.

These changes began when the case system started to disappear at the end of the Old English period. Hē, hēo, and hīe began to merge together, which would have led to a lot of confusion. But during the Middle English period (roughly 1100 to 1500 AD), some new pronouns appeared, and then things started settling down into the paradigms we know now: he/him/his, it/it/its, she/her/her, and they/them/their. (Note that the original dative and genitive forms for it were identical to those for he, but it wasn’t until Early Modern English that these were replaced by it and its, respectively.)

The origin of they/them/their is fairly uncontroversial: these were apparently borrowed from Old Norse–speaking settlers, who invaded during the Old English period and captured large parts of eastern and northern England, forming what is known as the Danelaw. These Old Norse speakers gave us quite a lot of words, including anger, bag, eye, get, leg, and sky.

The Old Norse words for they/them/their looked like this:

Case         Masculine   Neuter   Feminine
Nominative   þeir        þau      þær
Accusative   þá          þau      þær
Dative       þeim        þeim     þeim
Genitive     þeirra      þeirra   þeirra

If you look at the masculine column, you’ll notice the similarity to the current they/them/their paradigm. (Note that the letter that looks like a cross between a b and a p is a thorn, which stood for the sounds now represented by th in English.)

Many Norse borrowings lost their final r, and unstressed final vowels began to be dropped in Middle English, which would yield þei/þeim/þeir. (As with the Old English pronouns, the accusative form was lost.) It seems like a pretty straightforward case of borrowing. The English third-person pronouns began to merge together as the result of some regular sound changes, but the influx of Norse speakers provided us an alternative for the plural forms.

But not so fast, McWhorter says. Borrowing nouns, verbs, and the like is pretty common, but borrowing pronouns, especially personal pronouns, is pretty rare. So he proposes an alternative origin for they/them/their: the Old English demonstrative pronouns—that is, words like this and these (though in Old English, the demonstratives functioned as definite articles too). Since hē/hēo/hīe were becoming ambiguous, McWhorter argues, English speakers turned to the next best thing: a set of words meaning essentially “that one” or “those ones”. Here’s what the plural demonstrative pronouns in Old English looked like:

Case         Plural
Nominative   þā
Accusative   þā
Dative       þǣm/þām
Genitive     þāra/þǣra

(Old English had a common plural form rather than separate plural forms for the masculine, neuter, and feminine genders.)

There’s some basis for this kind of change from a demonstrative to a personal pronoun; third-person pronouns in many languages come from demonstratives, and the third-person plural pronouns in Old Norse actually come from demonstratives themselves, which explains why they look similar to the Old English demonstratives: they all start with þ, and the dative and genitive forms have the -m and -r on the end just like them/their and the Old Norse forms do.

But notice that the vowels are different. Instead of ei in the nominative, dative, and genitive forms, we have ā or ǣ. This may not seem like a big deal, but generally speaking, vowel changes don’t just randomly affect a few words at a time; they usually affect every word with that sound. There has to be some way to explain the change from ā to ei/ey.

And to make matters worse, we know that ā (/ɑː/ in the International Phonetic Alphabet) raised to /ɔː/ (the vowel in court or caught if you don’t rhyme it with cot) during Middle English and eventually raised to /oʊ/ (the vowel in coat) during the Great Vowel Shift. In a nutshell, if English speakers had started using þā as the third-person plural pronoun in the nominative case, we’d be saying tho rather than they today.
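To make that concrete, here is a toy sketch of my own (nothing from the podcast or from the scholarship): it simply runs Old English long ā through the two changes just described. The comparison word stān ‘stone’ and everything in the code are my additions, included purely for illustration.

```python
# Toy illustration only: run Old English long ā through the two regular
# vowel changes described above and see what the modern vowel would be.
# The IPA values and the order of the changes come from the post; the
# code itself and the comparison word are just for demonstration.

CHANGES = [
    ("Middle English raising", {"ɑː": "ɔː"}),  # OE /ɑː/ > ME /ɔː/
    ("Great Vowel Shift",      {"ɔː": "oʊ"}),  # ME /ɔː/ > ModE /oʊ/
]

def modern_reflex(vowel):
    """Apply each change in order to a single stressed long vowel."""
    for name, mapping in CHANGES:
        vowel = mapping.get(vowel, vowel)
        print(f"  after {name}: /{vowel}/")
    return vowel

for word in ["þā (the proposed source of 'they')", "stān (which became 'stone')"]:
    print(word)
    modern_reflex("ɑː")
    print()

# Both end up with /oʊ/: stān regularly gives modern "stone", so þā,
# if it were really the source of the plural pronoun, should have given
# "tho" rather than /ðeɪ/ "they".
```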

But the biggest problem is that the historical evidence just doesn’t support the idea that they originates from þā. The first recorded instance of they, according to The Oxford English Dictionary, is in a twelfth-century manuscript known as the Ormulum, written by a monk known only as Orm. Orm is the Old Norse word for worm, serpent, or dragon, and the manuscript is written in an East Midlands dialect, which means that it came from the Danelaw, the area once controlled by Norse speakers.

In the Ormulum we find forms like þeȝȝ and þeȝȝre for they and their, respectively. (The letter ȝ, known as yogh, could represent a variety of sounds, but in this case it represents /i/ or /j/.) Other early forms of they include þei, þai, and thei.

The spread of these new forms was gradual, moving from areas of heaviest Old Norse influence throughout the rest of the English-speaking British Isles. The early-fifteenth-century Hengwrt Chaucer, a manuscript of The Canterbury Tales, usually has they as the subject but retains her for genitives (from the Old English plural genitive form hiera or heora) and em for objects (from the Old English plural dative him). The ’em that we use today as a reduced form of them probably traces back to this, making it the last vestige of the original Old English third-person plural pronouns.

So to make a long story short, we have new pronouns that look like Old Norse pronouns that arose in an Old Norse–influenced area and then spread out from there. McWhorter’s argument boils down to “borrowing personal pronouns is rare, so it must not have happened”, and then he ignores or hand-waves away any problems with this theory. The idea that these pronouns instead come from the Old English þā just doesn’t appear to be supported either phonologically or historically.

This isn’t even an area of controversy. When I tweeted about McWhorter’s podcast, Merriam-Webster lexicographer Kory Stamper was surprised, responding, “I…didn’t realize there was an argument about the ety of ‘they’? I mean, all the etymologists I know agree it’s Old Norse.” Borrowing pronouns may be rare, but in this case all the signs point to yes.

For a more controversial etymology, though, you’ll have to wait until a later date, when I wade into the murky etymology of she.

The Drunk Australian Accent Theory

Last week a story started making the rounds claiming that the Australian accent is the result of an “alcoholic slur” from heavy-drinking early settlers. Here’s the story from the Telegraph, which is where I first saw it. The story has already been debunked by David Crystal and others, but it’s still going strong.

The story was first published in the Age by Dean Frenkel, a lecturer in public speaking and communications at Victoria University. Frenkel says that the early settlers frequently got drunk together, and their drunken slur began to be passed down to the rising generations.

Frenkel also says that “the average Australian speaks to just two thirds capacity—with one third of our articulator muscles always sedentary as if lying on the couch”. As evidence, he lists these features of Australian English phonology: “Missing consonants can include missing ‘t’s (Impordant), ‘l’s (Austraya) and ‘s’s (yesh), while many of our vowels are lazily transformed into other vowels, especially ‘a’s to ‘e’s (stending) and ‘i’s (New South Wyles) and ‘i’s to ‘oi’s (noight).”

The first sentence makes it sound as if Frenkel has done extensive phonetic studies on Australians—after all, how else would you know what a person’s articulator muscles are doing?—but the claim is pretty far-fetched. One-third of the average Australian’s articulator muscles are always sedentary? Wouldn’t they be completely atrophied if they were always unused? That sounds less like an epidemic of laziness and more like a national health crisis. But the second sentence makes it clear that Frenkel doesn’t have the first clue when it comes to phonetics and phonology.

There’s no missing consonant in impordant—the [t] sound has simply been transformed into an alveolar flap, [ɾ], which also happens in some parts of the US. This is a process of lenition, in which sounds become more vowel-like, but it doesn’t necessarily correspond to laziness or lax articulator muscles. Austraya does have a missing consonant—or rather, it has a liquid consonant, [l], that has been transformed into the semivowel [j]. This is also an example of lenition, but, again, lenition doesn’t necessarily have anything to do with the force of articulation. Yesh (I presume for yes) involves a slight change in the placement of the tip of the tongue—it moves slightly further back towards the palate—but it, too, has nothing to do with the force of articulation.

The vowel changes have even less to do with laziness. As David Crystal notes in his debunking, the raising of [æ] to [ε] in standing actually requires more muscular energy to produce, not less. I assume that lowering the diphthong [eɪ] to [æɪ] in Wales would thus take a little bit less energy, but the raising and rounding of [aɪ] to [ɔɪ] would require a little more. In other words, there is no clear pattern of laziness or laxness. Frenkel simply assumes that there’s a standard for which Australians should be aiming and that anything that misses that standard is evidence of laziness, regardless of the actual effort expended.

Even if it were a matter of laziness, the claim that one-third of the articulator muscles are always sedentary is absolutely preposterous. There’s no evidence that Frenkel has done any kind of research on the subject; this is just a number pulled from thin air based on his uninformed perceptions of Australian phonetics.

And, again, even if his claims about Australian vocal laxness were true, his claims about the origin of this supposed laxness are still pretty tough to swallow. The early settlers passed on a drunken slur to their children? For that to be even remotely possible, every adult in Australia would have had to be drunk literally all the time, including new mothers. If that were true, Australia would be facing a raging epidemic of fetal alcohol syndrome, not sedentary speech muscles.

As far as I know, there is absolutely zero evidence that Australian settlers were ever that drunk, that constant drunkenness can have an effect on children who aren’t drinking, or that the Australian accent has anything in common with inebriated speech.

When pressed, Frenkel attempts to weasel out of his claims, saying, “I am telling you, it is a theory.” But in his original article, he never claimed that it was a theory; he simply asserted it as fact. And strictly speaking, it isn’t even a theory—at best it’s a hypothesis, because he has clearly done no research to substantiate or verify it.

But all this ridiculousness is just a setup for his real argument, which is that Australians need more training in rhetoric. He says,

If we all received communication training, Australia would become a cleverer country. When rhetoric is presented effectively, it enables content to be communicated in a listener-friendly environment, with well-chosen words spoken at a listenable rate and with balanced volume, fluency, clarity and understandability.

Communication training could certainly be a good thing, but again, there’s a problem—this isn’t rhetoric. Rhetoric is the art of discourse and argumentation; what Frenkel is describing is more like diction or elocution. He’s deploying bad logic and terrible linguistics in service of a completely muddled argument, which is that Australians need to learn to communicate better.

In the end, what really burns me about this story isn’t that Frenkel is peddling a load of tripe but that journalists are so eager to gobble it up. Their ignorance of linguistics is disappointing, but their utter credulousness is completely dismaying. And if that weren’t bad enough, in an effort to present a balanced take on the story, journalists are still giving him credence even when literally every linguist who has commented on it has said that it’s complete garbage.

Huffington Post ran the story with the subhead “It’s a highly controversial theory among other academics”. (They also originally called Frenkel a linguist, but this has been corrected.) But calling Frenkel’s hypothesis “a highly controversial theory among other academics” is like saying that alchemy is a highly controversial theory among chemists or that the flat-earth model is a highly controversial theory among geologists. This isn’t a real controversy, at least not in any meaningful way; it’s one uninformed guy spouting off nonsense and a lot of other people calling him on it.

In the end, I think it was Merriam-Webster’s Kory Stamper who had the best response:

[Image: Kory Stamper’s tweeted response to the drunk-accent story]

Language, Logic, and Correctness

In “Why Descriptivists Are Usage Liberals”, I said that there are some logical problems with declaring something to be right or wrong based on evidence. A while back I explored this problem in a piece titled “What Makes It Right?” over on Visual Thesaurus.

The terms prescriptive and descriptive were borrowed from philosophy, where they are used to talk about ethics, and the tension between these two approaches is reflected in language debates today. The questions we have today about correct usage are essentially the same questions philosophers have been debating since the days of Socrates and Plato: what is right, and how do we know?

As I said on Visual Thesaurus, all attempts to answer these questions run into a fundamental logical problem: just because something is doesn’t mean it ought to be. Most people are uncomfortable with the idea of moral relativism and believe at some level that there must be some kind of objective truth. Unfortunately, it’s not entirely clear just where we find this truth or how objective it really is, but we at least operate under the convenient assumption that it exists.

But things get even murkier when we try to apply this same assumption to language. While we may feel safe saying that murder is wrong and would still be wrong even if a significant portion of the population committed murder, we can’t safely make similar arguments about language. Consider the word bird. In Old English, the form of English spoken from about 500 AD to about 1100 AD, the word was brid. Bird began as a dialectal variant that spread and eventually supplanted brid as the standard form by about 1600. Have we all been saying this word wrong for the last four hundred years or so? Is saying bird just as wrong as saying nuclear as nucular?

No, of course not. Even if it had been considered an error once upon a time, it’s not an error anymore. Its widespread use in Standard English has made it standard, while brid would now be considered an error (if someone were to actually use it). There is no objectively correct form of the word that exists independent of its use. That is, there is no platonic form of the language, no linguistic Good to which a grammarian-king can look for guidance in guarding the city.

This is why linguistics is at its core an empirical endeavor. Linguists concern themselves with investigating linguistic facts, not with making value judgements about what should be considered correct or incorrect. As I’ve said before, there are no first principles from which we can determine what’s right and wrong. Take, for example, the argument that you should use the nominative form of pronouns after a copula verb. Thus you should say It is I rather than It is me. But this argument assumes as prior the premise that copula verbs work this way and then deduces that anything that doesn’t work this way is wrong. Where would such a putative rule come from, and how do we know it’s valid?

Linguists often try to highlight the problems with such assumptions by pointing out, for example, that French requires an object pronoun after the copula (in French you say c’est moi [it’s me], not c’est je [it’s I]) or that English speakers, including renowned writers, have long used object forms in this position. That is, there is no reason to suppose that this rule has to exist, because there are clear counterexamples. But then, as I said before, some linguists leave the realm of strict logic and argue that if everyone says it’s me, then it must be correct.

Some people then counter by calling this argument fallacious, and strictly speaking, it is. Mededitor has called this the Jane Austen fallacy (if Jane Austen or some other notable past writer has done it, then it must be okay), and one commenter named Kevin S. has made similar arguments in the comments on Kory Stamper’s blog, Harmless Drudgery.

There, Kevin S. attacked Ms. Stamper for noting that using lay in place of lie dates at least to the days of Chaucer, that it is very common, and that it “hasn’t managed to destroy civilization yet.” These are all objective facts, yet Kevin S. must have assumed that Ms. Stamper was arguing that if it’s old and common, it must be correct. In fact, she acknowledged that it is nonstandard and didn’t try to argue that it wasn’t or shouldn’t be. But Kevin S. pointed out a few fallacies in the argument that he assumed that Ms. Stamper was making: an appeal to authority (if Chaucer did it, it must be okay), the “OED fallacy” (if it has been used that way in the past, it must be correct), and the naturalistic fallacy, which is deriving an ought from an is (lay for lie is common; therefore it ought to be acceptable).

And as much as I hate to say it, technically, Kevin S. is right. Even though he was responding to an argument that hadn’t been made, linguists and lexicographers do frequently make such arguments, and they are in fact fallacies. (I’m sure I’ve made such arguments myself.) Technically, any argument that something should be considered correct or incorrect isn’t a logical argument but a persuasive one. Again, this goes back to the basic difference between descriptivism and prescriptivism. We can make statements about the way English appears to work, but making statements about the way English should work or the way we think people should feel about it is another matter.

It’s not really clear what Kevin S.’s point was, though, because he seemed to be most bothered by Ms. Stamper’s supposed support of some sort of flabby linguistic relativism. But his own implied argument collapses in a heap of fallacies itself. Just as we can’t necessarily call something correct just because it occurred in history or because it’s widespread, we can’t necessarily call something incorrect just because someone invented a rule saying so.

I could invent a rule saying that you shouldn’t ever use the word sofa because we already have the perfectly good word couch, but you would probably roll your eyes and say that’s stupid because there’s nothing wrong with the word sofa. Yet we give heed to a whole bunch of similarly arbitrary rules invented two or three hundred years ago. Why? Technically, they’re no more valid or logically sound than my rule.

So if there really is such a thing as correctness in language, and if any argument about what should be considered correct or incorrect is technically a logical fallacy, then how can we arrive at any sort of understanding of, let alone agreement on, what’s correct?

This fundamental inability to argue logically about language is a serious problem, and it’s one that nobody has managed to solve or, in my opinion, ever will completely solve. This is why the war of the scriptivists rages on with no end in sight. We see the logical fallacies in our opponents’ arguments and the flawed assumptions underlying them, but we don’t acknowledge—or sometimes even see—the problems with our own. Even if we did, what could we do about them?

My best attempt at an answer is that both sides simply have to learn from each other. Language is a democracy, true, but, just like the American government, it is not a pure democracy. Some people—including editors, writers, English teachers, and usage commentators—have a disproportionate amount of influence. Their opinions carry more weight because people care what they think.

This may be inherently elitist, but it is not necessarily a bad thing. We naturally trust the opinions of those who know the most about a subject. If your car won’t start, you take it to a mechanic. If your tooth hurts, you go to the dentist. If your writing has problems, you ask an editor.

Granted, using lay for lie is not bad in the same sense that a dead starter motor or an abscessed tooth is bad: it’s a problem only in the sense that some judge it to be wrong. Using lay for lie is perfectly comprehensible, and it doesn’t violate some basic rule of English grammar such as word order. Furthermore, it won’t destroy the language. Just as we have pairs like lay and lie or sit and set, we used to have two words for hang, but nobody claims that we’ve lost a valuable distinction here by having one word for both transitive and intransitive uses.

Prescriptivists want you to know that people will judge you for your words (and—let’s be honest—usually they’re the ones doing the judging), and descriptivists want you to soften those judgements or even negate them by injecting them with a healthy dose of facts. That is, there are two potential fixes for the problem of using words or constructions that will cause people to judge you: stop using that word or construction, or get people to stop judging you and others for that use.

In reality, we all use both approaches, and, more importantly, we need both approaches. Even most dyed-in-the-wool prescriptivists will tell you that the rule banning split infinitives is bogus, and even most liberal descriptivists will acknowledge that if you want to be taken seriously, you need to use Standard English and avoid major errors. Problems occur when you take a completely one-sided approach, insisting either that something is an error even if almost everyone does it or that something isn’t an error even though almost everyone rejects it. In other words, good usage advice has to consider not only the facts of usage but speakers’ opinions about usage.

For instance, you can recognize that irregardless is a word, and you can even argue that there’s nothing technically wrong with it because nobody cares that the verbs bone and debone mean the same thing, but it would be irresponsible not to mention that the word is widely considered an error in educated speech and writing. Remember that words and constructions are not inherently correct or incorrect and that mere use does not necessarily make something correct; correctness is a judgement made by speakers of the language. This means that, paradoxically, something can be in widespread use even among educated speakers and can still be considered an error.

This also means that on some disputed items, there may never be anything approaching consensus. While the facts of usage may be indisputable, opinions may still be divided. Thus it’s not always easy or even possible to label something as simply correct or incorrect. Even if language is a democracy, there is no simple majority rule, no up and down vote to determine whether something is correct. Something may be only marginally acceptable or correct only in certain situations or according to certain people.

But as in a democracy, it is important for people to be informed before metaphorically casting their vote. Bryan Garner argues in his Modern American Usage that what people want in language advice is authority, and he’s certainly willing to give it to you. But I think what people really need is information. For example, you can state authoritatively that regardless of past or present usage, singular they is a grammatical error and always will be, but this is really an argument, not a statement of fact. And like all arguments, it should be supported with evidence. An argument based solely or primarily on one author’s opinion—or even on many people’s opinions—will always be a weaker argument than one that considers both facts and opinion.

This doesn’t mean that you have to accept every usage that’s supported by evidence, nor does it mean that all evidence is created equal. We’re all human, we all still have opinions, and sometimes those opinions are in defiance of facts. For example, between you and I may be common even in educated speech, but I will probably never accept it, let alone like it. But I should not pretend that my opinion is fact, that my arguments are logically foolproof, or that I have any special authority to declare it wrong. I think the linguist Thomas Pyles said it best:

Too many of us . . . would seem to believe in an ideal English language, God-given instead of shaped and molded by man, somewhere off in a sort of linguistic stratosphere—a language which nobody actually speaks or writes but toward whose ineffable standards all should aspire. Some of us, however, have in our worst moments suspected that writers of handbooks of so-called “standard English usage” really know no more about what the English language ought to be than those who use it effectively and sometimes beautifully. In truth, I long ago arrived at such a conclusion: frankly, I do not believe that anyone knows what the language ought to be. What most of the authors of handbooks do know is what they want English to be, which does not interest me in the least except as an indication of the love of some professors for absolute and final authority.1

In usage, as in so many other things, you have to learn to live with uncertainty.

Notes

1. “Linguistics and Pedagogy: The Need for Conciliation,” in Selected Essays on English Usage, ed. John Algeo (Gainesville: University Presses of Florida, 1979), 169–70.

Do Usage Debates Make You Nauseous?

Several days ago, the Twitter account for the Chicago Manual of Style tweeted, “If you’re feeling sick, use nauseated rather than nauseous. Despite common usage, whatever is nauseous induces nausea.” The relevant entry in Chicago reads,

Whatever is nauseous induces a feeling of nausea—it makes us feel sick to our stomachs. To feel sick is to be nauseated. The use of nauseous to mean nauseated may be too common to be called error anymore, but strictly speaking it is poor usage. Because of the ambiguity of nauseous, the wisest course may be to stick to the participial adjectives nauseated and nauseating.

Though it seems like a straightforward usage tip, it’s based on some dubious motives and one rather strange assumption about language. It’s true that nauseous once meant causing nausea and that it has more recently acquired the sense of having nausea, but causing nausea wasn’t even the word’s original meaning in English. The word was first recorded in the early 17th century in the sense of inclined to nausea or squeamish. So you were nauseous not if you felt sick at the moment but if you had a sensitive stomach. This sense became obsolete in the late 17th century, supplanted by the causing nausea sense. The latter sense is the one that purists cling to, but it too is going obsolete.

I searched for nauseous in the Corpus of Contemporary American English and looked at the first 100 hits. Of those 100 hits, only one was used in the sense of causing nausea: “the nauseous tints and tinges of corruption.” The rest were all clearly used in the sense of having nausea—“I was nauseous” and “it might make you feel a little nauseous” and so on. Context is key: when nauseous is used with people, it means that they feel sick, but when it’s used with things, it means they’re sickening. And anyway, if nauseous is ambiguous, then every word with multiple meanings is ambiguous, including the word word, which has eleven main definitions as a noun in Merriam-Webster’s Collegiate. So where’s this ambiguity that Chicago warns of?
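As for how that count was made: COCA doesn’t label word senses, so the sorting was done by hand. Reproducing the tally is then just a matter of exporting the concordance lines, coding each hit yourself, and counting. Here is a minimal sketch of that last step; the file name and column names are hypothetical, and the sense labels are whatever you assign when you read the hits.

```python
# Minimal sketch of the tallying step. Assumes the first 100 COCA hits
# for "nauseous" have been exported to a CSV and that a "sense" column
# ("feeling" or "causing") has been added by hand; the file name and
# column names are hypothetical.
import csv
from collections import Counter

counts = Counter()
with open("nauseous_coca_sample.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["sense"].strip().lower()] += 1

total = sum(counts.values())
for sense, n in counts.most_common():
    print(f"{sense}: {n}/{total} ({100 * n / total:.0f}%)")
# In the sample described above, "causing" accounts for 1 hit and
# "feeling" for the other 99.
```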

The answer is that there really isn’t any. In this case it’s nothing more than a red herring. Perhaps it’s possible to concoct a sentence that, lacking sufficient context, is truly ambiguous. But the corpus search shows that it just isn’t a problem, and thus fear of ambiguity can’t be the real reason for avoiding nauseous. Warnings of ambiguity are often used not to call attention to a real problem but to signal that a word has at least two senses or uses and that the author does not like one of them. Bryan Garner (the author of the above entry from Chicago), in his Modern American Usage, frequently warns of such “skunked” words and usually recommends avoiding them altogether. This may seem like sensible advice, but it seems to me to be motivated by a sense of jealousy—if the word can’t mean what the advice-giver wants it to mean, then no one can use it.

But the truly strange assumption is that words have meaning that is somehow independent of their usage. If 99 percent of the population uses nauseous in the sense of having nausea, then who’s to say that they’re wrong? Who has the authority to declare this sense “poor usage”? And yet Garner says, rather unequivocally, “Whatever is nauseous induces a feeling of nausea.” How does he know this is what nauseous means? It’s not as if there is some platonic form of words, some objective true meaning from which a word must never stray. After all, language changes, and an earlier form is not necessarily better or truer than a newer one. As Merriam-Webster editor Kory Stamper recently pointed out on Twitter, stew once meant “whorehouse”, and this sense dates to the 1300s. The food sense arose four hundred years later, in the 1700s. Is this poor usage because it’s a relative upstart supplanting an older established sense? Of course not.

People stopped using nauseous to mean “inclined to nausea” several hundred years ago, and so it no longer means that. Similarly, most people no longer use nauseous to mean “causing nausea”, and so that meaning is waning. In another hundred years, it may be gone altogether. For now, it hangs on, but this doesn’t mean that the newer and overwhelmingly more common sense is poor usage. The new sense is only poor usage inasmuch as someone says it is. In other words, it all comes down to someone’s opinion. As I’ve said before, pronouncements on usage that are based simply on someone’s opinion are ultimately unreliable, and any standard that doesn’t take into account near-universal usage by educated speakers in edited writing is doomed to irrelevance.

So go ahead and use nauseous. The “having nausea” sense is now thoroughly established, and it seems silly to avoid a perfectly good word just because a few peevers dislike it. Even if you stick to the more traditional “causing nausea” sense, you’re unlikely to confuse anyone, because context will make the meaning clear. Just be careful about people who make unsupported claims about language.
