Arrant Pedantry

Celtic and the History of the English Language

A little while ago a link to this list of 23 maps and charts on language went around on Twitter. It’s full of interesting stuff on linguistic diversity and the genetic relationships among languages, but there was one chart that bothered me: this one on the history of the English language by Sabio Lantz.

The Origins of English

The first and largest problem is that the timeline makes it look as though English began with the Celts and then received later contributions from the Romans, Anglo-Saxons, Vikings, and so on. While this is a decent account of the migrations and conquests that have occurred in the last two thousand years, it’s not an accurate account of the history of the English language. (To be fair, the bar on the bottom gets it right, but it leaves out all the contributions from other languages.)

English began with the Anglo-Saxons. They were a group of Germanic tribes originating in the area of the Netherlands, northern Germany, and Denmark, and they spoke dialects of what might be called common West Germanic. There was no distinct English language at the time, just a group of dialects that would later evolve into English, Dutch, German, Low German, and Frisian. (Frisian, for the record, is English’s closest relative on the continent, and it’s close enough that you can buy a cow in Friesland by speaking Old English.)

The inhabitants of Great Britain when the Anglo-Saxons arrived were mostly romanized Celts who spoke Latin and a Celtic language that was the ancestor of modern-day Welsh and Cornish. (In what is now Scotland, the inhabitants spoke a different Celtic language, Gaelic, and perhaps also Pictish, but not much is known about Pictish.) But while there were Latin- and Celtic-speaking people in Great Britain before the Anglo-Saxons arrived, those languages probably had very little influence on Old English and should not be considered ancestors of English. English began as a distinct language when the Anglo-Saxons split off from their Germanic cousins and left mainland Europe beginning around 450 AD.

For years it was assumed that the Anglo-Saxons wiped out most of the Celts and forced the survivors to the edges of the island—Cornwall, Wales, and Scotland. But archaeological and genetic evidence has shown that this isn’t exactly the case. The Anglo-Saxons more likely conquered the Celts and intermarried with them. Old English became the language of government and education, but Celtic languages may have survived in Anglo-Saxon–occupied areas for quite some time.

From Old to Middle English

Old English continues until about 1066, when the Normans invaded and conquered England. At that point, the language of government became Old French—or at least the version of it spoken by the Normans—or Medieval Latin. Though peasants still spoke English, nobody was writing much in the language anymore. And when English made a comeback in the 1300s, it had changed quite radically. The complex system of declensions and other inflections from Old English was gone, and the language had borrowed considerably from French and Latin. Though there isn’t a firm line, by the end of the eleventh century Old English is considered to have ended and Middle English to have begun.

The differences between Old English and Middle English are quite stark. Just compare the Lord’s Prayer in each language:

Old English:

Fæder ure þu þe eart on heofonum;
Si þin nama gehalgod
to becume þin rice
gewurþe ðin willa
on eorðan swa swa on heofonum.
urne gedæghwamlican hlaf syle us todæg
and forgyf us ure gyltas
swa swa we forgyfað urum gyltendum
and ne gelæd þu us on costnunge
ac alys us of yfele soþlice
(source)

(The character that looks like a p with an ascender is called a thorn, and it is pronounced like the modern th. It could be either voiceless or voiced depending on its position in a word. The character that looks like an uncial d with a stroke through it is also pronounced just like a thorn, and the two symbols were used interchangeably. Don’t ask me why.)

Middle English:

Oure fadir that art in heuenes,
halewid be thi name;
thi kyngdoom come to;
be thi wille don,
in erthe as in heuene.
Yyue to vs this dai oure breed ouer othir substaunce,
and foryyue to vs oure dettis,
as we foryyuen to oure dettouris;
and lede vs not in to temptacioun,
but delyuere vs fro yuel. Amen.
(source)

(Note that u and v could both represent either /u/ or /v/. V was used at the beginnings of words and u in the middle. Thus vs is “us” and yuel is “evil”.)

While you can probably muddle your way through some of the Lord’s Prayer in Old English, there are a lot of words that are unfamiliar, such as gewurþe and soþlice. And this is probably one of the easiest short passages to read in Old English. Not only is it a familiar text, but it dates to the late Old English period. Older Old English texts can be much more difficult. The Middle English, on the other hand, is quite readable if you know a little bit about Middle English spelling conventions.

And even where the Old English is readable, it shows grammatical inflections that are stripped away in Middle English. For example, ure, urne, and urum are all forms of “our” based on their grammatical case. In Middle English, though, they’re all oure, much like Modern English. As I said above, the change from Old English to Middle English was quite radical, and it was also quite sudden. My professor of Old English and Middle English said that there are cases where town chronicles essentially change from Old to Middle English in a generation.

But here’s where things get a little murky. Some have argued that the vernacular language didn’t really change that quickly—it was only the codified written form that did. That is, people were taught to write a sort of standard Old English that didn’t match what they spoke, just as people continued to write Latin even as they were speaking the evolving Romance dialects such as Old French and Old Spanish.

So perhaps the complex inflectional system of Old English didn’t disappear suddenly when the Normans invaded; perhaps it was disappearing gradually throughout the Old English period, but those few who were literate learned the old forms and retained them in writing. Then, when the Normans invaded and people mostly stopped writing in English, they also stopped learning how to write standard Old English. When they started writing English again a couple of centuries later, they simply wrote the language as it was spoken, free of the grammatical forms that had been artificially retained in Old English for so long. This also explains why there was so much dialectal variation in Middle English; because there was no standard form, people wrote their own local variety. It wasn’t until the end of the Middle English period that a new standard started to coalesce and Early Modern English was born.

Supposed Celtic Syntax in English

And with that history established, I can finally get to my second problem with that graphic above: the supposed Celtic remnants in English. English may be a Germanic language, but it differs from its Germanic cousins in several notable ways. In addition to the glut of French, Latin, Greek, and other borrowings that occurred in the Middle and Early Modern English periods, English has some striking syntactic differences from other Germanic languages.

English has what is known as the continuous or progressive aspect, which is formed with a form of be and a present participle. So we usually say I’m going to the store rather than just I go to the store. It’s rather unusual to use a periphrastic—that is, wordy—construction as the default when there’s a shorter option available. Many languages do not have progressive forms at all, and if they do, they’re used to specifically emphasize that an action is happening right now or is ongoing. English, on the other hand, uses it as the default form for many types of verbs. But in German, for example, you simply say Ich gehe in den Laden (“I go to the store”), not Ich bin gehende in den Laden (“I am going to the store”).

English also makes extensive use of a feature known as do support, wherein we insert do into certain kinds of constructions, mostly questions and negatives. So while German would have Magst du Eis? (“Like you ice cream?”), English inserts a dummy do: Do you like ice cream? These constructions are rare cross-linguistically and are very un-Germanic.

And some people have come up with a very interesting explanation for this unusual syntax: it comes from a Celtic substrate. That is, they believe that the Celtic population of Britain adopted Old English from their Anglo-Saxon conquerors but remained bilingual for some time. As they learned Old English, they carried over some of their native syntax. The Celtic languages have some rather unusual syntax themselves, highly favoring periphrastic constructions over inflected ones. Some of these constructions are roughly analogous to the English use of do support and progressive forms. For instance, in Welsh you might say Dwi yn mynd i’r siop (“I am in going to the shop”). (Disclaimer: I took all of one semester in Welsh, so I’m relying on what little I remember plus some help from various websites on Welsh grammar and a smattering of Google Translate.)

While this isn’t exactly like the English equivalent, it looks close. Welsh doesn’t have present participial forms but instead uses something called a verbal noun, which is a sort of cross between an infinitive and gerund. Welsh also uses the particle yn (“in”) to connect the verbal noun to the rest of the sentence, which is actually quite similar to constructions from late Middle and Early Modern English such as He was a-going to the store, where a- is just a worn-down version of the preposition on.

But Welsh uses this construction in all kinds of places where English doesn’t. To say I speak Welsh, for example, you say Dw’i’n siarad Cymraeg, which literally translated means I am in speaking Welsh. In English the progressive stresses that you are doing something right now, while the simple present is used for things that are done habitually or that are generally true. In Welsh, though, it’s unmarked—it’s simply a wordier way of stating something without any special progressive meaning. Despite its superficial similarities to the English progressive, it’s quite far from English in both use and meaning. Additionally, the English construction may have much more mundane origins in the conflation of gerunds and present participles in late Middle English, but that’s a discussion for another time.

Welsh’s use of do support—or, I should say, gwneud support—even less closely parallels that of English. In English, do is used in interrogatives (Do you like ice cream?), negatives (I don’t like ice cream), and emphatic statements (I do like ice cream), and it also appears as a stand-in for whole verb phrases (He thinks I don’t like ice cream, but I do). In Welsh, however, gwneud is not obligatory, and it can be used in simple affirmative statements without any emphasis.

Nor is it always used where it would be in English. Many questions and negatives are formed with a form of the be verb, bod, rather than gwneud. For example, Do you speak Welsh? is Wyt ti’n siarad Cymraeg? (“Are you in speaking Welsh?”), and I don’t understand is Dw i ddim yn deall (“I am not in understanding”). (This is probably simply because Welsh uses the pseudo-progressive in the affirmative form, so it uses the same construction in interrogatives and negatives, much like how English would turn “He is going to the store” into “Is he going to the store?” or “He isn’t going to the store.” Do is only used when there isn’t another auxiliary verb that could be used.)

But there’s perhaps an even bigger problem with the theory that English borrowed these constructions from Celtic: time. Both the progressive and do support start to appear in late Middle English (the fourteenth and fifteenth centuries), but they don’t really take off until the sixteenth century and beyond, over a thousand years after the Anglo-Saxons began colonizing Great Britain. So if the Celtic inhabitants of Britain adopted English but carried over some Celtic syntax, and if the reason why that Celtic syntax never appeared in Old English is that the written language was a standardized form that didn’t match the vernacular, and if the reason why Middle English looks so different from Old English is that people were now writing the way they spoke, then why don’t we see these Celticisms until the end of the Middle English period, and then only rarely?

Proponents of the Celtic substrate theory argue that these features are so unusual that they could only have been borrowed into English from Celtic languages. They ask why English is the only Germanic language to develop them, but it’s easy to flip this sort of question around. Why did English wait for more than a thousand years to borrow these constructions? Why didn’t English borrow the verb-subject-object sentence order from the Celtic languages? Why didn’t it borrow the after-perfect, which uses after plus a gerund instead of have plus a past participle (She is after coming rather than She has come), or any number of other Celtic constructions? And maybe most importantly, why are there almost no lexical borrowings from Celtic languages into English? Words are the first things to be borrowed, while more structural grammatical features like syntax and morphology are among the last. And just to beat a dead horse, just because something developed in English doesn’t mean you should expect to see the same thing develop in related languages.

The best thing that the Celtic substrate theory has going for it, I think, is that it’s appealing. It neatly explains something that makes English unique and celebrates the Celtic heritage of the island. But there’s a danger whenever a theory is too attractive on an emotional level. You tend to overlook its weaknesses and play up its strengths, as John McWhorter does when he breathlessly explains the theory in Our Magnificent Bastard Tongue. He stresses again and again how unique English is, how odd these constructions are, and how therefore they must have come from the Celtic languages.

I’m not a historical linguist and certainly not an expert in Celtic languages, but alarm bells started going off in my head when I read McWhorter’s book. There were just too many things that didn’t add up, too many pieces that didn’t quite fit. I wanted to believe it because it sounded so cool, but wanting to believe something doesn’t make it so. Of course, none of this is to say that it isn’t so. Maybe it’s all true but there just isn’t enough evidence to prove it yet. Maybe I’m being overly skeptical for nothing.

But in linguistics, as in other sciences, a good dose of skepticism is healthy. A crazy theory requires some crazy-good proof, and right now, all I see is a theory with enough holes in it to sink a fleet of Viking longboats.

Book Review: The Sense of Style

Full disclosure: I received an advance review copy of this book from the publisher, Viking.

I was intrigued when I first heard that Steven Pinker, the linguist and cognitive scientist, was writing a book on style. I’ve really enjoyed some of his other books, such as The Stuff of Thought, but wasn’t this the guy who had dedicated an entire chapter of The Language Instinct to bashing prescriptivists, calling them a bunch of “kibbitzers and nudniks” who peddle “bits of folklore that originated for screwball reasons several hundred years ago”? But even though it can be satisfying to bash nonsensical grammar rules, I’ve also long felt that linguists could offer some valuable insight into the field of writing. I was hopeful that Pinker would have some interesting things to say about writing, and he didn’t disappoint me.

I should be clear, though, that this is not your ordinary book on writing advice. It isn’t a quick reference book full of rules and examples of what to do and what not to do (for which I recommend Joseph Williams’s excellent Style). It’s something deeper and more substantial than that—it’s a thorough examination of what makes good writing good and why writing well is so hard.

Pinker starts by reverse-engineering some of his favorite passages of prose, taking them apart piece by piece to see what makes them tick. Though it’s an interesting exercise, it gets a little tedious at times as he picks passages apart. However, his point is valuable: good writing can only come from good reading, which means not only reading a lot but engaging with what you read.

He then explores classic style, which he calls “an antidote for academese, bureaucratese, corporatese, legalese, officialese, and other kinds of stuffy prose.” Classic style starts with the assumption that the writer has seen something that they want to show to the reader, so the writer engages in a conversation with the reader to help direct their gaze. It’s not suitable for every kind of writing—for example, a user manual needs just a straightforward list of instructions, not a dialogue—but it works well for academic writing and other kinds of writing in which an author explains a new idea to the reader.

Then Pinker tackles perhaps the most difficult challenge in writing—overcoming the curse of knowledge. The cause of much bad writing, he says, is that the author is so close to the subject that they don’t know how to explain it to someone who doesn’t already know what the author knows. They forget how they came by their knowledge and thus unthinkingly skip key pieces of explanation or use jargon that is obscure or opaque to outsiders. And to make things worse, even being aware of the curse of knowledge isn’t enough to ensure that you’ll write more clearly; that is, you can’t simply tell someone, “Keep the reader in mind!” and expect them to do so. The best solution, Pinker says, is to have test readers or editors who can tell you where something doesn’t make sense and needs to be revised.

The next chapters provide a crash course on syntax and a guide to creating greater textual coherence, and though they occasionally get bogged down in technical details, they’re full of good advice. For example, Pinker uses syntax tree diagrams to illustrate both the cause of and solution to problems like misplaced modifiers. Tree diagrams are much more intuitive than other diagramming methods like Reed-Kellogg, so you don’t need to be an expert in linguistics to see the differences between two example sentences. And though the guide to syntax is helpful, the chapter on coherence is even better. Pinker explains why seemingly well-written text is sometimes so hard to understand: because even though the sentences appear to hang together just fine, the ideas don’t. The solution is to keep consistent thematic strings throughout a piece, tying ideas together and making the connections between them clear.

The last and by far the longest chapter—it occupies over a third of the book—is essentially a miniature grammar and usage guide prefaced by a primer on the supposed clash between prescriptivism and descriptivism. It’s simultaneously the most interesting and most disappointing chapter in the book. Though it starts rather admirably by explaining the linguistics behind particular usage issues (something I try to do on this blog), it ends with Pinker indulging in some peevery himself. Ironically, some of the usage rules he endorses are no more valid than the ones he debunks, and he gives little justification for his preference, often simply stating that one form is classier. At least it’s clear, though, that these are his personal preferences and not universal laws. The bulk of the chapter, though, is a lucid guide to some common grammar and usage issues. (And yes, he does get in a little prescriptivist bashing.)

Despite some occasional missteps, The Sense of Style is full of valuable advice and is a welcome addition to the genre of writing guides.

Interview at Grammarist

Forgive me if you’ve already seen this, but I was interviewed a couple of weeks ago at Grammarist.com. Find out what got me into language blogging, what my greatest accomplishment in the world of language is, and why you should care more about language. Check it out!

Sneak Peek: “There Are a Number of Agreement Problems”

Unless you’re a subscriber to the Copyediting newsletter, you don’t get the chance to read my “Grammar on the Edge” column. But now you can get a sneak peek of my most recent entry, “There Are a Number of Agreement Problems,” on Copyediting’s website.

You’ll still have to subscribe to get the whole thing, but maybe this will whet your appetite. (And a year’s subscription is only $79.) You’ll also get lots of great content from Erin Brenner, Mark Farrell, Katharine O’Moore-Klopf, and others. Check it out!

New Post on Visual Thesaurus: Less Usage Problems

I have a new post on Visual Thesaurus, and this one’s open to non-subscribers:

The distinction between less and fewer is one of the most popular rules in the peevers’ arsenal. It’s a staple of lists of grammar rules that everyone supposedly gets wrong, and sticklers have pressured stores into changing their signs from “10 items or less” to “10 items or fewer.” Students have it drilled into their heads that fewer is for things you can count while less is for things you can’t. But there’s a problem: the rule as it’s commonly taught is wrong, and it’s dulling our sense of what’s actually right.

Go here to read the rest.

Is the Oxford Comma Ungrammatical?

Few language issues inspire as much fervent debate as the question of whether you need a comma before the last item in a series, also known as the Oxford, Harvard, or serial comma. This is the comma that you sometimes see before and in lists, such as “I need you to go to the store and get bread, milk, and butter.” The Chicago Manual of Style, which is used by many book publishers and some academic journals, requires the comma. The AP Stylebook, which is used by newspapers and many magazines, omits the comma unless it’s necessary to avoid ambiguity.

It seems like such a trifling thing, yet it inspires impassioned debate among editors and writers on both sides of the issue. Last year the satirical news site the Onion joked about violence between the AP and Chicago gangs and wrote that “an innocent 35-year-old passerby who found himself caught up in a long-winded dispute over use of the serial, or Oxford, comma had died of a self-inflicted gunshot wound.”

But more recently, Walt Hickey at the FiveThirtyEight blog decided to approach the argument in a more scientific fashion, by polling readers on their preference. What he found was that readers are fairly split: 57 percent prefer the serial comma, while 43 percent dislike it. So while there’s a preference for the serial comma, at least among FiveThirtyEight readers, it’s not an overwhelming one.

Roy Peter Clark at the Poynter Institute followed up by arguing that journalists should adopt the serial comma, after which Sam Kirkland posted a poll on Poynter asking if the Associated Press should make the switch. Surprisingly, a whopping 71 percent of respondents said yes.

But none of this was good enough for Poynter blogger Andrew Beaujon. He took to Twitter to lay out his arguments against the serial comma. The first argument—that the serial comma is simply ungrammatical—is actually the easiest to refute. He says that since you can’t write “My wife, and I drove to work” (note the comma before the “and”), you can’t write “Bob, my wife, and I drove to work.” But he’s simply presupposing that there’s a rule that you can never have a comma before a conjunction in a list, which is obviously not true.

He’s also presupposing that lists of two items behave exactly like lists of three or more items, but this is also untrue. I can’t write “Bob, my wife drove to work” (at least not with the intended meaning), but I can certainly write “Bob, my wife and I drove to work.” By Beaujon’s logic, the fact that you add a third item doesn’t magically obviate the rule that you can’t use a comma to join two things together. (Of course, this is done all the time in headline style, but it’s not allowed in normal prose.) It’s clear that the structure of lists changes a bit when you have three items or more, but it is not clear that there is any grammatical rule forbidding commas.

His next point, that “the sentences people employ to show the need for a serial comma are usually ridiculous”, is weak at best. Yes, it’s true that most contrived example sentences are a little ridiculous, but that’s just a problem with contrived example sentences, not with the argument for the serial comma. And consider this example: “The highlights of his global tour include encounters with Nelson Mandela, an 800-year-old demigod and a dildo collector.” It may be a ridiculous sentence, but it’s real.

Beaujon’s final argument is that the serial comma arises from an urge to overpunctuate because we believe readers are too stupid to figure things out on their own. He says that “prescriptivism is not for [the readers’] benefit; its purpose is to make those of us in the publishing game to feel important and necessary”, but how is his own prescriptivism any different? He’s instructing writers and editors in comma rules and telling them that following his rule means they’re good writers. In other words, my writing is so clear that I don’t need a crutch like the serial comma; if you disagree, it’s only because you don’t trust readers and want to make yourself feel important.

But consider this: most people in both the FiveThirtyEight and Poynter polls prefer the serial comma. That means most readers prefer it. If they find it helpful, who are we to argue that it’s some sort of crutch of bad writers or source of job security for copy editors? And AP style does in fact use the serial comma to prevent ambiguity (though apparently not in the above example regarding the late Mr. Mandela), so what’s the harm in using it all the time?

Because the fact is that I often stumble over sentences that lack the serial comma. Even though I’m well aware that AP and other styles omit the comma before the “and”, I still tend to read “bacon and eggs” in “He made muffins, bacon and eggs” as a single item, not the final two items in a list. The sudden end of the list after “eggs” throws me because I was expecting something to follow it.

I suppose you could conclude that I’m an idiot who can’t work out writing on his own, but you could just as easily (and much more charitably) conclude that the serial comma really is helpful because it signals something about the structure of the sentence. In speech, we can rely on a speaker’s prosody—the rise and fall of pitch—to tell us where the syntactic units begin and end. In writing, we have to rely on punctuation marks to serve as signposts.

You can claim all you want that your writing is so clear that it can do without these signposts, but if you leave out too many, your readers may feel lost, wandering through meandering sentences without knowing where they’re going. Did your reader immediately understand what you wrote, or did they stumble, backtrack, and read it again before they got your message?

This isn’t to say that there’s only one right way to punctuate, namely with the serial comma. As we saw from the polls above, opinion on its use is still fairly divided. But rather than accusing your opponents of distrusting readers or being self-aggrandizing, you could take them at their word. Maybe there really are legitimate reasons to prefer the serial comma, just as there are legitimate reasons to prefer omitting it.

I find the arguments in favor of including the serial comma stronger than the arguments in favor of leaving it out, but I don’t pretend that my preference is an ironclad grammatical law or proof of my superiority. It’s just that—a preference. You are free to choose for yourself.

15% Off All T-Shirts

First, I apologize for not blogging in so long. It’s been a crazy summer, complete with a new baby (yay!), a new job (yay!), and moving to a new house (boo!). I’ve got a few posts in the works and hope to have something done soon.

Second, it’s time for another sale! Now through September 2, get 15 percent off all T-shirts in the Arrant Pedantry Store. Just use the code SHIRTS15 at checkout.

Do Usage Debates Make You Nauseous?

Several days ago, the Twitter account for the Chicago Manual of Style tweeted, “If you’re feeling sick, use nauseated rather than nauseous. Despite common usage, whatever is nauseous induces nausea.” The relevant entry in Chicago reads,

Whatever is nauseous induces a feeling of nausea—it makes us feel sick to our stomachs. To feel sick is to be nauseated. The use of nauseous to mean nauseated may be too common to be called error anymore, but strictly speaking it is poor usage. Because of the ambiguity of nauseous, the wisest course may be to stick to the participial adjectives nauseated and nauseating.

Though it seems like a straightforward usage tip, it’s based on some dubious motives and one rather strange assumption about language. It’s true that nauseous once meant causing nausea and that it has more recently acquired the sense of having nausea, but causing nausea wasn’t even the word’s original meaning in English. The word was first recorded in the early 17th century in the sense of inclined to nausea or squeamish. So you were nauseous not if you felt sick at the moment but if you had a sensitive stomach. This sense became obsolete in the late 17th century, supplanted by the causing nausea sense. The latter sense is the one that purists cling to, but it too is going obsolete.

I searched for nauseous in the Corpus of Contemporary American English and looked at the first 100 hits. Of those 100 hits, only one was used in the sense of causing nausea: “the nauseous tints and tinges of corruption.” The rest were all clearly used in the sense of having nausea—“I was nauseous” and “it might make you feel a little nauseous” and so on. Context is key: when nauseous is used with people, it means that they feel sick, but when it’s used with things, it means they’re sickening. And anyway, if nauseous is ambiguous, then every word with multiple meanings is ambiguous, including the word word, which has eleven main definitions as a noun in Merriam-Webster’s Collegiate. So where’s this ambiguity that Chicago warns of?
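For the curious, the sort of tally I did by hand can be sketched in a few lines of Python. This is purely illustrative (the real work means reading each corpus hit in context and judging the sense yourself), and the personal-pronoun heuristic below is my own crude stand-in for that judgment, not anything built into a corpus tool:

```python
import re

def classify(sentence: str) -> str:
    """Crudely guess the sense of "nauseous" in a sentence.

    If "nauseous" is predicated of a person (a personal pronoun appears
    earlier in the same clause), guess "having nausea"; otherwise guess
    "causing nausea". Real corpus work requires human judgment.
    """
    person_pattern = re.compile(
        r"\b(I|you|he|she|we|they|me|him|her|us|them)\b[^.]*\bnauseous\b"
    )
    return "having nausea" if person_pattern.search(sentence) else "causing nausea"

# A few sample hits in the style of the corpus examples quoted above.
hits = [
    "I was nauseous all morning.",
    "It might make you feel a little nauseous.",
    "the nauseous tints and tinges of corruption",
]

tally = {"having nausea": 0, "causing nausea": 0}
for hit in hits:
    tally[classify(hit)] += 1

print(tally)  # → {'having nausea': 2, 'causing nausea': 1}
```

Even on this toy sample, the pattern from the real corpus shows up: the people-focused sense dominates, and the "causing nausea" sense survives mainly in literary phrases without a personal subject.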

The answer is that there really isn’t any. In this case it’s nothing more than a red herring. Perhaps it’s possible to concoct a sentence that, lacking sufficient context, is truly ambiguous. But the corpus search shows that it just isn’t a problem, and thus fear of ambiguity can’t be the real reason for avoiding nauseous. Warnings of ambiguity are often used not to call attention to a real problem but to signal that a word has at least two senses or uses and that the author does not like one of them. Bryan Garner (the author of the above entry from Chicago), in his Modern American Usage, frequently warns of such “skunked” words and usually recommends avoiding them altogether. This may seem like sensible advice, but it seems to me to be motivated by a sense of jealousy—if the word can’t mean what the advice-giver wants it to mean, then no one can use it.

But the truly strange assumption is that words have meaning that is somehow independent of their usage. If 99 percent of the population uses nauseous in the sense of having nausea, then who’s to say that they’re wrong? Who has the authority to declare this sense “poor usage”? And yet Garner says, rather unequivocally, “Whatever is nauseous induces a feeling of nausea.” How does he know this is what nauseous means? It’s not as if there is some platonic form of words, some objective true meaning from which a word must never stray. After all, language changes, and an earlier form is not necessarily better or truer than a newer one. As Merriam-Webster editor Kory Stamper recently pointed out on Twitter, stew once meant “whorehouse”, and this sense dates to the 1300s. The food sense arose four hundred years later, in the 1700s. Is this poor usage because it’s a relative upstart supplanting an older established sense? Of course not.

People stopped using nauseous to mean “inclined to nausea” several hundred years ago, and so it no longer means that. Similarly, most people no longer use nauseous to mean “causing nausea”, and so that meaning is waning. In another hundred years, it may be gone altogether. For now, it hangs on, but this doesn’t mean that the newer and overwhelmingly more common sense is poor usage. The new sense is only poor usage inasmuch as someone says it is. In other words, it all comes down to someone’s opinion. As I’ve said before, pronouncements on usage that are based simply on someone’s opinion are ultimately unreliable, and any standard that doesn’t take into account near-universal usage by educated speakers in edited writing is doomed to irrelevance.

So go ahead and use nauseous. The “having nausea” sense is now thoroughly established, and it seems silly to avoid a perfectly good word just because a few peevers dislike it. Even if you stick to the more traditional “causing nausea” sense, you’re unlikely to confuse anyone, because context will make the meaning clear. Just be careful about people who make unsupported claims about language.

By

Celebrate T-Shirt Day with 15% Off

T-Shirt Day is June 21st, and in preparation for the big day, Spreadshirt is offering 15 percent off all t-shirts when you use the coupon code MYSHIRT2014 between now and June 10th. If you met me at the annual conferences of the American Copy Editors Society and liked my shirts, now’s a good chance to get one for yourself. Go check out what’s available in the Arrant Pedantry Store.

And if you’re not the word-nerd-T-shirt-buying type, don’t worry—a new post is coming soon.

By

Mother’s Day

Today is officially Mother’s Day, and as with other holidays with possessive or plural endings, there’s a lot of confusion about what the correct form of the name is. The creator of Mother’s Day in the United States, Anna Jarvis, specifically stated that it should be a singular possessive to focus on individual mothers rather than mothers in general. But as sociolinguist Matt Gordon noted on Twitter, “that logic is quite peccable”; though it’s a nice sentiment, it’s grammatical nonsense.

English has a singular possessive and a plural possessive; it does not have a technically-plural-but-focusing-on-the-singular possessive. Though Jarvis may have wanted everyone to focus on their respective mothers, the fact is that the day still celebrates all mothers. If I told you that tomorrow was Jonathon’s Day, you’d assume that it was my day, not a day for all Jonathons that each of them happens to celebrate separately. That’s simply not how grammatical number works in English. If you have more than one thing, it’s plural, even if you’re considering those things individually.

This isn’t the only holiday that employs some grammatically suspect reasoning in its official spelling—Veterans Day officially has no apostrophe because the day doesn’t technically belong to veterans. But this is silly—apostrophes are used for lots of things beyond simple ownership.

It could be worse, though. The US Board on Geographic Names discourages possessives altogether, though it allows the possessive s without an apostrophe. The peak named for Pike is Pikes Peak, which is worse than grammatical nonsense—it’s an officially enshrined error. The worst part is that there isn’t even a reason given for this policy, though presumably it’s because they don’t want to indicate private ownership of geographical features. (Again, the apostrophe doesn’t necessarily show ownership.) But in this case you can’t even argue that Pikes is a plural attributive noun, because the peak is named for just one Pike.

The sad truth is that the people in charge of deciding where or whether to put apostrophes in things don’t always have the best grasp of grammar, and they don’t always think to consult someone who does. But even if the grammar of Mother’s Day makes me roll my eyes, I can still appreciate the sentiment. In the end, arguing about the placement of an apostrophe is a quibble. What matters most is what the day really means. And this day is for you, Mom.