Arrant Pedantry


The “Only” Comma, pt. 1

A little while ago, one of my coworkers came to me with a conundrum. She had come across a sentence like “Ryan founded the company with his brother Scott” in something she was editing, and she couldn’t figure out if “brother” should be followed by a comma. She’d already spent quite a bit of time trying to answer the question, but she was coming up empty-handed.

The problem? She didn’t know how many brothers Ryan had.

If you’re a little baffled by the relationship between commas and how many brothers someone has, you’ve probably never heard of restrictive and nonrestrictive appositives. An appositive is a word or phrase that follows another and modifies it or provides additional information. In this case, the name “Scott” is an appositive for “brother”; it tells you more about the brother’s identity.

Sometimes an appositive provides information that you need in order to understand the sentence, but sometimes it just provides information that’s helpful but not strictly necessary. The Chicago Manual of Style gives these two examples in section 5.23 (the appositives are “the poet” in the first and “Robert Burns” in the second):

Robert Burns, the poet, wrote many songs about women named Mary.
The poet Robert Burns wrote many songs about women named Mary.

In the first sentence, “the poet” simply provides extra information about Robert Burns, and it could be deleted without affecting the meaning of the sentence. But in the second, “Robert Burns” is necessary. If you cut it out, you wouldn’t know who “the poet” referred to. The former kind of appositive is often called nonrestrictive, while the latter is called restrictive. The second appositive restricts the reference of “the poet” to Robert Burns—that is, it specifies which poet we’re talking about. The first one doesn’t do that, so it’s called nonrestrictive.

The general rule, as it’s presented in The Chicago Manual of Style and elsewhere, is that if there’s more than one thing that the noun could refer to, then the appositive should be restrictive. That is, the appositive needs to specify which of the possible things we’re talking about. If there’s only one thing to which the appositive might refer, then it’s nonrestrictive.

For example, there’s been more than one poet in the history of the earth, so we need a restrictive appositive to tell us that the one in question is Robert Burns. Therefore, going back to my coworker’s problem, if Ryan has more than one brother, then his brother’s name should be restrictive to tell us which of his several brothers we’re talking about, but if he has only one brother, then it should be a nonrestrictive appositive (because there’s only one person that “his brother” could refer to, so the name is just extra information). For this reason, in his book Dreyer’s English, Benjamin Dreyer calls the comma before a nonrestrictive appositive the “only” comma. That is, a comma before “Scott” would tell you that he’s Ryan’s only brother. (Though if “Scott” appears in the middle of a sentence, as in “Ryan and his brother, Scott, founded a company”, then you would need commas on both sides of the appositive to set it off.)
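If it helps to see the logic laid out mechanically, here’s a toy sketch in Python (purely illustrative; the function name and the referent counts are invented, and no editor would actually automate this) of the rule as Chicago and Dreyer describe it:

def punctuate_appositive(noun_phrase, name, possible_referents):
    # If the noun phrase could refer to more than one person, the name
    # is restrictive: it picks out which one, so no commas.
    if possible_referents > 1:
        return f"{noun_phrase} {name}"
    # If the noun phrase can refer to only one person, the name is
    # nonrestrictive: it's just extra information, so it's set off with
    # commas (drop the trailing comma at the end of a sentence).
    return f"{noun_phrase}, {name},"

# Hypothetical counts, just for illustration:
punctuate_appositive("his brother", "Scott", possible_referents=3)  # "his brother Scott"
punctuate_appositive("his brother", "Scott", possible_referents=1)  # "his brother, Scott,"

In other words, the entire question turns on a single number (how many brothers Ryan has) that the editor usually has no way of knowing.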

The problem is that this forces editors to waste time doing genealogy work when we really should just be editing. My coworker had already spent who knows how long trying to figure out how many brothers Ryan had, but she couldn’t find anything definitive. So should she put in a comma or not?

I gave her a controversial opinion: I would leave the comma out, because it simply doesn’t matter how many brothers Ryan has. If it were relevant, why wouldn’t the writer have made it more explicit, as in “Ryan founded the company with his only brother, Scott”?

I’m not sure what my coworker ended up doing, but she didn’t seem happy with my heretical opinion on commas. Afterwards, I took to Twitter to voice my opinion that worrying about these commas is a waste of time. The ensuing discussion prompted a friend and fellow editor, Iva Cheung, to make the following cartoon, which she dedicated to me:

(Follow the link to see the mouseover text and bonus.)

It may indeed sound ridiculous, but my coworker is far from the only editor or writer to have grappled with this problem. In a New Yorker piece on the magazine’s famously assiduous fact-checking, John McPhee writes about a similar dilemma. In a book draft, he had written, “Penn’s daughter Margaret fished in the Delaware.” But was that right? He writes, “Should there be commas around Margaret or no commas around Margaret? The presence or absence of commas would, in effect, say whether Penn had one daughter or more than one. The commas—there or missing there—were not just commas; they were facts.”

But as Jan Freeman, a former copyeditor, asked in a column for the Boston Globe, “Were they important facts?” She continues, “How much time should you spend finding the answer—commas or no commas—to a question nobody’s asking?”

That is, is any reader asking how many daughters William Penn had or how many brothers Ryan had? Or, to be more specific, is anyone thinking, “I wonder if the number of brothers Ryan has is exactly equal to one or is some unspecified number greater than one”? And even if they are, are they expecting that information to be communicated via a comma or the lack thereof? I suspected that most people who aren’t editors aren’t reading as much into those commas as we think we’re putting into them, so I turned to Facebook to ask my friends and family members. The results were pretty surprising.

I provided the following sentences and asked what people thought the difference was:

Frank and his brother Steve started a company.
Frank and his brother, Steve, started a company.

Some people said that you use the first sentence if the reader doesn’t know Steve and the second one if they do. Some people said that the latter was always correct and that the former is incorrect or at least more casual. But someone else said that the first sentence looked correct and that the second looked overpunctuated. Another person said that the second sentence gives more emphasis to Frank’s brother. Someone else said that the second implied that the name of Frank’s brother was being provided for the first time and possibly that it’s his only brother, while the first implied that we already know the name of Frank’s brother. But someone else said that she’d use commas if she went into business with one of her brothers, but she’d use no commas if she went into business with her one and only husband. A couple of people said that they thought the issue had to do with whether or not the information in the appositive was needed as a qualifier—that is, whether the sentence makes sense without it. Someone else thought that you don’t need commas if the appositive is short but that you do if it’s longer. Another commenter said that the rule probably varied from one style guide to another. But a few people said they’d read no difference between the two, and one friend responded simply with this gif:

I-DENTICAL!

Out of more than two dozen respondents, only a few answered with the editorially sanctioned explanation: that the first implies that Frank has multiple brothers, while the second implies that he has only one. One person posted this comment: “If a writer wants to convey that Frank has one brother or more, this is an awful way of sneaking in that information. If the information is irrelevant, then I think most readers will not notice the presence or absence of a comma, or conclude anything on that basis, and that’s just fine.”

I think that there are two connected issues here: what the comma means and whether it’s important to communicate that an appositive is the only thing in its class or one of multiple things in its class. And both of them are essentially questions of pragmatics.

Most people think of meaning as something that is simply inherent in words (or punctuation marks) themselves. Put in a comma, and the sentence means one thing. Leave it out, and it means something else. But meaning is a lot messier than this. It depends a lot on what the speaker or writer intends and on how the listener or reader receives it.

In other words, there are really three aspects to meaning: the basic meaning of the utterance itself, known as the locution; the intent of the writer or speaker, known as the illocution; and the way in which the listener or reader interprets the message, known as the perlocution. That is, meaning isn’t found only in the utterance itself; it’s found in the entire exchange between writer and reader.

As I explained in a previous post, sometimes there’s a mismatch between the intended meaning and the form of the utterance itself. For example, if I ask, “Do you know what time it is?”, I’m not literally just checking to see if you have knowledge of the time. I’m asking you to tell me the time, but I’m doing it in a slightly indirect way, because sometimes that’s more polite—maybe I don’t know if you have a watch or phone handy, so I don’t want to presume. In this case, we could say that the illocution (my intent) is “Tell me the time”, even though the locution itself is literally just asking if you know the time, not asking you to tell me the time. Even though my utterance has the form of a yes-or-no question, you’d probably only answer “Yes, I know what time it is” if you were trying to be a smart alec. But people are usually pretty good at reading each other’s intent, so the perlocution—the message you receive—is “Jonathon wants me to tell him the time.”

The comma example is supposedly straightforward. If the writer or editor intends for a comma to indicate that Ryan has only one brother, and if it’s an established convention that that comma indicates that the thing that comes after it is the only thing that the preceding noun could refer to, and if the reader gleans from that comma that Ryan has only one brother, then everything works just as it’s supposed to. But if, for example, the writer intends to communicate that someone has only one spouse but they leave out the comma, then sometimes smart-alecky readers or editors ignore the writer’s obvious intent and insist on an incorrect reading based on the absence of the comma. That is, they ignore the obvious illocution and deliberately misread the text based on a convention that may not be shared by everyone. They’re essentially pretending that meaning comes only from the locution and not from the writer’s intent.

For instance, I remember one time in my basic copyediting course in college when my professor pointed out a book dedication that read something like “To my wife Mary”. She said that the lack of a comma clearly means that the author is a polygamist. I think I was the only one in the class who didn’t laugh at the joke. I just thought it was stupid, because obviously we know that the author isn’t a polygamist. First off, polygamy isn’t legal in the US, so it’s a pretty safe assumption that the author has only one wife. Second, if he had really meant to dedicate the book to one of his multiple wives, he probably would have written something like “To my third wife, Mary”. Pretending to misunderstand someone based on a rule that most readers don’t even know just makes you look like a jerk.

And, judging from the responses I got on Facebook, it appears that most readers are indeed unfamiliar with the rule. Many of them don’t know what the comma is supposed to mean or even that it’s supposed to mean something. Whether the comma has no inherent meaning or has an unclear meaning, there’s a problem with the locution itself. The “only” comma simply isn’t an established convention for most readers.

But there’s a problem with the illocution too, and here’s where the other question of pragmatics comes into play. Conversation—even if it’s just the sort of one-way conversation that happens between a writer and a hypothetical reader—is generally guided by what linguists call the cooperative principle. And part of this principle is the idea that our contribution to the conversation will be relevant and will be communicated in an understandable manner.

As one of my commenters said, “If a writer wants to convey that Frank has one brother or more, this is an awful way of sneaking in that information.” So we end up with two pragmatic problems: editors are inserting irrelevant information into the text, but readers don’t even pick up on that information because they’re unaware of the convention or don’t anticipate what the editor is trying to communicate. Even when they try to guess the editor’s intent (because it’s almost always the editor putting in or taking out the comma, not the writer), they often guess wrong, because it’s not obvious why someone would be trying to sneak in information like “Ryan has only one brother” in this manner. In effect, the two problems cancel out, and all we’ve done is waste time and possibly annoy our writers and waste their time as well.

And because so few of our readers understand the purpose of the “only” comma, I think it falls firmly into what John McIntyre calls “dog-whistle editing”, which he defines as “attention to distinctions of usage”—or, in this case, punctuation—“that only other copy editors can hear.”

And, as Jan Freeman showed in her Boston Globe column, there’s evidence that this rule is a relatively recent invention. No wonder readers don’t know what the “only” comma means—it’s a convention that editors just made up. And, for the record, I’m not saying that the whole restrictive/nonrestrictive distinction is bunk, but I do think that the “only” comma is the result of an overly literal interpretation of that distinction. (But I’ll save the exploration of the rule’s origins for a future post.)

For now, I think that the solution, as I told my coworker, is to just stop worrying about it. It almost never matters whether someone is someone else’s only brother or daughter or friend or whether a book is someone’s only book, and it’s certainly not worth the time we spend trying to track down that information. Editing is fundamentally about helping the writer communicate with the reader, and I don’t think this rule serves that purpose. Let’s put the dog whistle away and worry about things that actually matter.


My Latest for Grammar Girl: “Verbing Nouns and Nouning Verbs”

A lot of people dislike it when nouns like task and dialogue are turned into verbs, but this process has been a normal part of English for centuries. In my latest piece for Grammar Girl, I explain why we should all relax a little about verbing nouns.

Read the whole piece or listen to the episode here.


Get 20 Percent Off at the Arrant Pedantry Store

Today and tomorrow only, you can get 20 percent off T-shirts and other items at the Arrant Pedantry Store. Just use the code ANYTHING20 at checkout. And remember that you can customize the design color and even put the designs on other items, including mugs and phone cases. Just hit the pencil icon below the item and then pick the product you want.


The Style Guide Alignment Chart

I’ve been thinking a lot about style guides lately, and I decided that what the world really needs right now is the definitive style guide alignment chart. I posted a version on Twitter the other day, but I wanted to do a slightly expanded version here. (Quotes are taken from easydamus.com.)

[The chart: Lawful Good: The Chicago Manual of Style; Neutral Good: The MLA Handbook; Chaotic Good: Buzzfeed Style; Lawful Neutral: The Elements of Style; True Neutral: The Wikipedia Style Guide; Chaotic Neutral: Wired Style; Lawful Evil: The New Yorker Style Guide; Neutral Evil: The AP Stylebook; Chaotic Evil: Publication Manual of the American Psychological Association]

Lawful Good: The Chicago Manual of Style

A lawful good character “combines a commitment to oppose evil with the discipline to fight relentlessly.” And boy howdy, is Chicago relentless—the thing is over 1,100 pages! Even if you use it every day in your job as an editor, there are probably entire chapters that you’ve never looked at. But it’s there with its recommendations just in case.

Neutral Good: The MLA Handbook

“A neutral good character does the best that a good person can do.” Look, the MLA Handbook certainly tries to do what’s right, even if it can’t make up its mind sometimes. Remember when it said you should specify whether a source was print or web, as if that wasn’t obvious from context, and then it took that rule out in the next edition? Enough said.

Chaotic Good: The Buzzfeed Style Guide

“A chaotic good character acts as his conscience directs him with little regard for what others expect of him.” Buzzfeed style is guided by a strong moral compass but doesn’t feel beholden to a lot of traditional rules. It has great entries on gender, race, and disability and would probably recommend singular “they” in that last sentence. It also has entries on celebricat (a celebrity cat), dadbod, and milkshake duck, because that’s the internet for you.

Lawful Neutral: The Elements of Style

“A lawful neutral character acts as law, tradition, or a personal code directs her.” The Elements of Style, a.k.a. Strunk & White, certainly upholds a lot of laws and traditions. Are they good laws? Look, I don’t see how that’s relevant. The point is that if you follow its diktats by omitting needless words and going which hunting, your writing will supposedly be just like E. B. White’s.

True Neutral: The Wikipedia Style Guide

A true neutral character “doesn’t feel strongly one way or the other when it comes to good vs. evil or law vs. chaos.” Wikipedia doesn’t care for your edit wars. There are lots of acceptable style choices, whether you prefer American or British English. Just pick a style and stick with it.

Chaotic Neutral: Wired Style

A chaotic neutral character “avoids authority, resents restrictions, and challenges traditions.” Wired Style has a chapter called “Be Elite” and another called “Screw the Rules.” The first edition is also printed on day-glo yellow paper, because screw your eyes too. It also has a chapter called “Anticipate the Future” but probably didn’t anticipate that it would go out of print twenty years ago.

Lawful Evil: The New Yorker

A lawful evil character “plays by the rules but without mercy or compassion.” The New Yorker uses jarring diereses to prevent misreading of words that no one has trouble reading, and it doubles consonants in words like focussed because it said so, that’s why. It also unnecessarily sets off certain phrases with commas based on a hyperliteral idea of what restrictive and nonrestrictive mean. Tell me that’s not mercilessly evil.

Neutral Evil: The Associated Press Stylebook

“A neutral evil villain does whatever she can get away with.” The AP Stylebook used to say that two things couldn’t collide unless they were both in motion, and it also used to recommend against not only split infinitives but also adverbs placed in the middle of verb phrases, which is the normal place to put them. They only abandoned those rules when John McIntyre finally called them on that BS.

Chaotic Evil: Publication Manual of the American Psychological Association

A chaotic evil character is “arbitrarily violent” and “unpredictable.” Have you ever seen APA-style references? Some titles are in title case, while others are in sentence case. And, for reasons I can’t understand, volume numbers are italicized but issue numbers aren’t, even though there’s no space between them. “Arbitrarily violent” is the best description of that mess that I’ve seen.

Naturally, there will be some disagreement over the placement of some entries. I’ve also had a lot of calls to include Bluebook, with most people wanting to put it somewhere on the evil axis, while others have wanted to include The Yahoo! Style Guide, The Microsoft Manual of Style, or AMA Manual of Style. I’ve decided that I’m probably going to have to do a yearly update to add new entries or move some to more fitting spots. In the meantime, if you’ve got opinions—and I’m sure you do—feel free to chime in below.


That’s My Name; Please Wear It Out

Regular readers of this blog have probably noticed that my name has a slightly unusual spelling: it’s Jonathon rather than Jonathan. If you’ve ever been tempted to joke that my parents spelled my name wrong, please don’t. I’ve been hearing that joke for over thirty years now, and I can promise you that it wasn’t funny even the first time.

But in a way the jokers are right. I’m named after the Old Testament figure (the son of Saul and friend of David), whose name is usually rendered Jonathan in English translations of the Bible. My parents thought the -on form was the usual spelling, so that’s what they put on my birth certificate. But I happen to like the spelling of my name, and, anyway, it’s a legitimate variant. The NameVoyager on Baby Name Wizard shows that it’s been around since at least the 1940s or ’50s, though it’s never rivaled Jonathan in popularity. I’ve been asked if the unusual spelling of my name helped propel me to become an editor because I had to pay extra attention to the spelling, but I don’t think it’s true. It makes a nice story, though.

However, my name does serve as sort of a miniature editing test for those times when I’m hiring editorial interns. I’m usually pretty generous with who I invite to come take our editing test, but applicants who address their emails to Jonathan Owens never seem to do as well on it. If you’re applying to an editing job, you’d do well to make sure you spell the hiring manager’s name right.

But I’ve long since resigned myself to the fact that most people won’t spell it right without help. I don’t usually bother to spell it for people in situations where it doesn’t matter, like when someone is taking my order at a fast-food place and they just need to get it close enough that they can call out my name correctly. (Though I appreciate when they ask how to spell it anyway.)

Occasionally I’ll get it spelled right, but more often I get Jonathan or Johnathan or Johnathin or some other weird spelling that makes me wonder if the person writing it has ever seen the name before. For years the weirdest version I’d ever gotten was Jhonathen, but just a couple of months ago I got a receipt that said Jouhathine. I’m not sure that one will ever be topped.

But the one thing that I can’t stand is people automatically shortening my name to Jon. Though, in all honesty, sometimes it’s just as annoying when they ask if they can shorten it. On a couple of occasions I’ve had conversations like this:

Arby’s cashier: Can I get a name?
Me: Jonathon.
Arby’s cashier: Can I put John? I don’t want to butcher it.
Me, mentally: You kind of just did.

It’s annoying enough when I give my name to the cashier at Arby’s as Jonathon and they put Jon or John* on my receipt, but it really grates when I introduce myself to someone as Jonathon and they immediately call me Jon. You’d be surprised how often I’ve had exchanges that go like this:

Them: What’s your name?
Me: Jonathon.
Them: Jon? Nice to meet you.

Did I not enunciate well enough? Was their attention span so short that they could only manage to catch the first syllable? Do they just assume that anybody with a name as long as mine—three whole syllables!—naturally prefers a short form, even though I didn’t give them one? And then I always feel like a jerk for correcting them, even though I shouldn’t have to. (Side note: There was a lot of gratuitous backstorification in Solo: A Star Wars Story, but the part that annoyed me the most was when Han learns Chewbacca’s name and then decides to call him Chewie—without asking if he was okay with it!—because Chewbacca is just too long.)

The funny thing is that I tried to go by Jon once when I was a kid, and it didn’t go well. We had moved to Utah during the summer and were living with my grandma while we saved for a house. On the first day of second grade in my new school, my teacher asked if I preferred Jon or Jonathon. On a whim, I said Jon, so that’s what everyone called me. The only problem is that I wasn’t used to going by Jon—my family only ever called me Jonathon—so when people said my name, it always took me a second to realize that they were talking to me. But by then it was too late to do anything about it. I felt too embarrassed to announce to the class that, on second thought, I preferred Jonathon after all.

Thankfully, we moved into our own place just a few weeks into the school year, so I was able to start over at a new school, once again as Jonathon.

And that’s how I’ve remained ever since. Maybe you’re dying to point out that it looks like a misspelling to you, or you might be itching to ditch those extra syllables and just call me Jon, but please refrain. I’m happy with my name just how it is.


* You may be surprised to learn that the names Jonathan and John are unrelated. Jonathan comes from the Hebrew יְהוֹנָתָן‎ (Yehonatan) or יוֹנָתָן‎ (Yonatan), meaning ‘Jehovah has given’. John, on the other hand, comes from the Hebrew יוֹחָנָן‎ (Yochanan), meaning ‘God is gracious’. But because of their similar forms, people conflate Jon and John and then start spelling Jonathan like Johnathan.


20 Percent Off at the Arrant Pedantry Store

There’s a sale going on at the Arrant Pedantry Store today and tomorrow only. Just use the code LOVE19 at checkout to get 20 percent off any order—there’s no minimum purchase.

And if you haven’t visited the store in a while, you might want to check out some of my new designs. Take a look!

[Designs pictured: “Ask me about the Great Vowel Shift”, “Ask me about linguistics”, “Ask me about the Oxford comma”, and “I (manicule) OT”.]


Science and Shit

A recent discussion on Twitter about whether the line “I’m gonna have to science the shit out of this” was in Andy Weir’s book The Martian or was only found in the movie reminded me of one of my favorite facts: science and shit are related. So let’s science the shit out of this etymology.

It all starts (as so many of these things do) with Proto-Indo-European. The root *skey meant ‘to cut, split, separate’. The extended form *skeyd became scit in Old English. The sc sequence was originally pronounced /sk/ in Old English and other Germanic languages, but it eventually became pronounced /ʃ/ (the “sh” sound) in Old English. The sh spelling came later under the influence of French scribes. But despite those minor spelling changes, the word has remained virtually unchanged in over a thousand years. You could travel back to Anglo-Saxon times, and they would understand you if you said shit.

So how did a root meaning ‘to cut, split, separate’ come to mean ‘feces’? From the notion of separating it from your body. The same metaphor is found in the Latin excrementum, which employs the unrelated root meaning ‘to sift, separate’.

This means that shit probably started out as a euphemism. Speakers of Proto-Indo-European or Proto-Germanic may have talked about needing to go separate something rather than use a more unsavory term. In English, shit was fairly neutral for a long while and apparently didn’t become taboo until around 1600, at which point it mostly disappeared from print. It isn’t found in Shakespeare’s plays or in the King James Bible.

Euphemisms often become sullied by the connotations of the thing they’re euphemizing, which leads to the need for new euphemisms, a process sometimes called the euphemism treadmill. So even if shit started life as a polite way to talk about defecation, it eventually became a rather crude one.

(By the way, the “ship high in transit” etymology is pure . . . well, you know. Kory Stamper’s excellent book Word by Word covers this and other bogus acronymic etymologies in more detail.)

In Latin, the PIE root *skey gave rise to the verb scire ‘to know, to understand’. It probably developed from ‘separate’ to ‘distinguish’ or ‘discern’ (that is, ‘tell things apart’) and then to the more general sense of ‘know’.

A noun form of the present participle of scire, scientia, originally meant the state of knowing—that is, ‘knowledge’. Scientia became science in French, which was then borrowed into English. In English it came to mean not just knowledge but the body of knowledge or the process of gaining new knowledge through the scientific method.

The Latin scire gives us a whole bunch of other words too, including conscience (from conscire ‘to know well, to be aware, to have on one’s conscience’), conscious (also from conscire), prescient (‘knowing beforehand’), and nescient (‘not knowing, ignorant’). A related form, nescius, is also, surprisingly, the origin of nice, which is a great example of just how much meanings can change over time. Though it originally meant ‘ignorant’, it shifted through ‘foolish’ to ‘lascivious, wanton’ to ‘showy, ostentatious’ to ‘refined’ and then ‘well mannered’ or ‘kind’. The Oxford English Dictionary records many more obsolete senses. A different descendant of *skey yielded the Latin scandula, which later became scindula and was then borrowed into English, where it became shincle and then shingle (from the notion of splitting off a thin piece of wood).

In Ancient Greek, the root *skey yielded schism (meaning a division between people, often in a religious organization) and schizo-, as in schizophrenia (literally ‘a splitting of the mind’).

Back in English, *skey also yielded shed (meaning ‘to cast off’, as in shedding skin, but not the shed meaning a storage building). It probably also gave us sheath (from the notion of a split piece of wood in which a sword is inserted). The Online Etymology Dictionary says it also gives us shin (from the sense of ‘thin piece’, though that’s a little opaque to me). And it’s the source of the word share, from the notion of dividing what you have with someone else. It also gives us shiver (in the sense of a small chip or fragment of wood), which still appears as a dialectal word for ‘splinter’.

In Old Norse, *skey yielded skið, also meaning ‘piece of wood’, which eventually gave us the word ski.

And *skey appears to be a variant of another root, *sek, meaning ‘to cut’, which gives us a whole host of other words like section and segment and saw, but I should probably cut this post off somewhere and save some things for another day.


An Etymological Workout

If you’re like me and are still trying to get back into the swing of things after a nice holiday break, you might be having a little trouble focusing on work. You might even be suffering from a mild case of ergophobia, or the fear of work. So here’s some etymology to distract you.

Work comes from the Proto-Germanic *werkam, which in turn comes from the Proto-Indo-European *wérǵom, ultimately from the root *werǵ ‘to make’. In Ancient Greek, *wérǵom gave rise to ergon, which gives us energy, from the prefix en- ‘at’ + erg ‘work’ (‘at work, active’), as well as terms like ergonomics and ergative (and, yes, ergophobia). It also apparently gives us the name George, meaning ‘farmer’ or ‘husbandman’, which comes from ge ‘earth’ + ergon ‘work’, literally ‘earth worker’.

Forms of ergon also gave us surgery (from earlier chirurgerie, from the Greek kheir ‘hand’ + ergon ‘work’), metallurgy (‘metal work’), liturgy (‘public work’ or ‘public worship’), thaumaturge (‘wonder worker’), dramaturge (‘drama worker’), demiurge (‘public worker’, from a different root meaning ‘public’ than the one in liturgy), “argon” (from the prefix a- ‘not’ + ergon ‘work’, because argon is inert), lethargy (from leth ‘to forget’ + argos ‘not working, idle’), allergy (‘other working’), and synergy (‘working together’).

A variant of the PIE *werǵ, *worg, also produced the Ancient Greek organon, meaning ‘instrument’ or ‘tool’, which eventually made its way into English as organ (meaning the musical instrument, the body parts, and other senses). From this we also get the verb organize, which originally meant ‘to put in working order’, as well as other derived forms like organic and organism.

It also gave us orgy, which originally meant ‘secret rites’, probably from the sense of some kind of work performed for one’s gods. The Online Etymology Dictionary says: “OED says of the ancient rites that they were ‘celebrated with extravagant dancing, singing, drinking, etc.,’ which gives ‘etc.’ quite a workout.” (This root did not, however, give us the word orgasm.)

The Proto-Indo-European *wérǵom also yielded the Germanic bulwark (literally ‘bole work’ or ‘tree work’), which originally meant a defensive wall made of logs. This word was borrowed into English either from Middle Dutch or from Middle High German. It was also borrowed into French and became boulevard, with an anomalous change from /k/ to /d/ at the end. It eventually came to mean a tree-lined street and was then borrowed back into English.

And, of course, it also yields the English wright, meaning ‘worker’ or ‘maker’, and the archaic wrought, which is an old past-tense form of work and not a past-tense form of wreak as some mistakenly believe.

So that one little root from Proto-Indo-European has been pretty productive. I should probably try to be too.


Black Friday Sale at the Arrant Pedantry Store

It’s Black Friday (ugh), but from now through Sunday, everything at the Arrant Pedantry Store is 15 percent off (yay!). Now’s a great chance to get a word-nerdy shirt for that special someone in your life (or for yourself). Just use the code CYBER18 at checkout. Or if you wait until Monday, you can get 15 percent off and free shipping, which I think is the best sale that Spreadshirt has ever offered. Use the code CYBERSALE on Monday to get that deal.

And don’t forget that you can customize products. Just hit the Customize button, find the design you want, and put it on whatever product you want. You can put Battlestar Grammatica on an iPhone case or I Could Care Fewer on a tote bag.

Check it out!



100,000 Words Whose Pronunciations Have Changed

We all know that language changes over time, and one of the major components of language change is sound change. Many of the words we use today are pronounced differently than they were in Shakespeare’s or Chaucer’s time. You may have seen articles like this one that list 10 or 15 words whose pronunciations have changed over time. But I can do one better. Here are 100,000 words that illustrate how words change.

  1. a: Before the Great Vowel Shift, the name of the first letter of the alphabet was pronounced /aː/, much like when the doctor asks you to open your mouth and say “ah” to look down your throat. In Old English, it was /ɑː/, which is pronounced slightly further back in the mouth. The name of the letter was borrowed from Latin, which introduced its alphabet to much of Europe. The Romans got their alphabet from the Greeks, probably by way of the Etruscans. But unlike the Greeks, the Romans simply called the letters by the sounds they made. The corresponding Greek letter, alpha, got its name from the Phoenician aleph, meaning ‘ox’, because the letter aleph represented the first sound in the word aleph. In Phoenician this was a glottal stop (which is not written in the Latin alphabet). The Greeks didn’t use this sound, so they borrowed it for the /a/ sound instead.
  2. a: This casual pronunciation of the preposition of goes back at least to the 1200s. It doesn’t appear in writing much, except in dialogue, where it’s usually attached to another word, as in kinda. But of itself comes from an unstressed form of the Old English preposition æf. Æf didn’t survive past Old English, but in time a new stressed form of of arose, giving us the preposition off. Of and off were more or less interchangeable until the 1600s, at which point they finally started to diverge into two distinct words. Æf is cognate with the German ab, and these ultimately come from the Proto-Indo-European *h₂epó ‘off, away, from’, which is also the source of the Greek apo (as in apostasy) and the Latin ab (as in abuse). So the initial laryngeal sound in *h₂epó disappeared after changing the following vowel to /a/, the final /o/ disappeared, the /p/ fricatized to /f/, the vowel moved back and reduced, the /f/ became voiced to /v/, and then the /v/ fell away, leaving only a schwa, the barest little wisp of a word.
  3. a: The indefinite article a comes from an unstressed version of the numeral one, which in Old English was ān, though it also inflected for gender, number, and case, meaning that it could look like āne, ānum, ānes, ānre, or ānra. By Middle English those inflections were gone, leaving only an. The /n/ started to disappear before consonants starting in the 1100s, giving us the a/an distinction we have today. But the Old English ān came from an earlier Proto-Germanic *ainaz. The az ending had disappeared by Old English, and the diphthong /ai/ smoothed and became /ɑ:/. In its use as an article, its vowel shortened and eventually reduced to a schwa. But in its use as a numeral, it retained a long vowel, which eventually rose to /o:/ and then broke into the diphthong /wʊ/ and then lowered to /wʌ/, giving us the modern word one. The Proto-Germanic *ainaz goes further back to the Proto-Indo-European *óynos, so between PIE and Proto-Germanic the vowels lowered and the final /s/ became voiced.
  4. aback: This adverb comes from the prefix a- and the noun back. The prefix a- comes from an unstressed form of the preposition on which lost its final /n/ and reduced to a schwa. This prefix also appears in words like among, atop, awake, and asleep. On comes from the Proto-Germanic *ana, which in turn comes from the Proto-Indo-European *h₂en-, which is also the source of the Greek ana-, as in analog and analyze. As with *h₂epó, the initial laryngeal sound changed the vowel to /a/ and then disappeared. Back, on the other hand, has changed remarkably little in the last thousand years. It was spelled bæc in Old English and was pronounced just like the modern word. It comes from a Proto-Germanic word *baka, though its ultimate origin is unknown.

Hopefully by now you see where I’m going with this. It’s interesting to talk about how words have changed over the years, but listicles like “10 Words Whose Pronunciations Have Changed” can be misleading, because they imply that changes in pronunciation are both random and rare. Well, sound changes are random in a way, in that it’s hard to predict what will change in the future, but they’re not random in the sense that they affect random words. Sound changes are just that—changes to a sound in the language, like /r/ disappearing after vowels or /t/ turning into a flap in certain cases in the middle of words. Words can randomly change too, but that’s the exception rather than the rule.

And sound changes aren’t something that just happen from time to time, like the Great Vowel Shift. They’re happening continuously, and they have been happening since the beginning of language. If you like really deep dives (or if you need something to combat your insomnia), this Wikipedia article details the sound changes that have happened between late Proto-Germanic, spoken roughly 2,000 years ago, and the present day, when changes like th-fronting in England (saying fink for think) and the Northern Cities Shift in the US are still occurring.
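To make that distinction concrete, here’s a toy sketch in Python (the word list is a made-up stand-in, and real sound changes operate on sounds rather than spellings) contrasting a regular sound change, which affects every word containing the target sound, with a sporadic change to a single word:

import re

# A tiny made-up lexicon, spelled orthographically for simplicity.
lexicon = ["think", "three", "bath", "cat", "water"]

def apply_sound_change(words, pattern, replacement):
    # A regular sound change is a single rule applied across the board:
    # every word containing the target sound is affected.
    return [re.sub(pattern, replacement, word) for word in words]

# Something like th-fronting ("fink" for "think"): th -> f everywhere.
apply_sound_change(lexicon, "th", "f")  # ['fink', 'free', 'baf', 'cat', 'water']

# An idiosyncratic change, by contrast, would be an edit to a single
# entry rather than a rule applied to the whole lexicon.

It’s a crude model, but it captures the point: the change is to the sound, and the words just come along for the ride.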

So while it’s okay to talk about individual words whose pronunciations have changed, I think we shouldn’t miss the bigger picture: it’s language change all the way down.
