A while ago at work, I ran into a common problem: trying to decide whether to stop editing out a usage I don’t like. In this case, it was a particular use of “as such” that was bothering me. To me, “as such” is a prepositional phrase, and “such” is a pronoun that must refer to some sort of noun or noun phrase, as in “I’m a copy editor; as such, I fix bad writing.” In this sentence, “such” refers to the noun phrase “a copy editor”; in other words, it means, “I’m a copy editor; as a copy editor, I fix bad writing.”
But most of the time when I encounter it nowadays, it’s simply used to mean “therefore” or “consequently” (for more on that, see this post I wrote several years ago for Visual Thesaurus). And when I encountered it on that day, I changed it, as I always had before. But this time, I kept thinking about what makes a usage right or wrong and how we as editors decide which rules to enforce and which ones to let slide.
“As such” may be a simple transitional adverb for most people, but I still reflexively look for a noun phrase for that “such” to refer to. And I do this even though I know I’m in the minority. I can look at the evidence and see that the shift has happened, but it hasn’t happened in my own mental grammar.
And I think this tells us a lot about why it’s so hard for us to change our minds about usage. Knowing that I’m in the minority hasn’t magically changed how the phrase works in my head. Some things are so habitual that it’s hard to root them out. And of course there’s more than a bit of snobbery at work too—the adverbial use of “as such” sounds less educated to me, so I don’t have much incentive to give up my meaning for the new one.
Sometimes editors insist that it’s our job to preserve older meanings and slow language change, but I don’t believe it is. Nobody hired us to preserve the language. We’ve simply been hired to fix errors and make text clear and readable. And anyway, changing “as such” to “therefore” might make me feel slightly less annoyed, but it’s not going to have any measurable impact on Standard English. Even if all the copy editors in the English-speaking world were to edit it out, it would likely continue to thrive in speech and unedited text. The rest of the language will keep marching on without us.
Some editors might say that even though usage is changing, the new meaning isn’t correct or accepted yet, as if there will come some point at which it becomes correct or accepted and then everything will magically change. But the question of what’s correct or accepted is much less clear than most people realize.
What makes a particular usage correct? Is it official sanction by usage commentators? Inclusion in a reputable dictionary or style guide? Usage by well-regarded writers or some other elites? A critical mass of popular usage? Some combination of the above? And even those questions raise other questions. What if one usage commentator accepts it and another doesn’t? How do you know if a dictionary or style guide is reputable? How many well-regarded writers need to use it, and for how long? How big a mass of popular usage do you need before you decide it’s a critical one? Is it a simple majority, or maybe 75 percent or 90 percent? Does it matter if the rule in question has some sort of history behind it or if it’s a pure invention? Does it matter if the allegedly incorrect usage arose from ignorance or by some other means? Does it matter how vociferously people object to the allegedly incorrect usage?
The questions go on and on. And my answer is that, honestly, I don’t believe it’s possible to come up with any reliable test for deciding which rules to enforce and which to abandon. Even if you can answer all of the questions above, there is no formula that you plug them into that will tell you what’s correct. And even though it’s sometimes said that language is the ultimate democracy, with every user casting a vote, the truth is that there isn’t really a vote either. Nobody ever tallies up the numbers and declares a winner.
That is, the answer is that there is no answer.
This doesn’t stop people from trying to come up with answers, of course. The American Heritage Dictionary had its usage panel, but that was just an opinion poll of mostly older, mostly male, and mostly white scholars and writers. Some usage dictionaries have relied on corpus data to find out what actual usage is, though finding out what usage is doesn’t tell us which usage is right. Bryan Garner gives some first principles in his usage dictionary, but they’re not true first principles—they’re inconsistently applied and occasionally contradict each other, so it often feels like they’re applied after the fact to justify the desired judgment.
This is one reason why I love Merriam-Webster’s Dictionary of English Usage so much. It mostly doesn’t attempt to declare what’s right and wrong. It basically says, “Here’s how this word has been used, and here’s what people have said about it; now make up your own mind.” It embraces the relativity.
A lot of editors find that approach frustrating because they just want to know if they should leave the word or phrase in question or change it, but I find it refreshing. It doesn’t try to pretend that there are objective answers to questions of opinion. That is, when you’re asking if you should accept a usage, you’re not asking a question that can be answered with facts.
Is it good to know what people’s opinions on usage are? Absolutely. But opinions can’t tell me what I should do. They can’t tell me whether I should accept “as such” to mean “therefore” or whether I should keep editing it out at work. Ultimately, I have to decide for myself what to do.
So the next time a new “as such” came across my desk, I made a decision: I let it go.
First off, those little dots that appear in words like coöperation aren’t umlauts: they’re diaereses. But a few paragraphs into the article, they actually correct the headline with this fake quote from New Yorker editor David Remnick: “We already know some of you don’t like the dots. You probably call them umlauts. Well, you’re wrong: They’re actually called diaeresis, so try thinking twice before trying to correct us on how we use them.” But this just introduced another problem: diaeresis is the singular form. The plural form is diaereses. That is, the second o in coöperate has a diaeresis over it, but you’d say that the New Yorker uses diaereses in words with doubled vowels.
A diaeresis is a pair of dots that appear over a vowel to indicate that the vowel is pronounced separately from an adjacent vowel. For example, in English oo is generally pronounced as a single vowel sound, usually either the /u/ sound in boot or the /ʊ/ in book. The New Yorker puts a diaeresis over the repeated vowel in words like cooperate to show that those two o’s are pronounced as two distinct vowels. This also applies to other words with repeated vowels like reelect.
English doesn’t use very many diacritical marks, and the ones that it does use are almost entirely from foreign borrowings. But the diaeresis is uncommon in English even compared to other diacriticals. It mostly appears in French borrowings like naïveté (though naïve is often simplified to naive), where it serves the same purpose: showing that the two adjacent vowels are pronounced separately and not as a diphthong or a single long vowel. (In French, for example, ai is pronounced with the /ɛ/ sound in bet, so without the diaeresis, naive would be pronounced like Neve Campbell’s first name.)
The diaeresis goes all the way back to Ancient Greek, where it was also used the same way. Its first use, though, was to separate a vowel at the start of a new word from a vowel at the end of a preceding word, because Greek was originally written without any spaces between words. The word diaeresis comes from the Ancient Greek word for ‘division’, from diairein ‘to divide, separate’, from dia– ‘apart’ + hairein ‘take’. That is, it was simply a mark that divided two words or two adjacent vowels. Some later European languages saw the utility of a mark that indicated that two vowels were meant to be pronounced individually, and they adopted it.
But it has never been common in English outside of the pages of the New Yorker. In Confessions of a Comma Queen, former New Yorker copy editor Mary Norris briefly recounts the rationale behind the magazine’s style choice (excerpted here on Merriam-Webster’s website):
Basically, we have three options for these kinds of words: “cooperate,” “co-operate,” and “coöperate.” Back when the magazine was just developing its style, someone decided that the first could be misread and the second was ridiculous, and so adopted the third as the most elegant solution with the broadest application.
Norris also says that the style editor was on the verge of changing his mind on the diaereses back in 1978, but then he died, and “no one has had the nerve to raise the subject since.” Norris herself admits that “most people would not trip over the ‘coop’ in ‘cooperate’ or the ‘reel’ in ‘reelect’” and that diaereses are the number one complaint from readers, but apparently they’re not going anywhere anytime soon. (I think that if you’re afraid to talk about changing your style guide—especially when readers find your style distracting and annoying—then you have either a bad style guide or a bad culture surrounding your style guide or both.)
But on to umlauts. Umlauts look just like diaereses—you could call diaereses and umlauts homoglyphic—but they’re used in a very different way and have a distinct origin. The umlaut symbol originated in German but has been borrowed into other languages, including Swedish, Hungarian, Turkish, and Finnish. But to understand what an umlaut does, you need to understand a little bit about where vowels are produced in the mouth.
The vowel /u/ (the sound in “boot”) is a high back vowel: it’s pronounced with the tongue pulled back and the mouth only slightly open so that the tongue is close to the roof of the mouth. The vowel /i/ (the sound in “beet”), on the other hand, is a high front vowel: it’s similarly pronounced with the mouth only slightly open so that the tongue is close to the roof of the mouth, but the tongue is pushed forward instead. If you alternate between saying “oo” and “ee”, you should be able to feel the difference. The vowel /i/ is pronounced a little behind your top front teeth, while /u/ is pronounced towards the soft palate, also known as the velum. (The vowel /u/ is also pronounced with the lips rounded, which has the effect of enhancing the distinction between it and /i/.) And the vowel /a/ (roughly like “ah”, though not every dialect of English has that exact vowel sound) is pronounced with the tongue in the middle or towards the front of the mouth and with the mouth wide open. The International Phonetic Alphabet considers it a low front vowel, but it’s also sometimes treated as a low central vowel.
What an umlaut symbol does, then, is indicate that a vowel is produced further forward in the mouth (and sometimes also higher in the mouth) than normal. For example, ü is pronounced in the same place as /i/, but it retains the lip rounding of /u/. Try saying /i/ or /ɪ/ (“ee” or “ih”) with your lips rounded, and voilà: you just made the sound of the German ü. An ö, by contrast, is like an /e/ or an /ɛ/ (roughly an “ay” or an “eh”) with lips rounded, while an ä is raised to an /e/ or an /ɛ/. (The vowel /a/ doesn’t have any lip rounding, so neither does the umlauted version.)
The term umlaut, which comes from a German word roughly meaning ‘sound change’, is also used in Germanic linguistics to refer to certain kinds of vowel changes, especially when a vowel moves closer to /i/. Sometimes, when a back or central vowel is followed by a front vowel, we start moving our tongue forward a little early in anticipation of that front vowel. In other words, the frontness of one vowel can spread backwards through the word to the preceding vowel.
English doesn’t use the umlaut mark, but it’s full of words that were produced by the phonological process of umlaut. Plurals like men, geese, feet, and mice were formed by umlaut. In Proto-Germanic, an ancestor of English that was spoken between about 500 BC and the first few centuries AD, the singular form of the word for ‘man’ was mann, and the plural was manniz. That /i/ vowel in the suffix eventually pulled the /a/ up and forward to /ɛ/, yielding the word men in English. At some point the suffix dropped away entirely, leaving only the changed vowel in the stem as evidence that it was there. In the case of geese, feet, and mice, the umlauted vowels also lost their rounding after they moved forward.
Umlaut also shows up in English in some less expected places. Have you ever wondered why words like busy and bury aren’t spelled like they’re pronounced? It’s because those words evolved in different ways in different Old English dialects. In some dialects, the first vowel umlauted and then lost its rounding, ultimately yielding an /ɪ/ or an /ɛ/. But in other dialects, the vowel didn’t undergo umlaut, retaining the original /u/. At some point the two forms mashed up, and we got the spelling of one dialect and the pronunciation of another. The weird alternations in words like bring/brought and teach/taught are also the product of umlaut, with a couple other phonological changes thrown in for good measure.
So if the phonological process of umlaut is common to English, German, and other Germanic languages, why does German use the umlaut character but not English? It’s simply because the writing systems of each language developed separately after many of those sound changes had happened. For example, the modern German word schön was written schoene in Middle High German (around 1050 to 1350 AD). That final -e on the end, which has since dropped off, caused the o to become umlauted. But then, to make it clear that the o was pronounced with an umlaut, people started writing another e after the o too. Then they started writing that e above the o rather than after it to show that it was affecting the vowel but wasn’t really pronounced, and eventually this superscript e simplified to two short vertical strokes or two dots.† And thus the confusion between diaereses and umlauts was born.
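Incidentally, the confusion between the two marks carries right into the digital age: as far as I can tell, Unicode encodes both with one and the same character, U+0308 COMBINING DIAERESIS, whether it’s doing diaeresis duty or umlaut duty. A quick sketch in Python (standard library only) shows this:

```python
import unicodedata

# "coöperate" (a diaeresis) and "schön" (an umlaut) decompose to the
# same combining mark, even though the marks have different histories.
for word in ["coöperate", "schön"]:
    decomposed = unicodedata.normalize("NFD", word)  # split base letters from marks
    marks = [c for c in decomposed if unicodedata.combining(c)]
    print(word, [unicodedata.name(m) for m in marks])
# Both words print ['COMBINING DIAERESIS'].
```

Even the precomposed letters are named this way: ö is officially LATIN SMALL LETTER O WITH DIAERESIS, no matter which mark the writer meant.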
So there you have it: The diaeresis is originally a Greek thing that indicates that two adjacent vowels are pronounced separately. In English, you’ll mostly see it in a few French borrowings or in the pages of the New Yorker. And the umlaut is originally a German thing, though it also represents a phonological process found in English and other languages. There aren’t a lot of German borrowings in English that use umlauts, so you mostly see them in the names of bands that are trying to look a little more metal.
Nöw yöu knöw.
* The New Yorker’s style inspired one of my favorite style-related tweets, from the inimitable Benjamin Dreyer.
† The tilde and cedilla were formed in similar ways. A tilde was originally just a superscript n, while a cedilla was a subscript z. The history of the latter is even right there in its name: a cedilla is a little ceda, an Old Spanish form of zeta.
A little while ago, one of my coworkers came to me with a conundrum. She had come across a sentence like “Ryan founded the company with his brother Scott” in something she was editing, and she couldn’t figure out if “brother” should be followed by a comma. She’d already spent quite a bit of time trying to answer the question, but she was coming up empty-handed.
The problem? She didn’t know how many brothers Ryan had.
If you’re a little baffled by the relationship between commas and how many brothers someone has, you’ve probably never heard of restrictive and nonrestrictive appositives. An appositive is a word or phrase that follows another and renames it or provides additional information about it. In this case, the name “Scott” is an appositive for “brother”; it tells you more about the brother’s identity.
Sometimes an appositive provides information that you need in order to understand the sentence, but sometimes it just provides information that’s helpful but not strictly necessary. The Chicago Manual of Style gives these two examples in section 5.23 (the appositives are “the poet” and “Robert Burns”):
Robert Burns, the poet, wrote many songs about women named Mary.
The poet Robert Burns wrote many songs about women named Mary.
In the first sentence, “the poet” simply provides extra information about Robert Burns, and it could be deleted without affecting the meaning of the sentence. But in the second, “Robert Burns” is necessary. If you cut it out, you wouldn’t know who “the poet” referred to. The former kind of appositive is often called nonrestrictive, while the latter is called restrictive. The second appositive restricts the reference of “the poet” to Robert Burns—that is, it specifies which poet we’re talking about. The first one doesn’t do that, so it’s called nonrestrictive.
The general rule, as it’s presented in The Chicago Manual of Style and elsewhere, is that if there’s more than one thing that the noun could refer to, then the appositive should be restrictive. That is, the appositive needs to specify which of the possible things we’re talking about. If there’s only one thing to which the appositive might refer, then it’s nonrestrictive.
For example, there’s been more than one poet in the history of the earth, so we need a restrictive appositive to tell us that the one in question is Robert Burns. Therefore, going back to my coworker’s problem, if Ryan has more than one brother, then his brother’s name should be restrictive to tell us which of his several brothers we’re talking about, but if he has only one brother, then it should be a nonrestrictive appositive (because there’s only one person that “his brother” could refer to, so the name is just extra information). For this reason, in his book Dreyer’s English, Benjamin Dreyer calls the comma before a nonrestrictive appositive the “only” comma. That is, a comma before “Scott” would tell you that he’s Ryan’s only brother. (Though if “Scott” appears in the middle of a sentence, as in “Ryan and his brother, Scott, founded a company”, then you would need commas on both sides of the appositive to set it off.)
The problem is that this forces editors to waste time doing genealogy work when we really should just be editing. My coworker had already spent who knows how long trying to figure out how many brothers Ryan had, but she couldn’t find anything definitive. So should she put in a comma or not?
I gave her a controversial opinion: I would leave the comma out, because it simply doesn’t matter how many brothers Ryan has. If it were relevant, why wouldn’t the writer have made it more explicit, as in “Ryan founded the company with his only brother, Scott”?
I’m not sure what my coworker ended up doing, but she didn’t seem happy with my heretical opinion on commas. Afterwards, I took to Twitter to voice my opinion that worrying about these commas is a waste of time. The ensuing discussion prompted a friend and fellow editor, Iva Cheung, to make a cartoon, which she dedicated to me.
It may indeed sound ridiculous, but my coworker is far from the only editor or writer to have grappled with this problem. In a New Yorker piece on the magazine’s famously assiduous fact-checking, John McPhee writes about a similar dilemma. In a book draft, he had written, “Penn’s daughter Margaret fished in the Delaware.” But was that right? He writes, “Should there be commas around Margaret or no commas around Margaret? The presence or absence of commas would, in effect, say whether Penn had one daughter or more than one. The commas—there or missing there—were not just commas; they were facts.”
But as Jan Freeman, a former copy editor, asked in a column for the Boston Globe, “Were they important facts?” She continues, “How much time should you spend finding the answer—commas or no commas—to a question nobody’s asking?”
That is, is any reader asking how many daughters William Penn had or how many brothers Ryan had? Or, to be more specific, is anyone thinking, “I wonder if the number of brothers Ryan has is exactly equal to one or is some unspecified number greater than one”? And even if they are, are they expecting that information to be communicated via a comma or the lack thereof? I suspected that most people who aren’t editors aren’t reading as much into those commas as we think we’re putting into them, so I turned to Facebook to ask my friends and family members. The results were pretty surprising.
I provided the following sentences and asked what people thought the difference was:
Frank and his brother Steve started a company.
Frank and his brother, Steve, started a company.
Some people said that you use the first sentence if the reader doesn’t know Steve and the second one if they do. Some people said that the latter was always correct and that the former is incorrect or at least more casual. But someone else said that the first sentence looked correct and that the second looked overpunctuated. Another person said that the second sentence gives more emphasis to Frank’s brother. Someone else said that the second implied that the name of Frank’s brother was being provided for the first time and possibly that it’s his only brother, while the first implied that we already know the name of Frank’s brother. But someone else said that she’d use commas if she went into business with one of her brothers, but she’d use no commas if she went into business with her one and only husband. A couple of people said that they thought the issue had to do with whether or not the information in the appositive was needed as a qualifier—that is, whether the sentence makes sense without it. Someone else thought that you don’t need commas if the appositive is short but that you do if it’s longer. Another commenter said that the rule probably varied from one style guide to another. But a few people said they’d read no difference between the two, and one friend responded simply with a gif.
Out of more than two dozen respondents, only a few answered with the editorially sanctioned explanation: that the first implies that Frank has multiple brothers, while the second implies that he has only one. One person posted this comment: “If a writer wants to convey that Frank has one brother or more, this is an awful way of sneaking in that information. If the information is irrelevant, then I think most readers will not notice the presence or absence of a comma, or conclude anything on that basis, and that’s just fine.”
I think that there are two connected issues here: what the comma means and whether it’s important to communicate that an appositive is the only thing in its class or one of multiple things in its class. And both of them are essentially questions of pragmatics.
Most people think of meaning as something that is simply inherent in words (or punctuation marks) themselves. Put in a comma, and the sentence means one thing. Leave it out, and it means something else. But meaning is a lot messier than this. It depends a lot on what the speaker or writer intends and on how the listener or reader receives it.
In other words, there are really three aspects to meaning: the basic meaning of the utterance itself, known as the locution; the intent of the writer or speaker, known as the illocution; and the way in which the listener or reader interprets the message, known as the perlocution. That is, meaning isn’t found only in the utterance itself; it’s found in the entire exchange between writer and reader.
As I explained in a previous post, sometimes there’s a mismatch between the intended meaning and the form of the utterance itself. For example, if I ask, “Do you know what time it is?”, I’m not literally just checking to see if you have knowledge of the time. I’m asking you to tell me the time, but I’m doing it in a slightly indirect way, because sometimes that’s more polite—maybe I don’t know if you have a watch or phone handy, so I don’t want to presume. In this case, we could say that the illocution (my intent) is “Tell me the time”, even though the locution itself is literally just asking if you know the time, not asking you to tell me the time. Even though my utterance has the form of a yes-or-no question, you’d probably only answer “Yes, I know what time it is” if you were trying to be a smart alec. But people are usually pretty good at reading each other’s intent, so the perlocution—the message you receive—is “Jonathon wants me to tell him the time.”
The comma example is supposedly straightforward. If the writer or editor intends for a comma to indicate that Ryan has only one brother, and if it’s an established convention that that comma indicates that the thing that comes after it is the only thing that the preceding noun could refer to, and if the reader gleans from that comma that Ryan has only one brother, then everything works just as it’s supposed to. But if, for example, the writer intends to communicate that someone has only one spouse but they leave out the comma, then sometimes smart-alecky readers or editors ignore the writer’s obvious intent and insist on an incorrect reading based on the absence of the comma. That is, they ignore the obvious illocution and deliberately misread the text based on a convention that may not be shared by everyone. They’re essentially pretending that meaning comes only from the locution and not from the writer’s intent.
For instance, I remember one time in my basic copyediting course in college when my professor pointed out a book dedication that read something like “To my wife Mary”. She said that the lack of a comma clearly means that the author is a polygamist. I think I was the only one in the class who didn’t laugh at the joke. I just thought it was stupid, because obviously we know that the author isn’t a polygamist. First off, polygamy isn’t legal in the US, so it’s a pretty safe assumption that the author has only one wife. Second, if he had really meant to dedicate the book to one of his multiple wives, he probably would have written something like “To my third wife, Mary”. Pretending to misunderstand someone based on a rule that most readers don’t even know just makes you look like a jerk.
And, judging from the responses I got on Facebook, it appears that most readers are indeed unfamiliar with the rule. Many of them don’t know what the comma is supposed to mean or even that it’s supposed to mean something. Whether the comma has no inherent meaning or has an unclear meaning, there’s a problem with the locution itself. The “only” comma simply isn’t an established convention for most readers.
But there’s a problem with the illocution too, and here’s where the other question of pragmatics comes into play. Conversation—even if it’s just the sort of one-way conversation that happens between a writer and a hypothetical reader—is generally guided by what linguists call the cooperative principle. And part of this principle is the idea that our contribution to the conversation will be relevant and will be communicated in an understandable manner.
As one of my commenters said, “If a writer wants to convey that Frank has one brother or more, this is an awful way of sneaking in that information.” So we end up with two pragmatic problems: editors are inserting irrelevant information into the text, but readers don’t even pick up on that information because they’re unaware of the convention or don’t anticipate what the editor is trying to communicate. Even when they try to guess the editor’s intent (because it’s almost always the editor putting in or taking out the comma, not the writer), they often guess wrong, because it’s not obvious why someone would be trying to sneak in information like “Ryan has only one brother” in this manner. In effect, the two problems cancel out, and all we’ve done is waste time and possibly annoy our writers and waste their time as well.
And because so few of our readers understand the purpose of the “only” comma, I think it falls firmly into what John McIntyre calls “dog-whistle editing”, which he defines as “attention to distinctions of usage”—or, in this case, punctuation—“that only other copy editors can hear.”
And, as Jan Freeman showed in her Boston Globe column, there’s evidence that this rule is a relatively recent invention. No wonder readers don’t know what the “only” comma means—it’s a convention that editors just made up. And, for the record, I’m not saying that the whole restrictive/nonrestrictive distinction is bunk, but I do think that the “only” comma is the result of an overly literal interpretation of that distinction. (But I’ll save the exploration of the rule’s origins for a future post.)
For now, I think that the solution, as I told my coworker, is to just stop worrying about it. It almost never matters whether someone is someone else’s only brother or daughter or friend or whether a book is someone’s only book, and it’s certainly not worth the time we spend trying to track down that information. Editing is fundamentally about helping the writer communicate with the reader, and I don’t think this rule serves that purpose. Let’s put the dog whistle away and worry about things that actually matter.
A lot of people dislike it when nouns like task and dialogue are turned into verbs, but this process has been a normal part of English for centuries. In my latest piece for Grammar Girl, I explain why we should all relax a little about verbing nouns.
Today and tomorrow only, you can get 20 percent off T-shirts and other items at the Arrant Pedantry Store. Just use the code ANYTHING20 at checkout. And remember that you can customize the design color and even put the designs on other items, including mugs and phone cases. Just hit the pencil icon below the item and then pick the product you want.
I’ve been thinking a lot about style guides lately, and I decided that what the world really needs right now is the definitive style guide alignment chart. I posted a version on Twitter the other day, but I wanted to do a slightly expanded version here. (Quotes are taken from easydamus.com.)
Lawful Good: The Chicago Manual of Style
A lawful good character “combines a commitment to oppose evil with the discipline to fight relentlessly.” And boy howdy, is Chicago relentless—the thing is over 1,100 pages! Even if you use it every day in your job as an editor, there are probably entire chapters that you’ve never looked at. But it’s there with its recommendations just in case.
Neutral Good: The MLA Handbook
“A neutral good character does the best that a good person can do.” Look, the MLA Handbook certainly tries to do what’s right, even if it can’t make up its mind sometimes. Remember when it said you should specify whether a source was print or web, as if that wasn’t obvious from context, and then it took that rule out in the next edition? Enough said.
Chaotic Good: The Buzzfeed Style Guide
“A chaotic good character acts as his conscience directs him with little regard for what others expect of him.” Buzzfeed style is guided by a strong moral compass but doesn’t feel beholden to a lot of traditional rules. It has great entries on gender, race, and disability and would probably recommend singular “they” in that last sentence. It also has entries on celebricat (a celebrity cat), dadbod, and milkshake duck, because that’s the internet for you.
Lawful Neutral: The Elements of Style
“A lawful neutral character acts as law, tradition, or a personal code directs her.” The Elements of Style, a.k.a. Strunk & White, certainly upholds a lot of laws and traditions. Are they good laws? Look, I don’t see how that’s relevant. The point is that if you follow its diktats by omitting needless words and going which hunting, your writing will supposedly be just like E. B. White’s.
True Neutral: The Wikipedia Style Guide
A true neutral character “doesn’t feel strongly one way or the other when it comes to good vs. evil or law vs. chaos.” Wikipedia doesn’t care for your edit wars. There are lots of acceptable style choices, whether you prefer American or British English. Just pick a style and stick with it.
Chaotic Neutral: Wired Style
A chaotic neutral character “avoids authority, resents restrictions, and challenges traditions.” Wired Style has a chapter called “Be Elite” and another called “Screw the Rules.” The first edition is also printed on Day-Glo yellow paper, because screw your eyes too. It also has a chapter called “Anticipate the Future” but probably didn’t anticipate that it would go out of print twenty years ago.
Lawful Evil: The New Yorker
A lawful evil character “plays by the rules but without mercy or compassion.” The New Yorker uses jarring diereses to prevent misreading of words that no one has trouble reading, and it doubles consonants in words like focussed because it said so, that’s why. It also unnecessarily sets off certain phrases with commas based on a hyperliteral idea of what restrictive and nonrestrictive mean. Tell me that’s not mercilessly evil.
Neutral Evil: The Associated Press Stylebook
“A neutral evil villain does whatever she can get away with.” The AP Stylebook used to say that two things couldn’t collide unless they were both in motion, and it also used to recommend against not only split infinitives but also adverbs placed in the middle of verb phrases, which is the normal place to put them. They only abandoned those rules when John McIntyre finally called them on that BS.
Chaotic Evil: Publication Manual of the American Psychological Association
A chaotic evil character is “arbitrarily violent” and “unpredictable.” Have you ever seen APA-style references? Some titles are in title case, while others are in sentence case. And, for reasons I can’t understand, volume numbers are italicized but issue numbers aren’t, even though there’s no space between them. “Arbitrarily violent” is the best description of that mess that I’ve seen.
Naturally, there will be some disagreement over the placement of some entries. I’ve also had a lot of calls to include Bluebook, with most people wanting to put it somewhere on the evil axis, while others have wanted to include The Yahoo! Style Guide, The Microsoft Manual of Style, or AMA Manual of Style. I’ve decided that I’m probably going to have to do a yearly update to add new entries or move some to more fitting spots. In the meantime, if you’ve got opinions—and I’m sure you do—feel free to chime in below.
Regular readers of this blog have probably noticed that my name has a slightly unusual spelling: it’s Jonathon rather than Jonathan. If you’ve ever been tempted to joke that my parents spelled my name wrong, please don’t. I’ve been hearing that joke for over thirty years now, and I can promise you that it wasn’t funny even the first time.
But in a way the jokers are right. I’m named after the Old Testament figure (the son of Saul and friend of David), whose name is usually rendered Jonathan in English translations of the Bible. My parents thought the -on form was the usual spelling, so that’s what they put on my birth certificate. But I happen to like the spelling of my name, and, anyway, it’s a legitimate variant. The NameVoyager on Baby Name Wizard shows that it’s been around since at least the 1940s or ’50s, though it’s never rivaled Jonathan in popularity. I’ve been asked if the unusual spelling of my name helped propel me to become an editor because I had to pay extra attention to the spelling, but I don’t think it’s true. It makes a nice story, though.
However, my name does serve as sort of a miniature editing test for those times when I’m hiring editorial interns. I’m usually pretty generous with who I invite to come take our editing test, but applicants who address their emails to Jonathan Owens never seem to do as well on it. If you’re applying to an editing job, you’d do well to make sure you spell the hiring manager’s name right.
But I’ve long since resigned myself to the fact that most people won’t spell it right without help. I don’t usually bother to spell it for people in situations where it doesn’t matter, like when someone is taking my order at a fast-food place and they just need to get it close enough that they can call out my name correctly. (Though I appreciate when they ask how to spell it anyway.)
Occasionally I’ll get it spelled right, but more often I get Jonathan or Johnathan or Johnathin or some other weird spelling that makes me wonder if the person writing it has ever seen the name before. For years the weirdest version I’d ever gotten was Jhonathen, but just a couple of months ago I got a receipt that said Jouhathine. I’m not sure that one will ever be topped.
But the one thing that I can’t stand is people automatically shortening my name to Jon. Though, in all honesty, sometimes it’s just as annoying when they ask if they can shorten it. On a couple of occasions I’ve had conversations like this:
Arby’s cashier: Can I get a name?
Me: Jonathon.
Arby’s cashier: Can I put John? I don’t want to butcher it.
Me, mentally: You kind of just did.
It’s annoying enough when I give my name to the cashier at Arby’s as Jonathon and they put Jon or John* on my receipt, but it really grates when I introduce myself to someone as Jonathon and they immediately call me Jon. You’d be surprised how often I’ve had exchanges that go like this:
Them: What’s your name?
Me: Jonathon.
Them: Jon? Nice to meet you.
Did I not enunciate well enough? Was their attention span so short that they could only manage to catch the first syllable? Do they just assume that anybody with a name as long as mine—three whole syllables!—naturally prefers a short form, even though I didn’t give them one? And then I always feel like a jerk for correcting them, even though I shouldn’t have to. (Side note: There was a lot of gratuitous backstorification in Solo: A Star Wars Story, but the part that annoyed me the most was when Han learns Chewbacca’s name and then decides to call him Chewie—without asking if he was okay with it!—because Chewbacca is just too long.)
The funny thing is that I tried to go by Jon once when I was a kid, and it didn’t go well. We had moved to Utah during the summer and were living with my grandma while we saved for a house. On the first day of second grade in my new school, my teacher asked if I preferred Jon or Jonathon. On a whim, I said Jon, so that’s what everyone called me. The only problem is that I wasn’t used to going by Jon—my family only ever called me Jonathon—so when people said my name, it always took me a second to realize that they were talking to me. But by then it was too late to do anything about it. I felt too embarrassed to announce to the class that, on second thought, I preferred Jonathon after all.
Thankfully, we moved into our own place just a few weeks into the school year, so I was able to start over at a new school, once again as Jonathon.
And that’s how I’ve remained ever since. Maybe you’re dying to point out that it looks like a misspelling to you, or you might be itching to ditch those extra syllables and just call me Jon, but please refrain. I’m happy with my name just how it is.
* You may be surprised to learn that the names Jonathan and John are unrelated. Jonathan comes from the Hebrew יְהוֹנָתָן (Yehonatan) or יוֹנָתָן (Yonatan), meaning ‘Jehovah has given’. John, on the other hand, comes from the Hebrew יוֹחָנָן (Yochanan), meaning ‘God is gracious’. But because of their similar forms, people conflate Jon and John and then start spelling Jonathan like Johnathan.
A recent discussion on Twitter about whether the line “I’m gonna have to science the shit out of this” was in Andy Weir’s book The Martian or was only found in the movie reminded me of one of my favorite facts: science and shit are related. So let’s science the shit out of this etymology.
It all starts (as so many of these things do) with Proto-Indo-European. The root *skey meant ‘to cut, split, separate’. The extended form *skeyd became scit in Old English. The sc sequence was originally pronounced /sk/ in Old English and other Germanic languages, but in Old English it eventually shifted to /ʃ/ (the “sh” sound). The sh spelling came later under the influence of French scribes. But despite those minor spelling changes, the word has remained virtually unchanged for over a thousand years. You could travel back to Anglo-Saxon times, and they would understand you if you said shit.
So how did a root meaning ‘to cut, split, separate’ come to mean ‘feces’? From the notion of separating it from your body. The same metaphor is found in the Latin excrementum, which employs the unrelated root meaning ‘to sift, separate’.
This means that shit probably started out as a euphemism. Speakers of Proto-Indo-European or Proto-Germanic may have talked about needing to go separate something rather than use a more unsavory term. In English, shit was fairly neutral for a long while and apparently didn’t become taboo until around 1600, at which point it mostly disappeared from print. It isn’t found in Shakespeare’s plays or in the King James Bible.
Euphemisms often become sullied by the connotations of the thing they’re euphemizing, which leads to the need for new euphemisms, a process sometimes called the euphemism treadmill. So even if shit started life as a polite way to talk about defecation, it eventually became a rather crude one.
(By the way, the “ship high in transit” etymology is pure . . . well, you know. Kory Stamper’s excellent book Word by Word covers this and other bogus acronymic etymologies in more detail.)
In Latin, the PIE root *skey gave rise to the verb scire ‘to know, to understand’. It probably developed from ‘separate’ to ‘distinguish’ or ‘discern’ (that is, ‘tell things apart’) and then to the more general sense of ‘know’.
A noun form of the present participle of scire, scientia, originally meant the state of knowing—that is, ‘knowledge’. Scientia became science in French, which was then borrowed into English. In English it came to mean not just knowledge but the body of knowledge or the process of gaining new knowledge through the scientific method.
The Latin scire gives us a whole bunch of other words too, including conscience (from conscire ‘to know well, to be aware, to have on one’s conscience’), conscious (also from conscire), prescient (‘knowing beforehand’), and nescient (‘not knowing, ignorant’). A related form, nescius, is also, surprisingly, the origin of nice, which is a great example of just how much meanings can change over time. Though it originally meant ‘ignorant’, it shifted through ‘foolish’ to ‘lascivious, wanton’ to ‘showy, ostentatious’ to ‘refined’ and then ‘well mannered’ or ‘kind’. The Oxford English Dictionary records many more obsolete senses. A different descendant of *skey yielded the Latin scandula, which later became scindula and was then borrowed into English, where it became shincle and then shingle (from the notion of splitting off a thin piece of wood).
In Ancient Greek, the root *skey yielded schism (meaning a division between people, often in a religious organization) and schizo-, as in schizophrenia (literally ‘a splitting of the mind’).
Back in English, *skey also yielded shed (meaning ‘to cast off’, as in shedding skin, but not the shed meaning a storage building). It probably also gave us sheath (from the notion of a split piece of wood in which a sword is inserted). The Online Etymology Dictionary says it also gives us shin (from the sense of ‘thin piece’, though that’s a little opaque to me). And it’s the source of the word share, from the notion of dividing what you have with someone else. It also gives us shiver (in the sense of a small chip or fragment of wood), which still appears as a dialectal word for ‘splinter’.
In Old Norse, *skey yielded skið, also meaning ‘piece of wood’, which eventually gave us the word ski.
And *skey appears to be a variant of another root, *sek, meaning ‘to cut’, which gives us a whole host of other words like section and segment and saw, but I should probably cut this post off somewhere and save some things for another day.
If you’re like me and are still trying to get back into the swing of things after a nice holiday break, you might be having a little trouble focusing on work. You might even be suffering from a mild case of ergophobia, or the fear of work. So here’s some etymology to distract you.
Work comes from the Proto-Germanic *werkam, which in turn comes from the Proto-Indo-European *wérǵom, ultimately from the root *werǵ ‘to make’. In Ancient Greek, *wérǵom gave rise to ergon, which gives us energy, from the prefix en- ‘at’ + erg ‘work’ (‘at work, active’), as well as terms like ergonomics and ergative (and, yes, ergophobia). It also apparently gives us the name George, a name meaning ‘farmer’ or ‘husbandman’, which comes from ge ‘earth’ + ergon ‘work’, literally ‘earth worker’.
Forms of ergon also gave us surgery (from earlier chirurgerie, from the Greek kheir ‘hand’ + ergon ‘work’), metallurgy (‘metal work’), liturgy (‘public work’ or ‘public worship’), thaumaturge (‘wonder worker’), dramaturge (‘drama worker’), demiurge (‘public worker’, from a different root meaning ‘public’ than the one in liturgy), argon (from the prefix a- ‘not’ + ergon ‘work’, because argon is inert), lethargy (from leth ‘to forget’ + argos ‘not working, idle’), allergy (‘other working’), and synergy (‘working together’).
A variant of the PIE *werǵ, *worg, also produced the Ancient Greek organon, meaning ‘instrument’ or ‘tool’, which eventually made its way into English as organ (meaning the musical instrument, the body parts, and other senses). From this we also get the verb organize, which originally meant ‘to put in working order’, as well as other derived forms like organic and organism.
It also gave us orgy, which originally meant ‘secret rites’, probably from the sense of some kind of work performed for one’s gods. The Online Etymology Dictionary says: “OED says of the ancient rites that they were ‘celebrated with extravagant dancing, singing, drinking, etc.,’ which gives ‘etc.’ quite a workout.” (This root did not, however, give us the word orgasm.)
The Proto-Indo-European *wérǵom also yielded the Germanic bulwark (literally ‘bole work’ or ‘tree work’), which originally meant a defensive wall made of logs. This word was borrowed into English either from Middle Dutch or from Middle High German. It was also borrowed into French and became boulevard, with an anomalous change from /k/ to /d/ at the end. It eventually came to mean a tree-lined street and was then borrowed back into English.
And, of course, it also yields the English wright, meaning ‘worker’ or ‘maker’, and the archaic wrought, which is an old past-tense form of work and not a past-tense form of wreak as some mistakenly believe.
So that one little root from Proto-Indo-European has been pretty productive. I should probably try to be too.