Arrant Pedantry


Historic, Historical

My brother recently asked me how to use pairs of words like historic/historical, mathematic/mathematical, and problematic/problematical. The typical usage advice is pretty straightforward—use historic to refer to important things from history and historical to refer to anything having to do with past events, important or not—but the reality of usage is a lot more complicated.

According to the Oxford English Dictionary, historic was first used as an adjective meaning “related to history; concerned with past events” in 1594. In 1610, it appeared in the sense “belonging to, constituting, or of the nature of history; in accordance with history”; sometimes this was contrasted with prehistoric. It wasn’t until 1756 that it was first used to mean “having or likely to have great historical importance or fame; having a significance due to connection with historical events.” The first edition of the OED called this “the prevailing current sense”, but the current edition notes that the other senses are still common.

The history of historical isn’t much clearer. It first appeared as an adjective meaning “belonging to, constituting, or of the nature of history; in accordance with history” (sense 2 of historic) in 1425, though there aren’t any more citations in this sense until the mid- to late 1500s, about the same time that historic began to be used in this sense. Also in 1425, it appeared in the sense of a work that is based on or depicts events from history, though this sense also didn’t appear again until the late 1500s. In the broader sense “related to history; concerned with past events”, it appeared in 1521, several decades before historic appeared in this sense.

In other words, both of these words have been used in essentially all of these senses for all of their history, and they both appeared around the same time. It’s not as if one clearly came first in one sense and the other clearly came first in the other sense. There is no innate distinction between the two words, though a distinction has begun to emerge over the last century or so of use.

Other such pairs are not much clearer. The OED gives several senses for mathematical beginning in 1425, and for mathematic it simply says, “= mathematical adj. (in various senses)”, with citations beginning in 1402. Apparently they are interchangeable. Problematic/problematical seem to be interchangeable as well, though problematical is obsolete in the logic sense.

But rather than go through every single pair, I’ll just conclude with what Grammarist says on the topic:

There is no rule or consistent pattern governing the formation of adjectives ending in -ic and -ical. . . .

When you’re in doubt about which form is preferred or whether an -ic/-ical word pair has differentiated, the only way to know for sure is to check a dictionary or other reference source.


The Atlantic Is Wrong about Dog Pants

While on my Christmas vacation, I came across this article in the Atlantic on the question of what proper dog pants should look like: the image on the left, or the image on the right.


The image originally came from a Facebook page called Utopian Raspberry—Modern Oasis Machine (UR-MOM), and from there it hit Twitter and various other sites. One Twitter poll found that over 80 percent of respondents favored the two-legged version.

But Robinson Meyer at the Atlantic insisted that these people were all wrong, and he’d prove it with linguistics. This is certainly a laudable goal, but his argument quickly goes off the rails. After insisting that “words mean things”, as if that were in dispute, Meyer asserts that pants cover all your legs. Humans have two legs, so our pants have two legs. Dogs have four legs, so dog pants should have four legs. QED. What’s left to discuss?

Well, a lot. Even though it’s clear that words mean things, it’s a lot less clear what words mean and how we know what they mean. Semantics is a notoriously tricky field, and there are a lot of competing theories of semantics, each with its own set of problems. There are truth-conditional theories, conceptual theories, Platonist theories, structuralist theories, and more.

But rather than get bogged down in theoretical approaches to semantics (which, frankly, were never my strong suit), let’s take a more practical approach to answering that fundamental question What are pants? by thinking about what features make something pants or not pants. We’ll ignore the question of dog pants in particular for the moment and focus on people pants.

Meyer says that pants cover all your legs, but we could also say that pants cover both your legs or maybe just that they cover your two hind limbs, and we would still wind up at the same place—two-legged pants. It’s not obviously true that pants must cover all your legs. In fact, it’s rather strange to say that pants cover all your legs, because that phrasing seems to assume that you might have more than two. You’d only phrase the definition this way if you anticipated that it might have to cover four-legged dog pants, which is begging the question. The more parsimonious definition would say that you only have to cover both legs.

Meyer’s definition is also missing one important part: the butt. Pants don’t just cover your legs—they cover your body from the waist down. Depending on where they stop, you might call them shorts, Capris, pedal pushers, or just pants. But if they don’t cover your butt, they’re not pants—they’re hose or leggings or some such, though nowadays these usually go up to the waist too. (I’m using the term waist a little loosely, since it’s technically the point halfway between the hips and the ribs, and most pants sit somewhere around the hips.) So at least when it comes to humans, pants cover most or all of the pelvic area plus at least some of the legs.

Note that underpants don’t necessarily cover any of the legs, and some shorts cover barely any of the legs at all. So if you want a definition that covers underwear too, covering the butt is actually more crucial than covering the legs. Even if we assume for the sake of discussion that we’re not including underwear, simply covering the legs is not sufficient. And covering all legs, regardless of number, is obviously not necessary. We could even say that pants cover the lower or rear part of the body, starting at the hips and ending somewhere below the butt, with separate parts for each leg (to differentiate pants from skirts or dresses).

Now let’s move on to dog pants. As far as I know, the word waist isn’t usually applied to animals other than humans, though dogs still have hips and ribs. So applying the minimum definition of covering the pelvic region and at least part of the two hind limbs, the correct version is clearly the one on the right. Strangely, Meyer says that we already have a term for the image on the right, and it’s shorts, because shorts cover only some of your legs. But this is playing really fast and loose with the definition of some. Shorts cover some part of each leg, not all of some leg. And anyway, shorts are simply a subset of pants, so if the image on the right is shorts, then it’s also pants.

The one on the left covers not just the legs but also the entire ventral side of the torso, which pants don’t normally do. Even overalls cover only part of the front of the torso, and they don’t cover the forelimbs. The closest term we have for something like the image on the left is jumpsuit, but it’s a backless, buttless jumpsuit. The image on the left makes sense as pants only if you’re fixated on covering all legs rather than just two and don’t mind omitting a necessary feature of pants while adding some unnecessary ones. Not only that, it’s not a very practical garment—as Jay Hathaway at New York Magazine points out, it wouldn’t even stay up unless you have some sort of suspenders going side to side over the back.

And this points out the real flaw in Meyer’s argument. He says that humans wear two-legged pants because we have two legs, but this isn’t really true. We probably wouldn’t wear four-legged pants if we had four legs, because it doesn’t make sense to design an article of clothing like that. Consider the fact that we don’t design clothes differently for babies just because they crawl on all fours.

Pants have nothing to do with which limbs we stand on and everything to do with how we’re shaped. We wear one article of clothing to cover the top halves of our bodies and another to cover the bottom halves because it’s easy to pull one article over the top and one over the bottom. Dogs aren’t shaped that differently from us, so when we make clothes for dogs, we make them the same way. Pants just happen to cover two legs on people because our two hind limbs just happen to be legs.

Besides, the whole question is moot because dog pants already exist, and they’re of the two-legged variety. What should we call them if not pants? Insisting that they’re not pants comes as a result of getting hung up on a supposed technical definition and then clinging to that technical definition in the face of all good sense.

And consider this: if dog pants have four legs and no back or butt, what would a dog shirt look like?


How to Use Quotation Marks

In my work as a copyeditor, one of the most common style errors I see is the overuse of quotation marks. Of course quotation marks should be used to set off quotations, but some writers have a rather expansive notion of what quotation marks should be used for, sprinkling them liberally throughout a document on all kinds of words that aren’t quotations. In the editing world, these are known as scare quotes, and some days it seems like I need a machete to hack through them all.

On one such day, I decided to channel my frustration into a snarky flowchart, which I posted on Twitter. It was apparently a hit, and I thought it might be helpful to expand it into a post.


For the most part, quotation marks are pretty straightforward: they’re used to signal that the text within them is a quote. There are some gray areas, though, that cause an awful lot of consternation. Sometimes the rules vary according to what style guide you follow.

Direct Quotations

This rule is the most clear-cut: use quotation marks for direct quotations, whether the original was spoken or written. Indirect quotations or paraphrases should not be put in quotation marks.

Titles of Works

The second box (which I didn’t think to include in the chart that I posted on Twitter) asks whether you’re referring to the title of a short work. But what exactly is a short work? Here’s what The Chicago Manual of Style says:

Chicago prefers italics to set off the titles of major or freestanding works such as books, journals, movies, and paintings. This practice extends to cover the names of ships and other craft, species names, and legal cases. Quotation marks are usually reserved for the titles of subsections of larger works—including chapter and article titles and the titles of poems in a collection. Some titles—for example, of a book series or a website, under which any number of works or documents may be collected—are neither italicized nor placed in quotation marks.

The MLA and APA style guides give similar rules. So if the title of the work is part of a larger work (such as a song in an album or an article in a magazine), then it goes in quotation marks. Most other titles get italicized. However, there’s an exception in Chicago and MLA: titles of unpublished works (for example, speeches, manuscripts, or unpublished theses or dissertations) get quotation marks regardless of length. AP style, on the other hand, does not use italics—all titles are put in quotation marks. This comes from a limitation of news wire services, which could not transmit italic formatting.

Words Used as Words

This is a bit of a gray area. For words used as words—for example, “A lot of people hate the word moist”—Chicago says that you can use either italics or quotation marks, but italics are the traditional choice. However, it adds that quotation marks may be more appropriate when the word is an actual quotation or when it’s necessary to distinguish between a word and its translation or meaning. Chicago provides these examples:

The Spanish verbs ser and estar are both rendered by “to be.”
Many people say “I” even when “me” would be more correct.

Both APA and MLA prescribe italics for key terms and words used as words.

Scare Quotes

Most abuses of quotation marks fall under the broad, nebulous label of scare quotes. Many writers put terms in quotation marks to indicate that they’re nonstandard, colloquial, or slang or that the term is being used ironically or under some sort of duress. MLA allows the use of quotation marks for “a word or phrase given in someone else’s sense or in a special sense or purposefully misused” (postmodernists in particular seem to love scare quotes), but Chicago and APA discourage or limit their use.

APA says that you should use quotation marks for the first instance of a term “used as an ironic comment, as slang, or as an invented or coined expression” and leave them off thereafter. After describing their use, Chicago says that “like any such device, scare quotes lose their force and irritate readers if overused.”

But even allowing for limited use of scare quotes, I have a hard time seeing what’s ironic, slang, or special about the senses of the terms in scare quotes below. All of these came from a text I recently edited, and these examples are fairly representative of how many writers use scare quotes.

A note to “skim” a chapter
selections that don’t give a “whole picture”
additional dialogue “beyond the text.”
topics from the supplemental material are not “fair game”
a helpful “tool” for understanding

It’s hard to even make a generalization about what all these uses have in common. Some are a little colloquial (which is not the same thing as slang), some are idioms or other fixed expressions, and some are simply nonliteral. But what about “skim”? There’s nothing scare-quote-worthy about that. It’s just a normal word being used the normal way.

And even though major style guides allow for the use of scare quotes, it’s important to ask yourself if you really need them. Just because you can use them doesn’t mean you should. It’s usually clear from the context whether a word is being used ironically or in some special sense, and slang is similarly obvious. And along those lines, both MLA and Chicago say that you don’t need quotation marks when you introduce a term with the phrase so-called. (APA doesn’t say anything one way or the other.) That phrase does the work for you. Scare quotes are thus often a sort of belt-and-suspenders approach.


Scare quotes quickly shade into more emphatic uses, where the purpose is not to signal irony or special use but to simply draw attention to the word or phrase. But if you misuse scare quotes this way, not only do you risk irritating the reader, but you risk sending the wrong message altogether, as in this example spotted by Bill Walsh:

There’s an entire blog dedicated to such unintentionally ironic uses of quotation marks. They’ve even been mocked by no less than Strong Bad himself. But most importantly, if you’re writing for publication, no major style guides allow this sort of use. In short: don’t use quotation marks for emphasis.

Other Uses

Sometimes it’s really not clear what quotation marks are being used for. In this example from the “Blog” of “Unnecessary” Quotation Marks, how are the quotation marks being used? Literally? Ironically? Emphatically?

Whatever the intent may have been, it’s clear that they’re not needed here. They’re just adding visual clutter and distracting from the real message.


When it comes to uses beyond signaling direct quotations, you’ll probably want to refer to whatever style guide is appropriate in your field. But keep in mind that their other uses are limited outside of quotations and certain kinds of titles. Even though most style guides allow for some use of scare quotes, in my opinion as a writer and editor, it’s best to use them sparingly if they’re to be used at all. Keep the hand-holding to a minimum and let your words speak for themselves.


The Drunk Australian Accent Theory

Last week a story started making the rounds claiming that the Australian accent is the result of an “alcoholic slur” from heavy-drinking early settlers. Here’s the story from the Telegraph, which is where I first saw it. The story has already been debunked by David Crystal and others, but it’s still going strong.

The story was first published in the Age by Dean Frenkel, a lecturer in public speaking and communications at Victoria University. Frenkel says that the early settlers frequently got drunk together, and their drunken slur began to be passed down to the rising generations.

Frenkel also says that “the average Australian speaks to just two thirds capacity—with one third of our articulator muscles always sedentary as if lying on the couch”. As evidence, he lists these features of Australian English phonology: “Missing consonants can include missing ‘t’s (Impordant), ‘l’s (Austraya) and ‘s’s (yesh), while many of our vowels are lazily transformed into other vowels, especially ‘a’s to ‘e’s (stending) and ‘i’s (New South Wyles) and ‘i’s to ‘oi’s (noight).”

The first sentence makes it sound as if Frenkel has done extensive phonetic studies on Australians—after all, how else would you know what a person’s articulator muscles are doing?—but the claim is pretty far-fetched. One-third of the average Australian’s articulator muscles are always sedentary? Wouldn’t they be completely atrophied if they were always unused? That sounds less like an epidemic of laziness and more like a national health crisis. But the second sentence makes it clear that Frenkel doesn’t have the first clue when it comes to phonetics and phonology.

There’s no missing consonant in impordant—the [t] sound has simply been transformed into an alveolar flap, [ɾ], which also happens in some parts of the US. This is a process of lenition, in which sounds become more vowel-like, but it doesn’t necessarily correspond to laziness or lax articulator muscles. Austraya does have a missing consonant—or rather, it has a liquid consonant, [l], that has been transformed into the semivowel [j]. This is also an example of lenition, but, again, lenition doesn’t necessarily have anything to do with the force of articulation. Yesh (I presume for yes) involves a slight change in the placement of the tip of the tongue—it moves slightly further back towards the palate—but this has nothing to do with the force of articulation.

The vowel changes have even less to do with laziness. As David Crystal notes in his debunking, the raising of [æ] to [ε] in standing actually requires more muscular energy to produce, not less. I assume that lowering the diphthong [eɪ] to [æɪ] in Wales would thus take a little bit less energy, but the raising and rounding of [aɪ] to [ɔɪ] would require a little more. In other words, there is no clear pattern of laziness or laxness. Frenkel simply assumes that there’s a standard for which Australians should be aiming and that anything that misses that standard is evidence of laziness, regardless of the actual effort expended.

Even if it were a matter of laziness, the claim that one-third of the articulator muscles are always sedentary is absolutely preposterous. There’s no evidence that Frenkel has done any kind of research on the subject; this is just a number pulled from thin air based on his uninformed perceptions of Australian phonetics.

And, again, even if his claims about Australian vocal laxness were true, his claims about the origin of this supposed laxness are still pretty tough to swallow. The early settlers passed on a drunken slur to their children? For that to be even remotely possible, every adult in Australia would have had to be drunk literally all the time, including new mothers. If that were true, Australia would be facing a raging epidemic of fetal alcohol syndrome, not sedentary speech muscles.

As far as I know, there is absolutely zero evidence that Australian settlers were ever that drunk, that constant drunkenness can have an effect on children who aren’t drinking, or that the Australian accent has anything in common with inebriated speech.

When pressed, Frenkel attempts to weasel out of his claims, saying, “I am telling you, it is a theory.” But in his original article, he never claimed that it was a theory; he simply asserted it as fact. And strictly speaking, it isn’t even a theory—at best it’s a hypothesis, because he has clearly done no research to substantiate or verify it.

But all this ridiculousness is just a setup for his real argument, which is that Australians need more training in rhetoric. He says,

If we all received communication training, Australia would become a cleverer country. When rhetoric is presented effectively, it enables content to be communicated in a listener-friendly environment, with well-chosen words spoken at a listenable rate and with balanced volume, fluency, clarity and understandability.

Communication training could certainly be a good thing, but again, there’s a problem—this isn’t rhetoric. Rhetoric is the art of discourse and argumentation; what Frenkel is describing is more like diction or elocution. He’s deploying bad logic and terrible linguistics in service of a completely muddled argument, which is that Australians need to learn to communicate better.

In the end, what really burns me about this story isn’t that Frenkel is peddling a load of tripe but that journalists are so eager to gobble it up. Their ignorance of linguistics is disappointing, but their utter credulousness is completely dismaying. And if that weren’t bad enough, in an effort to present a balanced take on the story, journalists are still giving him credence even when literally every linguist who has commented on it has said that it’s complete garbage.

Huffington Post ran the story with the subhead “It’s a highly controversial theory among other academics”. (They also originally called Frenkel a linguist, but this has been corrected.) But calling Frenkel’s hypothesis “a highly controversial theory among other academics” is like saying that alchemy is a highly controversial theory among chemists or that the flat-earth model is a highly controversial theory among geologists. This isn’t a real controversy, at least not in any meaningful way; it’s one uninformed guy spouting off nonsense and a lot of other people calling him on it.

In the end, I think it was Merriam-Webster’s Kory Stamper who had the best response:



Overanxious about Ambiguity

As my last post revealed, a lot of people are concerned—or at least pretend to be concerned—about the use of anxious to mean “eager” or “excited”. They claim that since it has multiple meanings, it’s ambiguous, and thus the disparaged “eager” sense should be avoided. But as I said in my last post, it’s not really ambiguous, and anyone who claims otherwise is simply being uncooperative.

Anxious entered the English language in the early to mid-1600s in the sense of “troubled in mind; fearful; brooding”. But within a century, the sense had expanded to mean “earnestly desirous” or “eager”. That’s right—the allegedly new sense of the word was already in use before the United States declared independence.

These two meanings existed side by side until the early 1900s, when usage commentators first decided to be bothered by the “eager” sense. And make no mistake—this was a deliberate decision to be bothered. Merriam-Webster’s Dictionary of English Usage includes this anecdote from Alfred Ayres in 1901:

Only a few days ago, I heard a learned man, an LL.D., a dictionary-maker, an expert in English, say that he was anxious to finish the moving of his belongings from one room to another.

“No, you are not,” said I.

“Yes, I am. How do you know?”

“I know you are not.”

“Why, what do you mean?”

“There is no anxiety about it. You are simply desirous.”

Ayres’s correction has nothing to do with clarity or ambiguity. He obviously knew perfectly well what the man meant but decided to rub his nose in his supposed error instead. One can almost hear his self-satisfied smirk as he lectured a lexicographer—a learned man! a doctor of laws!—on the use of the language he was supposed to catalog.

A few years later, Ambrose Bierce also condemned this usage, saying that anxious should not be used to mean “eager” and that it should not be followed by an infinitive. As MWDEU notes, anxious is typically used to mean “eager” when it is followed by an infinitive. But it also says that it’s “an oversimplification” to say that anxious is simply being used to mean “eager”. It notes that “the word, in fact, fairly often has the notion of anxiety mingled with that of eagerness.” That is, anxious is not being used as a mere synonym of eager—it’s being used to indicate not just eagerness but a sort of nervous excitement or anticipation.

MWDEU also says that this sense is the predominant one in the Merriam-Webster citation files, but a search in COCA doesn’t quite bear this out—only about a third of the tokens are followed by to and are clearly used in the “eager” sense. Google Books Ngrams, however, shows that to is by far the most common word that immediately follows anxious; that is, people are anxious to do something far more often than they’re anxious about something.

This didn’t stop one commenter from claiming that not only is this use of anxious confusing, but she’d literally never encountered it before. It’s hard to take such a claim seriously when this use is not only common but has been common for centuries.

It’s also hard to take seriously the claim that it’s ambiguous when nobody can manage to find an example that’s actually ambiguous. A few commenters offered made-up examples that seemed designed to be maximally ambiguous when presented devoid of context. They also ignored the fact that the “eager” sense is almost always followed by an infinitive. That is, as John McIntyre pointed out, no English speaker would say “I was anxious upon hearing that my mother was coming to stay with us” or “I start a new job next week and I’m really anxious about that” if they meant that they were eager or excited.

Another commenter seemed to argue that the problem was that language was changing in an undesirable way, saying, “It’s clearly understood that language evolves, but some of us might prefer a different or better direction for that evolution. . . . Is evolution the de facto response for any misusage in language?”

But this comment has everything backwards. Evolution isn’t the response to misuse—claims of misuse are (occasionally) the response to evolution. The word anxious changed in a very natural way, losing some of its negative edge and being used in a more neutral or positive way. The same thing happened to the word care, which originally meant “to sorrow or grieve” or “to be troubled, uneasy, or anxious”, according to the Oxford English Dictionary. Yet nobody complains that everyone is misusing the word today.

That’s because nobody ever decided to be bothered by it as they did with anxious. The claims of ambiguity or undesired language change are all post hoc; the real objection to this use of anxious was simply that someone decided on the basis of etymology—and in spite of established usage—that it was wrong, and that personal peeve went viral and became established in the usage literature.

It’s remarkably easy to convince yourself that something is an error. All you have to do is hear someone say that it is, and almost immediately you’ll start noticing the error everywhere and recoiling in horror every time you encounter it. And once the idea that it’s an error has become lodged in your brain, it’s remarkably difficult to dislodge it. We come up with an endless stream of bogus arguments to rationalize our pet peeves.

So if you choose to be bothered by this use of anxious, that’s certainly your right. But don’t pretend that you’re doing the language a service.


New Shirts, Now on Sale

To make up for not posting for a few months, I’ve added a few new shirts to the Arrant Pedantry Store. Take a look!

I could care fewer

If you see a design you like but want it on a different shirt or other product, you can use the product designer here.

And through September 1, you can get 15 percent off all orders with the coupon code FAVSHIRT.


This Is Not the Grammatical Promised Land

I recently became aware of a column in the Chicago Daily Herald by the paper’s managing editor, Jim Baumann, who has taken upon himself the name Grammar Moses. In his debut column, he’s quick to point out that he’s not like the real Moses—“My tablets are not carved in stone. Grammar is a fluid thing.”

He goes on to say, “Some of the rules we learned in high school have evolved with us. For instance, I don’t know a lot of people outside of church who still employ ‘thine’ in common parlance.” (He was taught in high school to use thine in common parlance?)

But then he ends—after a rather lengthy windup—with the old shibboleth of using anxious to mean eager. He says that “generally speaking, the word you’re grasping for is ‘eager,’” ending with the admonition, “Write carefully!”

But as Merriam-Webster’s Dictionary of English Usage notes, this rule is an invention in American usage dating to the early 1900s, and anxious had been used to mean eager for 160 years before the rule proscribing this use was invented. They conclude, “Anyone who says that careful writers do not use anxious in its ‘eager’ sense has simply not examined the available evidence.”

Not a good start for a column that aims for a grammatical middle ground.

And Baumann certainly seems to think he’s aiming for the middle ground. In a later column, he says, “Grammarians fall along a spectrum. There are the fundamentalists, who hold their 50-year-old texts as close to their bosoms as one might a Bible. There are the libertines, who believe that if it feels or sounds right, use it. . . . You’ll find me somewhere in the middle.” He again insists that he’s not a grammar fundamentalist before launching into more invented rules: the supposed misuse of like to mean “such as” or “including” and feel to mean “think”.

He says, “If you listen to a car dealer’s pitch that a new SUV has features like anti-lock brakes and a deluxe stereo, do you really know what you’re getting? Nope. Because ‘like’ means similar to, but not the same.” The argument here is simple, straightforward, and completely wrong.

First, it assumes an overly narrow definition of like. Second, it pretends complete ignorance of any meaning outside of that narrow definition. If a car salesperson tells you that a new SUV has features like anti-lock brakes and a deluxe stereo, you know exactly what you’re getting. In technical terms, pretending that you don’t understand someone is called engaging in uncooperative communication. In layman’s terms, it’s called being an ass.

And yet, strangely, Baumann promotes this rule on the basis of clarity. He says that if something is clear to 9 out of 10 readers, then it’s acceptable, but if you can write something that’s clear to all your readers, then that’s even better. While it’s certainly a good idea to make sure your writing is clear to everyone, I’m also fairly certain that no one would be legitimately confused by “features like anti-lock brakes”. Merriam-Webster’s Dictionary of English Usage doesn’t have much to say on the subject, but it lists several examples and says, “In none of the examples that follow can you detect any ambiguity of meaning.” The supposed lack of clarity simply isn’t there.

Baumann ends by saying, “The lesson is: Think about whom you’re talking to and learn to appreciate his or her or their sensitivities. Then you will achieve clarity.” The problem is that we don’t really know who our readers are and what their sensitivities are. Instead we simply internalize new rules that we learn, and then we project them onto a sort of perversely idealized reader, one who is not merely bothered by such alleged misuses but is impossibly confused by them. How do we know that they’re really confused—or even just irritated—by like to mean “such as” or “including”? We don’t. We just assume that they’re out there and that it’s our job to protect them.

My advice is to try to be as informed as possible about the rules. Be curious, and be willing to question not just others’ claims about the language but also your own assumptions. Read a lot, and pay attention to how good writing works. Get a good usage dictionary and use it. And don’t follow Grammar Moses unless you like wandering in the grammatical wilderness.


You Are Not Dr. Seuss

A couple of weeks ago, Nancy Friedman tweeted a link to an article about Netflix’s forthcoming adaptation of Green Eggs and Ham. And sadly but predictably, whoever wrote the press release about the announcement felt compelled to write in Seussian verse, despite having no idea how to do so.

Here’s the official press release, and here’s the poem—and I use the term loosely—in all its terrible glory:

Issued from Netflix headquarters.
Delivered straight to all reporters.

We’d love to share some happy news
based on the rhymes of Dr. Seuss.
Green Eggs and Ham will become a show
and you’re among the first to know.

In this richly animated production,
a 13-episode introduction,
standoffish inventor (Guy, by name)
and Sam-I-Am of worldwide fame,
embark on a cross-country trip
that tests the limits of their friendship.
As they learn to try new things,
they find out what adventure brings.
Of course they also get to eat
that famous green and tasty treat!

Cindy Holland, VP of Original Content for Netflix
threw her quote into the mix:
“We think this will be a hit
Green Eggs and Ham is a perfect fit
for our growing slate of amazing stories
available exclusively in all Netflix territories.
You can stream it on a phone.
You can stream it on your own.
You can stream it on TV.
You can stream it globally.”

I have to admit that I initially didn’t make it past the beginning of the third verse, though I knew we were in trouble from the first line. The problem is that while it’s pretty easy to make a rhyme, it’s a lot harder to make lines that scan right. To scan in this sense means to show the metrical structure—the patterns of stressed and unstressed syllables in a line. If terms like iambic pentameter make your eyes glaze over, don’t worry about them for now—let’s just look at how you’d stress some of the lines. I’ll show the stresses with all caps.

ISSued from NETflix headQUARTers

In the first line we have a pattern of stress-unstress-unstress. Each group with a stressed syllable is called a foot, and we have three feet in this line. (The last foot is short one unstressed syllable, but this is fine.) The pattern for the first line is DA-da-da DA-da-da DA-da. But notice that the stress on “headquarters” has had to shift; normally you say HEADquarters, but to keep the rhythm even, you have to say headQUARTers instead. This is not terrible, but it’s not a great start.

But the second line doesn’t match up with the first:

deLIVered STRAIGHT to ALL rePORTers

This one starts unstressed rather than stressed and alternates stressed-unstressed for the rest of the line. Alternatively, you could read the first line with this kind of stress (ISSued FROM netFLIX headQUARTers), but that requires shifting the stress on “Netflix” too and stressing the preposition “from”, which is normally unstressed. And even then, the second line still has that extra unstressed syllable at the beginning. The pattern here is da-DA da-DA da-DA da-DA da.

The next stanza sticks more closely to the unstressed-stressed pattern, but again it requires the reader to put the stresses in unusual places. And then there’s an extra unstressed syllable in the third line (green EGGS and HAM will beCOME a SHOW—why not green EGGS and HAM will BE a SHOW?).

The third stanza is more of a wreck. Just say the first line out loud and try to figure out where the stresses are or what the pattern is:

in this RICHly ANimated proDUCtion

The worst line by far, though, has to be the beginning of the fourth stanza:

CINdy HOLland, V(EE)P(EE) of oRIginal CONtent for NETflix

In this line we have two feet with the DA-da pattern, then two back-to-back stresses, then a couple of unstressed syllables, and then a few feet with the DA-da-da pattern.

Surprisingly, though, the poem ends strong, with a metrical pattern straight out of Green Eggs and Ham itself. Compare:

YOU can STREAM it ON a PHONE

WOULD you EAT them ON a PLANE?

This is obviously where they stopped trying to shoehorn in phrases like “Cindy Holland, VP of Original Content for Netflix” and started following the source material more closely.

The trouble with most people who try to imitate Seuss is that they think poetry is just about the rhyme. (And as a parent of young children, I can tell you that there are an awful lot of children’s book authors who apparently feel compelled to write in verse, despite being terrible at it.) Rhyme is an important part of verse, but rhyme isn’t worth much without the rhythm of well-written lines. Imagine trying to write a song by throwing a bunch of notes together but not paying any attention to rhythm. It would be a disaster, and no one would want to listen to it.

This isn’t to say that you can’t play around with meter, of course, but it should be deliberate, which means that you have to understand it first. Even Dr. Seuss fudged the meter on occasion, but this was the exception and not the rule. Rhythm shouldn’t be something you accidentally stumble upon from time to time.

You don’t necessarily have to know an anapestic tetrameter from an iambic pentameter to write good verse, but you need to have a good sense of rhythm. Try marking the stresses in each line to see if there’s a consistent pattern. And if you find yourself stumbling or awkwardly stressing certain words as you read the lines aloud, then that’s a good sign that something is wrong.
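One way to make the stress-marking exercise concrete: if you write out each line’s stresses as a string of 1s (stressed) and 0s (unstressed), a few lines of Python can check whether the line is built from repetitions of a single foot. This is just a toy sketch, not real scansion; it assumes you’ve already marked the stresses by hand.

```python
def fits_meter(stresses: str, foot: str) -> bool:
    """Return True if the stress string consists of repetitions of the given
    foot, allowing the final foot to be cut short (as in the press release's
    first line, which drops the last unstressed syllable)."""
    pattern = (foot * (len(stresses) // len(foot) + 1))[:len(stresses)]
    return stresses == pattern

# "ISSued from NETflix headQUARTers": DA-da-da DA-da-da DA-da
print(fits_meter("10010010", "100"))   # True: fits a DA-da-da foot
# "deLIVered STRAIGHT to ALL rePORTers": da-DA da-DA da-DA da-DA da
print(fits_meter("010101010", "100"))  # False: a completely different foot
```

Running both lines of the press release through the same foot shows exactly the problem described above: each line may have an internal rhythm, but they don’t share one.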

Dr. Seuss is so revered because he was so good, and it’s not easy to imitate him. So if you still can’t write good verse despite your best efforts, that’s nothing to be ashamed of. But maybe you should stick to writing press releases.


Language, Logic, and Correctness

In “Why Descriptivists Are Usage Liberals”, I said that there are some logical problems with declaring something to be right or wrong based on evidence. A while back I explored this problem in a piece titled “What Makes It Right?” over on Visual Thesaurus.

The terms prescriptive and descriptive were borrowed from philosophy, where they are used to talk about ethics, and the tension between these two approaches is reflected in language debates today. The questions we have today about correct usage are essentially the same questions philosophers have been debating since the days of Socrates and Plato: what is right, and how do we know?

As I said on Visual Thesaurus, all attempts to answer these questions run into a fundamental logical problem: just because something is doesn’t mean it ought to be. Most people are uncomfortable with the idea of moral relativism and believe at some level that there must be some kind of objective truth. Unfortunately, it’s not entirely clear just where we find this truth or how objective it really is, but we at least operate under the convenient assumption that it exists.

But things get even murkier when we try to apply this same assumption to language. While we may feel safe saying that murder is wrong and would still be wrong even if a significant portion of the population committed murder, we can’t safely make similar arguments about language. Consider the word bird. In Old English, the form of English spoken from about 500 AD to about 1100 AD, the word was brid. Bird began as a dialectal variant that spread and eventually supplanted brid as the standard form by about 1600. Have we all been saying this word wrong for the last four hundred years or so? Is saying bird just as wrong as pronouncing nuclear as nucular?

No, of course not. Even if it had been considered an error once upon a time, it’s not an error anymore. Its widespread use in Standard English has made it standard, while brid would now be considered an error (if someone were to actually use it). There is no objectively correct form of the word that exists independent of its use. That is, there is no platonic form of the language, no linguistic Good to which a grammarian-king can look for guidance in guarding the city.

This is why linguistics is at its core an empirical endeavor. Linguists concern themselves with investigating linguistic facts, not with making value judgements about what should be considered correct or incorrect. As I’ve said before, there are no first principles from which we can determine what’s right and wrong. Take, for example, the argument that you should use the nominative form of pronouns after a copula verb. Thus you should say It is I rather than It is me. But this argument assumes as prior the premise that copula verbs work this way and then deduces that anything that doesn’t work this way is wrong. Where would such a putative rule come from, and how do we know it’s valid?

Linguists often try to highlight the problems with such assumptions by pointing out, for example, that French requires an object pronoun after the copula (in French you say c’est moi [it’s me], not c’est je [it’s I]) or that English speakers, including renowned writers, have long used object forms in this position. That is, there is no reason to suppose that this rule has to exist, because there are clear counterexamples. But then, as I said before, some linguists leave the realm of strict logic and argue that if everyone says it’s me, then it must be correct.

Some people then counter by calling this argument fallacious, and strictly speaking, it is. Mededitor has called this the Jane Austen fallacy (if Jane Austen or some other notable past writer has done it, then it must be okay), and one commenter named Kevin S. has made similar arguments in the comments on Kory Stamper’s blog, Harmless Drudgery.

There, Kevin S. attacked Ms. Stamper for noting that using lay in place of lie dates at least to the days of Chaucer, that it is very common, and that it “hasn’t managed to destroy civilization yet.” These are all objective facts, yet Kevin S. must have assumed that Ms. Stamper was arguing that if it’s old and common, it must be correct. In fact, she acknowledged that it is nonstandard and didn’t try to argue that it wasn’t or shouldn’t be. But Kevin S. pointed out a few fallacies in the argument that he assumed that Ms. Stamper was making: an appeal to authority (if Chaucer did it, it must be okay), the “OED fallacy” (if it has been used that way in the past, it must be correct), and the naturalistic fallacy, which is deriving an ought from an is (lay for lie is common; therefore it ought to be acceptable).

And as much as I hate to say it, technically, Kevin S. is right. Even though he was responding to an argument that hadn’t been made, linguists and lexicographers do frequently make such arguments, and they are in fact fallacies. (I’m sure I’ve made such arguments myself.) Technically, any argument that something should be considered correct or incorrect isn’t a logical argument but a persuasive one. Again, this goes back to the basic difference between descriptivism and prescriptivism. We can make statements about the way English appears to work, but making statements about the way English should work or the way we think people should feel about it is another matter.

It’s not really clear what Kevin S.’s point was, though, because he seemed to be most bothered by Ms. Stamper’s supposed support of some sort of flabby linguistic relativism. But his own implied argument collapses in a heap of fallacies itself. Just as we can’t necessarily call something correct just because it occurred in history or because it’s widespread, we can’t necessarily call something incorrect just because someone invented a rule saying so.

I could invent a rule saying that you shouldn’t ever use the word sofa because we already have the perfectly good word couch, but you would probably roll your eyes and say that’s stupid because there’s nothing wrong with the word sofa. Yet we give heed to a whole bunch of similarly arbitrary rules invented two or three hundred years ago. Why? Technically, they’re no more valid or logically sound than my rule.

So if there really is such a thing as correctness in language, and if any argument about what should be considered correct or incorrect is technically a logical fallacy, then how can we arrive at any sort of understanding of, let alone agreement on, what’s correct?

This fundamental inability to argue logically about language is a serious problem, and it’s one that nobody has managed to solve or, in my opinion, ever will completely solve. This is why the war of the scriptivists rages on with no end in sight. We see the logical fallacies in our opponents’ arguments and the flawed assumptions underlying them, but we don’t acknowledge—or sometimes even see—the problems with our own. Even if we did, what could we do about them?

My best attempt at an answer is that both sides simply have to learn from each other. Language is a democracy, true, but, just like the American government, it is not a pure democracy. Some people—including editors, writers, English teachers, and usage commentators—have a disproportionate amount of influence. Their opinions carry more weight because people care what they think.

This may be inherently elitist, but it is not necessarily a bad thing. We naturally trust the opinions of those who know the most about a subject. If your car won’t start, you take it to a mechanic. If your tooth hurts, you go to the dentist. If your writing has problems, you ask an editor.

Granted, using lay for lie is not bad in the same sense that a dead starter motor or an abscessed tooth is bad: it’s a problem only in the sense that some judge it to be wrong. Using lay for lie is perfectly comprehensible, and it doesn’t violate some basic rule of English grammar such as word order. Furthermore, it won’t destroy the language. Just as we have pairs like lay and lie or sit and set, we used to have two words for hang, but nobody claims that we’ve lost a valuable distinction here by having one word for both transitive and intransitive uses.

Prescriptivists want you to know that people will judge you for your words (and—let’s be honest—usually they’re the ones doing the judging), and descriptivists want you to soften those judgements or even negate them by injecting them with a healthy dose of facts. That is, there are two potential fixes for the problem of using words or constructions that will cause people to judge you: stop using that word or construction, or get people to stop judging you and others for that use.

In reality, we all use both approaches, and, more importantly, we need both approaches. Even most dyed-in-the-wool prescriptivists will tell you that the rule banning split infinitives is bogus, and even most liberal descriptivists will acknowledge that if you want to be taken seriously, you need to use Standard English and avoid major errors. Problems occur when you take a completely one-sided approach, insisting either that something is an error even if almost everyone does it or that something isn’t an error even though almost everyone rejects it. In other words, good usage advice has to consider not only the facts of usage but speakers’ opinions about usage.

For instance, you can recognize that irregardless is a word, and you can even argue that there’s nothing technically wrong with it because nobody cares that the verbs bone and debone mean the same thing, but it would be irresponsible not to mention that the word is widely considered an error in educated speech and writing. Remember that words and constructions are not inherently correct or incorrect and that mere use does not necessarily make something correct; correctness is a judgement made by speakers of the language. This means that, paradoxically, something can be in widespread use even among educated speakers and can still be considered an error.

This also means that on some disputed items, there may never be anything approaching consensus. While the facts of usage may be indisputable, opinions may still be divided. Thus it’s not always easy or even possible to label something as simply correct or incorrect. Even if language is a democracy, there is no simple majority rule, no up and down vote to determine whether something is correct. Something may be only marginally acceptable or correct only in certain situations or according to certain people.

But as in a democracy, it is important for people to be informed before metaphorically casting their vote. Bryan Garner argues in his Modern American Usage that what people want in language advice is authority, and he’s certainly willing to give it to you. But I think what people really need is information. For example, you can state authoritatively that regardless of past or present usage, singular they is a grammatical error and always will be, but this is really an argument, not a statement of fact. And like all arguments, it should be supported with evidence. An argument based solely or primarily on one author’s opinion—or even on many people’s opinions—will always be a weaker argument than one that considers both facts and opinion.

This doesn’t mean that you have to accept every usage that’s supported by evidence, nor does it mean that all evidence is created equal. We’re all human, we all still have opinions, and sometimes those opinions are in defiance of facts. For example, between you and I may be common even in educated speech, but I will probably never accept it, let alone like it. But I should not pretend that my opinion is fact, that my arguments are logically foolproof, or that I have any special authority to declare it wrong. I think the linguist Thomas Pyles said it best:

Too many of us . . . would seem to believe in an ideal English language, God-given instead of shaped and molded by man, somewhere off in a sort of linguistic stratosphere—a language which nobody actually speaks or writes but toward whose ineffable standards all should aspire. Some of us, however, have in our worst moments suspected that writers of handbooks of so-called “standard English usage” really know no more about what the English language ought to be than those who use it effectively and sometimes beautifully. In truth, I long ago arrived at such a conclusion: frankly, I do not believe that anyone knows what the language ought to be. What most of the authors of handbooks do know is what they want English to be, which does not interest me in the least except as an indication of the love of some professors for absolute and final authority.[1]

In usage, as in so many other things, you have to learn to live with uncertainty.

  1. [1] “Linguistics and Pedagogy: The Need for Conciliation,” in Selected Essays on English Usage, ed. John Algeo (Gainesville: University Presses of Florida, 1979), 169–70.


No, Online Grammar Errors Have Not Increased by 148%

Yesterday a post appeared on the site that hosts Grammar Girl’s popular podcast, apparently written by a company called Knowingly, which is promoting its Correctica grammar-checking tool. They claim that “online grammar errors have increased by 148% in nine years”. If true, it would be a pretty shocking finding, but the numbers immediately sent up some red flags.

They searched for seventeen different errors and compared the numbers from nine years ago to the numbers from today. From the description, I gather that the first set of numbers comes from a publicly available set of data that Google culled from public web pages. The data was released in 2006 and is hosted by the Linguistic Data Consortium. You can read more about the data here, but this part is the most relevant:

We processed 1,024,908,267,229 words of running text and are publishing the counts for all 1,176,470,663 five-word sequences that appear at least 40 times. There are 13,588,391 unique words, after discarding words that appear less than 200 times.

So the data is taken from over a trillion words of text, but some sequences were discarded if they didn’t appear frequently enough, and you can only search sequences up to five words long. Also note that while the data was released in 2006, it does not necessarily all come from 2006; some of it could have come from web pages that were older than that.

It sounds like the second set of numbers comes from a series of Google searches—it simply says “search result data today”. It isn’t explicitly stated, but it appears that the search terms were put in quotes to find exact strings. But we’re already comparing apples and oranges: though the first set of data came from a known sample size (just over a trillion words) and was cleaned up a bit by having outliers thrown out, we have no idea how big the second sample size is. How many words are you effectively searching when you do a search in Google?

This is why corpora usually present not just raw numbers but normalized numbers—that is, not just an overall count, but a count per thousand words or something similar. Knowing that you have 500 instances of something in data set A and 1000 instances in data set B doesn’t mean anything unless you know how big those sets are, and in this case we don’t.
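As a minimal illustration of why normalization matters (the counts and corpus sizes here are made up), a raw count can go up while the actual rate goes down:

```python
# Hypothetical counts: data set B has more raw hits but is ten times larger.
count_a, size_a = 500, 1_000_000       # 500 hits in a 1-million-word corpus
count_b, size_b = 1_000, 10_000_000    # 1,000 hits in a 10-million-word corpus

rate_a = count_a / size_a * 1000       # hits per thousand words
rate_b = count_b / size_b * 1000

# B has twice the raw count but only a fifth of the rate.
print(rate_a, rate_b)  # 0.5 0.1
```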

This problem is ameliorated somewhat by looking not just at the raw numbers but at the error rates. That is, they searched for both the correct and incorrect forms of each item, calculated how frequent the erroneous form was, and compared the rates from 2006 to the rates from 2015. It would still be better to compare two similar datasets, because we have no idea how different the cleaned-up Google Ngrams data is from raw Google search data, but at least this allows us to make some rough comparisons. But notice the huge differences between the “then” and “now” numbers in the table below. Obviously the 2015 data represents a much larger set. (I’ve split their table into two pieces, one for the correct terms and one for the incorrect terms, to make them fit in my column here.)

Correct Term
jugular vein
bear in mind
head over heels
chocolate mousse
egg yolk
without further ado
whet your appetite
heroin and morphine
reach across the aisle
herd mentality
weather vane
zombie horde
chili peppers
brake pedal
pique your interest
lessen the burden
bridal shower

Incorrect Term
juggler vein
bare in mind
head over heals
chocolate moose
egg yoke
without further adieu
wet your appetite
heroine and morphine
reach across the isle
heard mentality
weather vein
zombie hoard
chilly peppers
brake petal
peek your interest
lesson the burden
bridle shower

But then the Correctica team commits a really major statistical goof—they average all of those per-item percentages together to calculate an overall percentage.

They simply add up all the percentages (1.2% + 1.9% + 6.6% + . . .) and divide by the number of percentages, 17. But this number is meaningless. Imagine that we were comparing two items: isn’t is used 9,900 times and ain’t 100 times, and regardless is used 999 times and irregardless 1 time. This means that when there’s a choice between isn’t and ain’t, ain’t is used 1% of the time (100/(9900+100)), and when there’s a choice between regardless and irregardless, irregardless is used .1% of the time (1/(999+1)). If you average 1% and .1%, you get .55%, but this isn’t the overall error rate.

But to get an overall error rate, you need to calculate the percentage from the totals. We have to take the total number of errors and divide it by the total number of opportunities to use either the correct or the incorrect form. This gives us (100 + 1)/((9900 + 100) + (999 + 1)), or 101/11000, which works out to .92%, not .55%.
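Here’s the same worked example in a few lines of Python, showing how far the average of the two rates drifts from the pooled rate:

```python
# Hypothetical counts from the isn't/ain't example above.
isnt, aint = 9_900, 100
regardless, irregardless = 999, 1

rate_aint = aint / (isnt + aint)                                # 0.01  (1%)
rate_irregardless = irregardless / (regardless + irregardless)  # 0.001 (0.1%)

# Averaging the two rates treats a 1,000-use item and an 11,000-use item
# as if they carried equal weight.
averaged = (rate_aint + rate_irregardless) / 2   # 0.0055 (0.55%)

# Pooling counts every error against every opportunity.
pooled = (aint + irregardless) / (isnt + aint + regardless + irregardless)
# pooled = 101/11000, about 0.0092 (0.92%)
```

The averaged figure is not wrong arithmetic, but it answers a different question (the average of per-item error rates) than the one the headline claims to answer (the overall error rate).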

When we count up the totals and calculate the overall rates, we get an error rate of 1.88% for then (not 3.4%) and 2.38% for now (not 8.4%). That means the increase from 2006 to 2015 is not 148.2%, but a much more modest 26.64%. (By the way, I’m not sure where they got 148.2%; by my calculations, it should be 147.1%, but I could have made a mistake somewhere.) This is still a rather impressive increase in errors from 2006 to today, but the problems with the data set make it impossible to say for sure if this number is accurate or meaningful. “Heroine and morphine” occurred 45 times out of over a trillion words. Even if the error rate jumped 141.73% from 2006 to 2015, and even if the two sample sets were comparable, this would still probably amount to nothing more than statistical noise.
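You can check both versions of the increase from the percentages reported above; any small discrepancies from the figures in the text presumably come from rounding in the published numbers:

```python
# Their method: compare the averaged per-item rates (3.4% then, 8.4% now).
avg_then, avg_now = 3.4, 8.4
increase_avg = (avg_now - avg_then) / avg_then * 100
print(round(increase_avg, 1))     # 147.1, not the claimed 148.2

# Pooled method: compare overall rates computed from the totals.
pooled_then, pooled_now = 1.88, 2.38
increase_pooled = (pooled_now - pooled_then) / pooled_then * 100
print(round(increase_pooled, 1))  # 26.6
```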

And even if these numbers were accurate and meaningful, there’s still the question of research design. They claim that grammar errors have increased, but all of the items are spelling errors, and most of them are rather obscure ones at that. At best, this study only tells us that these errors have increased that much, not that grammar errors in general have increased that much. If you’re setting out to study grammar errors (using grammar in the broad sense), why would you assume that these items are representative of the phenomenon in general?

So in sum, the study is completely bogus, and it’s obviously nothing more than an attempt to sell yet another grammar-checking service. Is it important to check your writing for errors? Sure. Can Correctica help you do that? I have no idea. But I do know that this study doesn’t show an epidemic of grammar errors as it claims to.

(Here’s the data if anyone’s interested.)
