Arrant Pedantry


Now on Visual Thesaurus: “Electrocution: A Shocking Misuse?”

I have a new post up on Visual Thesaurus about the use, misuse, and history of the word electrocute. Some usage commentators today insist that it be used only to refer to death by electric shock; that is, you can’t say you’ve been electrocuted if you lived to tell the tale. But the history, unsurprisingly, is more complicated: there have been disputes about the word since its birth.

As always, the article is for subscribers only, but a subscription costs a paltry $2.95 a month or $19.95 a year (and would make a great gift for the word lover in your life). Check it out.


Yes, Irregardless Is a Word

My last post, “12 Mistakes Nearly Everyone Who Writes about Grammar Mistakes Makes”, drew a lot of comments, some supportive and some critical. But no point drew as much ire as my claim that irregardless is a word. Some stated flatly, “Irregardless is not a word.” One ignorantly demanded, “Show me a dictionary that actually contains that word.” (I could show him several.) Still others argued that it was a double negative, that it was logically and morphologically ill-formed and thus had no meaning. One commenter said that “with the negating preface [prefix] ‘ir-’ and the negating suffix ‘-less’, it is a double negative” and that “it is not a synonym with ‘regardless’.” Another was even cleverer, saying, “The prefix ir-, meaning not, changes the meaning of the word regardless, so not only is it not a standard word, but it’s also misused in nearly all cases.” But these arguments still miss the point: irregardless is indeed a word, and it means the same thing as regardless.

In my last post I argued that there’s a clear difference between a word like irregardless and a nonword like flirgle. By any objective criterion, irregardless is a word. It has an established form and meaning, it’s used in speech and occasionally in writing, and it’s even found in reputable dictionaries, including Merriam-Webster’s Collegiate Dictionary and The Oxford English Dictionary (though it is, quite appropriately, labeled nonstandard). We can identify its part of speech (it’s an adverb) and describe how it’s used. By contrast, though, consider flirgle. You don’t know what its part of speech is or how to use it, and if I were to use it in a sentence, you wouldn’t know what it meant. This is because it’s just something I made up by stringing some sounds together. But when someone uses irregardless, you know exactly what it means, even if you want to pretend otherwise.

This is because words get their wordhood not from etymology or logic or some cultural institution granting them official status, but by convention. It doesn’t matter that nice originally meant “ignorant” or that contact was originally only a noun or that television is formed from a blend of Greek and Latin roots; what matters is how people use these words now. This makes some people uncomfortable because it sounds like anarchy, but it’s more like the ultimate democracy or free market. We all want to understand one another and be understood, so it’s in our mutual interest to communicate in ways that are understandable. Language is a self-regulating system guided by the invisible hand of its users’ desire to communicate—not that this stops people from feeling the need for overt regulation.

One commenter, the same who said, “Irregardless is not a word,” noted rather aptly, “There is absolutely no value to ‘irregardless’ except to recognize people who didn’t study.” Exactly. There is nothing wrong with its ability to communicate; it’s only the word’s metacommunication—that is, what it communicates about its user—that is problematic. To put it a different way, the problem with irregardless is entirely social: if you use it, you’ll be thought of as uneducated, even though everyone can understand you just fine.

On Google Plus, my friend Rivka said, “Accepting it as a word is the first part of the slippery slope.” This seems like a valid fear, but I believe it is misplaced. First of all, we need to be clear about what it means to accept irregardless as a word. I accept that it’s a word, but this does not mean that I find the word acceptable. I can accept that people do all kinds of things that I don’t like. But the real problem isn’t what we mean by accept; it’s what we mean by word. When people say that something isn’t a word, they aren’t really making a testable claim about the objective linguistic status of the word; they’re making a sociolinguistic evaluation of the word. They may say that it’s not a word, but they really mean that it’s a word that’s not allowed in Standard English. This is because we think of Standard English as the only legitimate form of English. We think that the standard has words and grammar, while nonstandard dialects have nonwords and broken grammar, or no grammar at all. Yes, it’s important to recognize and teach the difference between Standard English and nonstandard forms, but it’s also important to be clear about the difference between facts about the language and our feelings about the language.

But the irregardless-haters can also take heart: the word has been around for at least a century now, and although many other new words have been coined and become part of Standard English in that time, irregardless shows no signs of moving towards acceptability. Most people who write for publication are well aware of the stigma attached to it, and even if they aren’t, few copyeditors are willing to let it into print. It’s telling that of the Oxford English Dictionary’s eight citations of the word, two merely cite the word in other dictionaries, three more are mentions or citations in linguistics or literary journals, and one more appears to be using the word ironically. We talk about the word irregardless—mostly just to complain about it—far more than we actually use it.

So yes, irregardless is a word, even though it’s nonstandard. You don’t have to like it, and you certainly don’t have to use it, but you also don’t have to worry about it becoming acceptable anytime soon.

This post also appears on Huffington Post.


15 Percent Off Shirts

Today through November 24th, you can get 15 percent off all orders at the Arrant Pedantry Store when you use the coupon code WITHLOVE at checkout. It’s a good chance to get the word nerd in your life (or yourself) a little something for Christmas.


12 Mistakes Nearly Everyone Who Writes About Grammar Mistakes Makes

There are a lot of bad grammar posts in the world. These days, anyone with a blog and a bunch of pet peeves can crank out a click-bait listicle of supposed grammar errors. There’s just one problem—these articles are often full of mistakes of one sort or another themselves. Once you’ve read a few, you start noticing some patterns. Inspired by a recent post titled “Grammar Police: Twelve Mistakes Nearly Everyone Makes”, I decided to make a list of my own.

1. Confusing grammar with spelling, punctuation, and usage. Many people who write about grammar seem to think that grammar means “any sort of rule of language, especially writing”. But strictly speaking, grammar refers to the structural rules of language, namely morphology (basically the way words are formed from roots and affixes), phonology (the system of sounds in a language), and syntax (the way phrases and clauses are formed from words). Most complaints about grammar are really about punctuation, spelling (such as problems with you’re/your and other homophone confusion), or usage (which is often about semantics). This post, for instance, spends two of its twelve points on commas and a third on quotation marks.

2. Treating style choices as rules. This article says that you should always use an Oxford (or serial) comma (the comma before and or or in a list) and that quotation marks should always follow commas and periods, but the latter is true only in most American styles (linguists often put the commas and periods outside quotes, and so do many non-American styles), and the former is only true of some American styles. I may prefer serial commas, but I’m not going to insist that everyone who doesn’t use them is making a mistake. It’s simply a matter of style, and style varies from one publisher to the next.

3. Ignoring register. There’s a time and a place for following the rules, but the writers of these lists typically treat English as though it had only one register: formal writing. They ignore the fact that following the rules in the wrong setting often sounds stuffy and stilted. Formal written English is not the only legitimate form of the language, and the rules of formal written English don’t apply in all situations. Sure, it’s useful to know when to use who and whom, but it’s probably more useful to know that saying To whom did you give the book? in casual conversation will make you sound like a pompous twit.

4. Saying that a disliked word isn’t a word. You may hate irregardless (I do), but that doesn’t mean it’s not a word. If it has its own meaning and you can use it in a sentence, guess what—it’s a word. Flirgle, on the other hand, is not a word—it’s just a bunch of sounds that I strung together in word-like fashion. Irregardless and its ilk may not be appropriate for use in formal registers, and you certainly don’t have to like them, but as Stan Carey says, “‘Not a word’ is not an argument.”

5. Turning proposals into ironclad laws. This one happens more often than you think. A great many rules of grammar and usage started life as proposals that became codified as inviolable laws over the years. The popular that/which rule, which I’ve discussed at length before, began as a proposal—not “everyone gets this wrong” but “wouldn’t it be nice if we made a distinction here?” But nowadays people have forgotten that a century or so ago, this rule simply didn’t exist, and they say things like “This is one of the most common mistakes out there, and understandably so.” (Actually, no, you don’t understand why everyone gets this “wrong”, because you don’t realize that this rule is a relatively recent invention by usage commentators that some copy editors and others have decided to enforce.) It’s easy to criticize people for not following rules that you’ve made up.

6. Failing to discuss exceptions to rules. Invented usage rules often ignore the complexities of actual usage. Lists of rules such as these go a step further and often ignore the complexities of those rules. For example, even if you follow the that/which rule, you need to know that you can’t use that after a preposition or after the demonstrative pronoun that—you have to use a restrictive which. Likewise, the less/fewer rule is usually reduced to statements like “use fewer for things you can count”, which leads to ugly and unidiomatic constructions like “one fewer thing to worry about”. Affect and effect aren’t as simple as some people make them out to be, either; affect is usually a verb and effect a noun, but affect can also be a noun (with stress on the first syllable) referring to the outward manifestation of emotions, while effect can be a verb meaning to cause or to make happen. Sometimes dumbing down rules just makes them dumb.

7. Overestimating the frequency of errors. The writer of this list says that misuse of nauseous is “Undoubtedly the most common mistake I encounter.” This claim seems worth doubting to me; I can’t remember the last time I heard someone say “nauseous”. Even if you consider it a misuse, it’s got to rate pretty far down the list in terms of frequency. This is why linguists like to rely on data for testable claims—because people tend to fall prey to all kinds of cognitive biases such as the frequency illusion.

8. Believing that etymology is destiny. Words change meaning all the time—it’s just a natural and inevitable part of language. But some people get fixated on the original meanings of some words and believe that those are the only correct meanings. For example, they’ll say that you can only use decimate to mean “to destroy one in ten”. This may seem like a reasonable argument, but it quickly becomes untenable when you realize that almost every single word in the language has changed meaning at some point, and that’s just in the few thousand years in which language has been written or can be reconstructed. And sometimes a new meaning is more useful anyway (which is precisely why it displaced an old meaning). As Jan Freeman said, “We don’t especially need a term that means ‘kill one in 10.’”

9. Simply bungling the rules. If you’re going to chastise people for not following the rules, you should know those rules yourself and be able to explain them clearly. You may dislike singular they, for instance, but you should know that it’s not a case of subject-predicate disagreement, as the author of this list claims—it’s an issue of pronoun-antecedent agreement, which is not the same thing. This list says that “‘less’ is reserved for hypothetical quantities”, but this isn’t true either; it’s reserved for noncount nouns, singular count nouns, and plural count nouns that aren’t generally thought of as discrete entities. Use of less has nothing to do with being hypothetical. And this one says that punctuation always goes inside quotation marks. In most American styles, it’s only commas and periods that always go inside. Colons, semicolons, and dashes always go outside, and question marks and exclamation marks only go inside sometimes.

10. Saying that good grammar leads to good communication. Contrary to popular belief, bad grammar (even using the broad definition that includes usage, spelling, and punctuation) is not usually an impediment to communication. A sentence like Ain’t nobody got time for that is quite intelligible, even though it violates several rules of Standard English. The grammar and usage of nonstandard varieties of English are often radically different from Standard English, but different does not mean worse or less able to communicate. The biggest differences between Standard English and all its nonstandard varieties are that the former has been codified and that it is used in all registers, from casual conversation to formal writing. Many of the rules that these lists propagate are really more about signaling to the grammatical elite that you’re one of them—not that this is a bad thing, of course, but let’s not mistake it for something it’s not. In fact, claims about improving communication are often just a cover for the real purpose of these lists, which is . . .

11. Using grammar to put people down. This post sympathizes with someone who worries about being crucified by the grammar police and then says a few paragraphs later, “All hail the grammar police!” In other words, we like being able to crucify those who make mistakes. Then there are the put-downs about people’s education (“You’d think everyone learned this rule in fourth grade”) and more outright insults (“5 Grammar Mistakes that Make You Sound Like a Chimp”). After all, what’s the point in signaling that you’re one of the grammatical elite if you can’t take a few potshots at the ignorant masses?

12. Forgetting that correct usage ultimately comes from users. The disdain for the usage of common people is symptomatic of a larger problem: forgetting that correct usage ultimately comes from the people, not from editors, English teachers, or usage commentators. You’re certainly entitled to have your opinion about usage, but at some point you have to recognize that trying to fight the masses on a particular point of usage (especially if it’s a made-up rule) is like trying to fight the rising tide. Those who have invested in learning the rules naturally feel defensive of them and of the language in general, but you have no more right to the language than anyone else. You can be restrictive if you want and say that Standard English is based on the formal usage of educated writers, but any standard that is based on a set of rules that are simply invented and passed down is ultimately untenable.

And a bonus mistake:

13. Making mistakes themselves. It happens to the best of us. The act of making grammar or spelling mistakes in the course of pointing out someone else’s mistakes even has a name, Muphry’s law. This post probably has its fair share of typos. (If you spot one, feel free to point it out—politely!—in the comments.)

This post also appears on Huffington Post.


Book Review: Shady Characters

I recently received a review copy of Keith Houston’s new book, Shady Characters: The Secret Life of Punctuation, Symbols, and Other Typographical Marks, based on his excellent blog of the same name. The first delightful surprise I found inside is that, in a tribute to medieval manuscripts and early printed books, the book is rubricated—the drop caps, special characters, figure numbers, and dingbats are printed in red. It’s a fitting design choice for a book that takes its readers through the history of the written word.

Each chapter covers a different punctuation mark or typographical symbol, starting with the pilcrow (also known as the paragraph mark, ¶). The first chapter ranges through the beginnings of Greek and Roman writing, the spread of Christianity in Europe, monastic manuscript copying, and the rise of modern typography. Partway through the chapter, I started to wonder where on earth it was all going, but as all the pieces came together, I realized what a treat I was in for. Houston has a knack for turning otherwise dry historical facts into a compelling narrative, picking out the thread of each character’s story while following it down all kinds of scenic side roads and intriguing back alleys.

The rest of the book follows much the same pattern, with trips through the birth of textual criticism in the Great Library of Alexandria, the lasting influence of Roman weights and measures, the invention of the printing press and the birth of typography, the invention of the novel, the standardization of keyboards and telephone keypads, and the beginnings of the internet. And in each chapter, Houston pulls together seemingly unrelated threads of history into a fascinating story of the origin of a familiar typographical or punctuation mark. As an editor and typesetter, I particularly appreciated his lucid treatment of the functions and appearances of the various kinds of hyphens and dashes, including the hated all-purpose hyphen-minus.

Through it all, Houston manages to muster an impressive amount of research (the endnotes take up nearly seventy pages) while keeping the text interesting and accessible. The only part where I got bogged down at all was the chapter on sarcasm and irony, which, unlike the other chapters, focuses on a set of marks that didn’t succeed. It covers various proposals over the years to create a mark or text style to indicate irony or sarcasm. But since it’s an account of failed punctuation marks, there’s an unavoidable sameness to each story—someone proposes a new punctuation mark, it fails to get off the ground, and it’s relegated to the dustbin of history. This isn’t to say that the stories aren’t interesting, just that I found them less compelling than the stories of the punctuation marks that survived.

One other problem is that some of the images are hard to read. I sometimes found it hard to pick out the character I was supposed to see in a faded and tattered Greek papyrus. Increasing the contrast or highlighting the character in question would have been helpful.

Those quibbles aside, it’s a delightful book, full of little gems like this: “In Gutenberg’s day the first rule of Hyphenation Club was that there are no rules.” (Gutenberg’s famous forty-two-line Bible features stacks of up to eight end-of-line hyphens, which would make modern typesetters and proofreaders hyperventilate.) Throughout the book, Houston successfully weaves together history, technology, and design in telling the stories of characters that we’ve seen countless times without giving a second thought to. I highly recommend it to all lovers of typography and the written word.

Disclosure: I received a review copy of Shady Characters from W. W. Norton.


Free Shipping Again

Once again I apologize for not posting anything new lately. I had a crazy summer of freelancing, job hunting, moving, and starting a new job, so I just haven’t had time to write recently. I hope to have something soon. But in the meanwhile, you can enjoy free shipping from the Arrant Pedantry Store when you buy two or more items and use the coupon code FALL2013. The code is good until September 17th.

If you haven’t checked out my store in a while, please take a look. You may have missed some of the newer designs like IPA for the Win and Stet Wars: The Editor Strikes Back. And of course, there are always perennial classics like Word Nerd and Battlestar Grammatica.


Solstices, Vegetables, and Official Definitions

Summer officially began just a few days ago—at least that’s what the calendar says. June 20 was the summer solstice, the day when the northern hemisphere is most inclined towards the sun and consequently receives the most daylight. By this definition, summer lasts until the autumnal equinox, in late September, when days and nights are of equal length. But by other definitions, summer starts at the beginning of June and goes through August. Other less formal definitions may put the start of summer on Memorial Day or after the end of the school year (which for my children were the same this year).

For years I wondered why summer officially began so late into June. After all, shouldn’t the solstice, as the day when we receive the most sunlight, be the middle of summer rather than the start? But even though it receives the most sunlight, it’s not the hottest, thanks to something called seasonal lag. The oceans absorb a large amount of heat and continue to release that heat for quite some time after the solstice, so the hottest day may come a month or more after the day that receives the most solar energy. Summer officially starts later than it should to compensate for this lag.

But what does this have to do with language? It’s all about definitions, and definitions are arbitrary things. Laypeople may think of June 1 as the start of summer, but June 1 is a day of absolutely no meteorological or astronomical significance. So someone decided that the solstice would be the official start of summer, even though the period from June 20/21 to September 22/23 doesn’t completely encompass the hottest days of the year (at least not in most of the United States).

Sometimes the clash between common and scientific definitions engenders endless debate. Take the well-known argument about whether tomatoes are fruit. By the common culinary definition, tomatoes are vegetables, because they are used mostly in savory or salty dishes. Botanically, though, they’re fruit, because they’re formed from a plant’s ovaries and contain seeds. But tomatoes aren’t the only culinary vegetables that are botanical fruits: cucumbers, squashes, peas, beans, avocados, eggplants, and many other things commonly thought of as vegetables are actually fruits.

The question of whether a tomato is a fruit or a vegetable may have entered popular mythology following a Supreme Court case in 1893 that answered the question of whether imported tomatoes should be taxed as vegetables. The Supreme Court ruled that the law was written with the common definition in mind, so tomatoes got taxed, and people are still arguing about it over a century later.

Sometimes these definitional clashes even lead to strong emotions. Consider how many people got upset when the International Astronomical Union decided that Pluto wasn’t really a planet. People who probably hadn’t thought about planetary astronomy since elementary school passionately proclaimed that Pluto was always their favorite planet. Even some astronomers declared, “Pluto’s dead.” But nothing actually happened to Pluto, just to our definition of planet. Astronomers have discovered several other Pluto-like objects and suspect that there may be a hundred or more such objects in the outer reaches of the solar system.

Does it really make sense to call all of these objects planets? Should we expect students to learn the names of Eris, Sedna, Quaoar, Orcus, and whatever other bodies are discovered and named? Or is it perhaps more reasonable to use some agreed-upon criteria and draw a clear line between planets and other objects? After all, that’s part of what scientists do: try to increase our understanding of the natural world by describing features of and discovering relationships among different things. Sometimes the definitions are arbitrary, but they’re arbitrary in ways that are useful to scientists.

And this is the crux of the matter: sometimes definitions that are useful to scientists aren’t that useful to laypeople, just as common definitions aren’t always useful to scientists. These definitions are used by different people for different purposes, and so they continue to exist side by side. Scientific definitions have their place, but they’re not automatically or inherently more correct than common definitions. And there’s nothing wrong with this. After all, tomatoes may be fruit, but I don’t want them in my fruit salad.


New Posts Elsewhere

I have a couple of new posts up elsewhere: a brief one at Copyediting discussing those dialect maps that are making the rounds and asking whether Americans really talk that differently from each other, and a longer one at Visual Thesaurus (subscription required) discussing the role of copy editors in driving restrictive relative which out of use. Stay tuned, and I’ll try to have something new up here in the next few days.


My Thesis

I’ve been putting this post off for a while for a couple of reasons: first, I was a little burned out and was enjoying not thinking about my thesis for a while, and second, I wasn’t sure how to tackle this post. My thesis is about eighty pages long all told, and I wasn’t sure how to reduce it to a manageable length. But enough procrastinating.

The basic idea of my thesis was to see which usage changes editors are enforcing in print and thus infer what kind of role they’re playing in standardizing (specifically codifying) usage in Standard Written English. Standard English is apparently pretty difficult to define precisely, but most discussions of it say that it’s the language of educated speakers and writers, that it’s more formal, and that it achieves greater uniformity by limiting or regulating the variation found in regional dialects. Very few writers, however, consider the role that copy editors play in defining and enforcing Standard English, and what I could find was mostly speculative or anecdotal. That’s the gap my research aimed to fill, and my hunch was that editors were not merely policing errors but were actively introducing changes to Standard English that set it apart from other forms of the language.

Some of you may remember that I solicited help with my research a couple of years ago. I had collected about two dozen manuscripts edited by student interns and then reviewed by professionals, and I wanted to increase and improve my sample size. Between the intern and volunteer edits, I had about 220,000 words of copy-edited text. Tabulating the grammar and usage changes took a very long time, and the results weren’t as impressive as I’d hoped they’d be. There were still some clear patterns, though, and I believe they confirmed my basic idea.

The most popular usage changes were standardizing the genitive form of names ending in -s (Jones’>Jones’s), which>that, towards>toward, moving only, and increasing parallelism. These changes were not only numerically the most popular, but they were edited at fairly high rates—up to 80 percent. That is, if towards appeared ten times, it was changed to toward eight times. The interesting thing about most of these is that they’re relatively recent inventions of usage writers. I’ve already written about which hunting on this blog, and I recently wrote about towards for Visual Thesaurus.
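The edit-rate arithmetic above can be sketched in a few lines of Python. Note that only the towards/toward tally reflects the ten-appearances, eight-changes example; the other figures here are invented for illustration:

```python
# Tallies in the shape of the thesis data: for each usage change, how often
# the proscribed form appeared in the manuscripts ("opportunities") and how
# often an editor changed it. Only towards -> toward matches the example in
# the text; the other numbers are made up.
tallies = {
    "towards -> toward": {"opportunities": 10, "changed": 8},
    "which -> that (restrictive)": {"opportunities": 25, "changed": 18},
    "Jones' -> Jones's": {"opportunities": 5, "changed": 4},
}

for rule, t in tallies.items():
    rate = t["changed"] / t["opportunities"]
    print(f"{rule}: edited at a rate of {rate:.0%}")
```

So an 80 percent edit rate simply means eight changes out of ten opportunities, computed per rule across the whole 220,000-word sample.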

In both cases, the rule was invented not to halt language change, but to reduce variation. For example, in unedited writing, English speakers use towards and toward with roughly equal frequency; in edited writing, toward outnumbers towards 10 to 1. With editors enforcing the rule in writing, the rule quickly becomes circular—you should use toward because it’s the norm in Standard (American) English. Garner used a similarly circular defense of the that/which rule in this New York Times Room for Debate piece with Robert Lane Greene:

But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

He’s certainly correct in saying that since 1930 or so, editors have been changing restrictive which to that. But this isn’t evidence that there’s a good reason for the recommendation; it’s only evidence that editors believe there’s a good reason.

What is interesting is that usage writers frequently invoke Standard English in defense of the rules, saying that you should change towards to toward or which to that because the proscribed forms aren’t acceptable in Standard English. But if Standard English is the formal, nonregional language of educated speakers and writers, then how can we say that towards or restrictive which are nonstandard? What I realized is this: part of the problem with defining Standard English is that we’re talking about two similar but distinct things—the usage of educated speakers, and the edited usage of those speakers. But because of the very nature of copy editing, we conflate the two. Editing is supposed to be invisible, so we don’t know whether what we’re seeing is the author’s or the editor’s.

Arguments about proper usage become confused because the two sides are talking past each other using the same term. Usage writers, editors, and others see linguists as the enemies of Standard (Edited) English because they see them tearing down the rules that define it, setting it apart from educated but unedited usage, like that/which and toward/towards. Linguists, on the other hand, see these invented rules as being unnecessarily imposed on people who already use Standard English, and they question the motives of those who create and enforce the rules. In essence, Standard English arises from the usage of educated speakers and writers, while Standard Edited English adds many more regulative rules from the prescriptive tradition.

My findings have some serious implications for the use of corpora to study usage. Corpus linguistics has done much to clarify questions of what’s standard, but the results can still be misleading. With corpora, we can separate many usage myths and superstitions from actual edited usage, but we can’t separate edited usage from simple educated usage. We look at corpora of edited writing and think that we’re researching Standard English, but we’re unwittingly researching Standard Edited English.
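As a toy illustration of the kind of tallying corpus studies rely on, here is a minimal sketch. The two one-sentence "corpora" are invented stand-ins; a real study would count across millions of words, but the logic is the same:

```python
import re

# Invented stand-ins for unedited and edited writing; real corpora are far
# larger, but the counting logic is identical.
unedited = "She walked towards the door, then turned toward the window."
edited = "She walked toward the door, then turned toward the window."

def count(word, text):
    # \b word boundaries keep "toward" from also matching inside "towards"
    return len(re.findall(rf"\b{word}\b", text))

for label, text in [("unedited", unedited), ("edited", edited)]:
    print(f"{label}: toward={count('toward', text)}, towards={count('towards', text)}")
```

The catch described above is that the "edited" column tells you what copy editors enforce, not necessarily what educated writers produce on their own.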

None of this is to say that all editing is pointless, or that all usage rules are unnecessary inventions, or that there’s no such thing as error because educated speakers don’t make mistakes. But I think it’s important to differentiate between true mistakes and forms that have simply been proscribed by grammarians and editors. I don’t believe that towards and restrictive which can rightly be called errors, and I think it’s even a stretch to call them stylistically bad. I’m open to the possibility that it’s okay or even desirable to engineer some language changes, but I’m unconvinced that either of the rules proscribing these is necessary, especially when the arguments for them are so circular. At the very least, rules like this serve to signal to readers that they are reading Standard Edited English. They are a mark of attention to detail, even if the details in question are irrelevant. The fact that someone paid attention to them is perhaps what is most important.

And now, if you haven’t had enough, you can go ahead and read the whole thesis here.


Now Launching . . .

My wife and I are launching a freelance editing and design service, Perfect Page Editing & Design, and are looking for clients. Please take a look!