Arrant Pedantry


ACES Presentation: Copyediting and Corpus Linguistics

On Saturday, I presented at the twentieth annual conference of the American Copy Editors Society, held in Portland, Oregon. My topic was “Copyediting and Corpus Linguistics”, and my aim was to give editors a crash course in using corpora to research usage questions.

I was floored by the turnout—there were probably close to two hundred people in attendance. And I was surprised at how quickly the hour went by. I wish I’d skipped most of the background stuff and spent more time demonstrating how to do different kinds of queries and answering questions, but live and learn, I guess. I’ll have to remember that if I teach the topic again at future conferences.

I’m posting a slightly expanded version of my presentation below. And if you attended and you still have questions, or if you weren’t able to make it, feel free to leave a comment here. I’ll do my best to answer.

And, of course, a big thanks to ACES for hosting and to all those who attended. ACES folks are the best.

“Copyediting and Corpus Linguistics” (PDF)


The Taxing Etymology of Ask

A couple of months back, I learned that task arose as a variant of tax, with the /s/ and /k/ metathesized. This change apparently happened in French before the word was borrowed into English. That is, French had the word taxa, which came from Latin, and then the variant form tasca arose and evolved into a separate word with an independent meaning.

I thought this was an interesting little bit of historical linguistics, and as a side note, I mentioned on Twitter that a similar phonological change gave us the word ask, which was originally ax (or acs or ahs—spelling was not standardized back then). Beowulf and Chaucer both use ax, and we didn’t settle on ask as the standard form until the time of Shakespeare.

But when I said that “it was ‘ax’ before it was ‘ask’”, that didn’t necessarily mean that ax was the original form—history is a little more complicated than that.

The Oxford English Dictionary says that ask originally meant “to call for, call upon (a person or thing personified) to come” and that it comes from the Old English áscian, which comes from the Proto-Germanic *aiskôjan. But most of the earliest recorded instances, like this one from Beowulf, are of the ax form:

syþðan hé for wlenco wéan áhsode

(after he sought misery from pride)

(A note on Old English orthography: spelling was not exactly standardized, but it was still fairly predictable and mostly phonetic, even though it didn’t follow the same conventions we follow today. In Old English, the letter h represented either the sound /h/ at the beginning of words or the sound /x/ [like the final consonant in the Scottish loch] in the middle of or at the end of words. And when followed by s, as in áhsode, it made the k sound, so hs was pronounced like modern-day x, or /ks/. But the /ks/ cluster could also be represented by cs or x. For simplicity’s sake, I’m going to use ask and ax rather than asc or ahs or whatever other variant spellings have been used over the years.)

We know that ask must have been the original form because that’s what we find in cognate languages like Old Saxon, Old Frisian, and Old High German. This means that at some point after Old English became differentiated from those other languages (around 500 AD), the /s/ and /k/ metathesized and produced ax.

Almost all of the OED’s citations from Old English (which lasted to about 1100 AD) use the ax form, as in this translation of Mark 12:34 from the West Saxon Gospels: “Hine ne dorste nan mann ahsian” (no man durst ask him). (As a bonus, this sentence also has a great double negative: it literally says “no man durst not ask him”.) Only a few of the citations from the Old English period are of the ask variety. I’ll discuss this variation between ask and ax later on.

The ax forms continued through Middle English (about 1100 to 1475 AD) and into Early Modern English. Chaucer’s Canterbury Tales (about 1386 AD) has ax: “I axe, why the fyfte man Was nought housbond to the Samaritan?” In Middle English, ask starts to become a little more common in written work, and we also occasionally see ash, though this form peters out by about 1500. (Again, I’ll discuss this variant more below.)

William Tyndale’s Bible, which was the first Early Modern English translation of the Bible, has ax: Matthew 7:7 reads, “Axe and it shalbe geven you.” The Coverdale Bible, published in 1535 and based on Tyndale’s work, also has ax, but the King James Bible, published in 1611, has the now-standard ask. So do Shakespeare’s plays (dating from the late 1500s to the early 1600s). After about 1600, ax forms become scarce, though one citation from 1803 records axe as a dialectal form used in London. And it’s in nonstandard dialects where ax survives today, especially in Southern US English and African American English. (I assume it also survives in other places besides the US, but I don’t know enough about its use or distribution in other countries.)

In a nutshell, ax arose as a metathesized form of ask at some point in the Old English period, and it was the dominant form in written Old English and an acceptable variant down to the 1500s, when it started to be supplanted by the resurgent ask. And at some point, ash also appeared, though it quietly disappeared a few centuries later. So why did ask disappear for so long? And why did it come back?

The simple answer to the first question is that the word metathesized in the dominant dialect of Old English, which was West Saxon. (Modern Standard English descends not from West Saxon but from the dialect around London.) These sorts of changes just happen sometimes. In West Saxon, /sk/ often became /ks/ in the middle or at the end of a word. Sound changes are usually regular—that is, they affect all words with a particular sound or set of sounds—but this particular change apparently wasn’t; metathesized and unmetathesized forms continued to exist side by side, and sometimes there’s variation even within a manuscript. King Alfred the Great’s translation of Boethius’s Consolation of Philosophy switches freely between the two: “Þæt is þæt ic þé ær ymb acsade. . . . Swa is ðisse spræce ðe ðu me æfter ascast.” This is pretty weird. When a change is beginning to happen, there may be some variation among words or among speakers, but variation between different forms of a word used by the same speaker is highly unusual.

As for the second question, it’s not entirely clear how or why ask came back. At first glance, it would seem that ask must have survived in other dialects and started to crop back up in written works during the Middle English Period. Or perhaps ax simply remetathesized and became ask again. But it can’t be quite that simple, because /sk/ regularly palatalized to /ʃ/ (the “sh” sound) during the Old English period. You can see the effects of this change in cognate pairs like shirt (from Old English) and skirt (from Old Norse) or ship (from Old English) and skipper (from Middle Dutch).

It’s not entirely clear when this palatalization of /sk/ to /ʃ/ happened, but it must have been sometime after the Angles and Saxons left mainland Europe (starting in the 400s or 500s) but before the Viking invasions beginning in the 800s, because Old Norse words borrowed into English retain /sk/ where English words did not. If palatalization had occurred after the influx of words from Old Norse, we’d say shy and shill instead of sky and skill.

One thing that makes it hard to pin down the date of this change is that /sk/ was originally spelled sc, and the sc spelling continued to be used even after palatalization must have happened. That means that words like ship and fish were spelled like scip and fisc. Thus a form with sc is ambiguous—we don’t know for certain if it was pronounced /sk/ or /ʃ/, though we can infer from other evidence that by the time most Old English documents were being created, sc represented /ʃ/. (Interestingly, this means that in the quote from Alfred the Great, the two forms would have been pronounced ax-ade and ash-ast.) It wasn’t until Middle English that scribes began using spellings like sch, ssh, or sh to distinguish /ʃ/ from the /sk/ combination.

If ask had simply survived in some dialect of Old English without metathesizing, it should have undergone palatalization and resulted in the modern-day form ash. As I said above, we do occasionally see ash in Middle English, which means that this did happen in some dialects of Old English. But this was never even the dominant form—it just pops up every now and then in the South West and West Midlands regions of England from the 1200s down to about 1500, when it finally dies out.

One other option is that the original ask metathesized to ax, missed out on palatalization, and then somehow metathesized back to ask. There may be some evidence for this option, because some other words seem to have followed the same route. For instance, words like flask and tusk appear in Old English as both flasce/flaxe and tusc/tux. But flask didn’t survive Old English—the original word was lost, and it was reborrowed from Romance languages in the 1500s—so we don’t know for sure if it was pronounced with /sk/ or /ʃ/ or both. Tusk appears in some dialects as tush, so we have the same three-way /sk/–/ks/–/ʃ/ alternation as ask.

But while ash meaning the powdery residue shows the same three-way variation, ash meaning the kind of tree does not—it’s always /ʃ/. Ask, ash (the residue), and ash (the tree) all would have had /sk/ in the early stages of Old English, so why did one of them simply palatalize while the other two showed a three-way variation before settling on different forms? If it was a case of remetathesis that turned /ks/ back into /sk/, then why weren’t other words that originally ended in /ks/ affected by this second round of metathesis? And if /ks/ had turned back into /sk/ at some point, then why didn’t ax ‘a tool for chopping’ thus become ask? Honestly, I have no idea.

If those changes happened in that order (first metathesis, then palatalization, then a second round of metathesis), then we should expect to see /ask/ for the questioning word, the tree, and the tool. But there’s no way to reorder these rules to get the proper outputs for all three. Putting palatalization before metathesis gets us the proper output for the tree but also gives us ash for the questioning word, and putting a second round of metathesis at the end gets us the proper output for the questioning word but gives us ask for the chopping tool. And any way you rearrange them, you should never see multiple outputs for the same word, all apparently the products of different rules or at least different rule orderings, used in the same dialects or even by the same speakers.
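To make the rule-ordering problem concrete, here’s a toy sketch (my own illustration with deliberately simplified spellings, not something from the post’s sources) that applies the three changes in every possible order and checks the results against the attested modern forms:

```python
from itertools import permutations

# Simplified versions of the three sound changes, treating words as plain strings
# ("sh" stands in for the palatal /ʃ/). The rules and spellings are deliberately
# crude; they're just meant to illustrate the ordering argument.
def metathesis(w):        # West Saxon /sk/ > /ks/
    return w.replace("sk", "ks")

def palatalization(w):    # Old English /sk/ > /ʃ/
    return w.replace("sk", "sh")

def remetathesis(w):      # a hypothetical later /ks/ > /sk/
    return w.replace("ks", "sk")

rules = {
    "metathesis": metathesis,
    "palatalization": palatalization,
    "re-metathesis": remetathesis,
}

# Early Old English inputs and the modern forms we'd need to derive. Note that
# the questioning word and the tree start with the same /sk/ shape, which is
# exactly why a single deterministic cascade can't send them to different outcomes.
inputs  = {"ask (question)": "ask", "ash (tree)": "ask", "ax (tool)": "aks"}
targets = {"ask (question)": "ask", "ash (tree)": "ash", "ax (tool)": "aks"}

for order in permutations(rules):
    outputs = {}
    for word, form in inputs.items():
        for rule in order:
            form = rules[rule](form)
        outputs[word] = form
    verdict = "matches all three" if outputs == targets else "fails"
    print(" > ".join(order), outputs, verdict)
```

Every one of the six orderings fails for at least one of the three words, which is the point: no single sequence of these rules yields ask, ash, and ax all at once.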

So how do we explain this?

¯\_(ツ)_/¯

Maybe the sound changes happened in different orders in different parts of England, and those different dialects then borrowed forms from each other. Maybe some forms were borrowed from or influenced by the Vikings. Maybe there were several other intermediate rules that I’m missing, and those rules interacted in some strange ways. At any rate, the pronunciation ax for ask had a long and noble tradition before it was relegated to dialectal use about four hundred years ago. But who knows—there’s always a chance it could become standard again in the future.

Sources

Hayes, Bruce, Robert Kirchner, and Donca Steriade, eds., Phonetically Based Phonology (New York: Cambridge University Press, 2004), 138–139.
Lass, Roger, Old English: A Historical Linguistic Companion (New York: Cambridge University Press, 1994), 58–59.
Ringe, Don, and Ann Taylor, The Development of Old English (Oxford: Oxford University Press, 2014), 203–207.


A Rule Worth Giving Up On

A few weeks ago, the official Twitter account for the forthcoming movie Deadpool tweeted, “A love for which is worth killing.” Name developer Nancy Friedman commented, “There are some slogans up with which I will not put.” Obviously, with a name like Arrant Pedantry, I couldn’t let that slogan pass by without comment.

The slogan is obviously attempting to follow the old rule against stranding prepositions. Prepositions usually come before their complements, but there are several constructions in English in which they’re commonly stranded, or left at the end without their complements. Preposition stranding is especially common in speech and informal writing, whereas preposition fronting (or keeping the preposition with its complement) is more typical of a very formal style. For example, you’d probably say Who did you give it to? when talking to a friend, but in a very formal situation, you might move that preposition up to the front: To whom did you give it?

This rule has been criticized and debunked countless times, but even if you believe firmly in it, you should recognize that there are some constructions where you can’t follow it. That is, following the rule sometimes produces sentences that are stylistically bad if not flat-out ungrammatical. The following constructions all require preposition stranding:

  1. Relative clauses introduced by that. The relative pronoun that cannot come after a preposition, which is one reason why some linguists argue that it’s really a conjunction (a form of the complementizer that) and not a true pronoun. You can’t say There aren’t any of that I know—you have to use which instead or leave the preposition at the end—There aren’t any that I know of.
  2. Relative clauses introduced with an omitted relative. As with the above example, the preposition in There aren’t any I know of can’t be fronted. There isn’t even anything to put it in front of, because the relative pronoun is gone. This should probably be considered a subset of the first item, because the most straightforward analysis is that relative that is omissible while other relatives aren’t. (This is another reason why some consider it not a true pronoun but rather a form of the complementizer that, which is often omissible.)
  3. The fused relative construction. When you use what, whatever, or whoever as a relative pronoun, as in the U2 song “I Still Haven’t Found What I’m Looking For”, the preposition must come at the end. Strangely, Reader’s Digest once declared that the correct version would be “I Still Haven’t Found for What I’m Looking”. But this is ungrammatical, because “what” cannot serve as the object of “for”. For the fronted version to work, you have to reword it to break up the fused relative: “I Still Haven’t Found That for Which I’m Looking”.
  4. A subordinate interrogative clause functioning as the complement of a preposition. The Cambridge Grammar of the English Language gives the example We can’t agree on which grant we should apply for. The fronted form We can’t agree on for which grant we should apply sounds stilted and awkward at best.
  5. Passive clauses where the subject has been promoted from an object of a preposition. In Her apartment was broken into, there’s no way to reword the sentence to avoid the stranded preposition, because there’s nothing to put the preposition in front of. The only option is to turn it back into an active clause: Someone broke into her apartment.
  6. Hollow non-finite clauses. A non-finite clause is one that uses an infinitive or participial form rather than a tensed verb, so it has no overt subject. A hollow non-finite clause is also missing some other element that can be recovered from context. In That book is too valuable to part with, for example, the hollow non-finite clause is to part with. With is missing a complement, which makes it hollow, though we can recover its complement from context: that book. Sometimes you can flip a hollow non-finite clause around and insert the dummy subject it to put the complement back in its place. It’s too valuable to part with that book doesn’t really work, though It’s worth killing for a love is at least grammatical. It’s worth killing for this love is better, but in this case A love worth killing for is still stylistically preferable. But the important thing to note is that since the complement of the preposition is missing, there’s nowhere to move the preposition to. It has to remain stranded.

And that’s where the Deadpool tweet goes off the rails. Rather than leave the preposition stranded, they invent a place for it by inserting the completely unnecessary relative pronoun which. But A love for which worth killing sounds like caveman talk, so they stuck in the similarly unnecessary is: A love for which is worth killing. They’ve turned the non-finite clause into a finite one, but now it’s missing a subject. They could have fixed that by inserting a dummy it, as in A love for which it is worth killing, but they didn’t. The result is a completely ungrammatical mess, but one that sounds just sophisticated enough, thanks to its convoluted syntax, that it might fool some people into thinking it’s some sort of highfalutin form. It’s not.

Instead, it’s some sort of hideous freak, the product of an experiment conducted by people who didn’t fully understand what they were doing, just like Deadpool himself. Unlike Deadpool, though, this sentence doesn’t have any superhuman healing powers. If you ever find yourself writing something like this, do the merciful thing and put it out of its misery.


Historic, Historical

My brother recently asked me how to use pairs of words like historic/historical, mathematic/mathematical, and problematic/problematical. The typical usage advice is pretty straightforward—use historic to refer to important things from history and historical to refer to anything having to do with past events, important or not—but the reality of usage is a lot more complicated.

According to the Oxford English Dictionary, historic was first used as an adjective meaning “related to history; concerned with past events” in 1594. In 1610, it appeared in the sense “belonging to, constituting, or of the nature of history; in accordance with history”; sometimes this was contrasted with prehistoric. It wasn’t until 1756 that it was first used to mean “having or likely to have great historical importance or fame; having a significance due to connection with historical events.” The first edition of the OED called this “the prevailing current sense”, but the current edition notes that the other senses are still common.

The history of historical isn’t much clearer. It first appeared as an adjective meaning “belonging to, constituting, or of the nature of history; in accordance with history” (sense 2 of historic) in 1425, though there aren’t any more citations in this sense until the mid- to late 1500s, about the same time that historic began to be used in this sense. Also in 1425, it appeared in the sense of a work that is based on or depicts events from history, though this sense also didn’t appear again until the late 1500s. In the broader sense “related to history; concerned with past events”, it appeared in 1521, several decades before historic appeared in this sense.

In other words, both of these words have been used in essentially all of these senses for all of their history, and they both appeared around the same time. It’s not as if one clearly came first in one sense and the other clearly came first in the other sense. There is no innate distinction between the two words, though a distinction has begun to emerge over the last century or so of use.

Other such pairs are not much clearer. The OED gives several senses for mathematical beginning in 1425, and for mathematic it simply says, “= mathematical adj. (in various senses)”, with citations beginning in 1402. Apparently they are interchangeable. Problematic/problematical seem to be interchangeable as well, though problematical is obsolete in the logic sense.

But rather than go through every single pair, I’ll just conclude with what Grammarist says on the topic:

There is no rule or consistent pattern governing the formation of adjectives ending in -ic and -ical. . . .

When you’re in doubt about which form is preferred or whether an -ic/-ical word pair has differentiated, the only way to know for sure is to check a dictionary or other reference source.


The Atlantic Is Wrong about Dog Pants

While on my Christmas vacation, I came across this article in the Atlantic on the question of what proper dog pants should look like: the image on the left, or the image on the right.

[Image: the dog pants meme, showing four-legged pants on the left and two-legged pants on the right]

The image originally came from a Facebook page called Utopian Raspberry—Modern Oasis Machine (UR-MOM), and from there it hit Twitter and various other sites. One Twitter poll found that over 80 percent of respondents favored the two-legged version.

But Robinson Meyer at the Atlantic insisted that these people were all wrong, and he’d prove it with linguistics. This is certainly a laudable goal, but his argument quickly goes off the rails. After insisting that “words mean things”, as if that were in dispute, Meyer asserts that pants cover all your legs. Humans have two legs, so our pants have two legs. Dogs have four legs, so dog pants should have four legs. QED. What’s left to discuss?

Well, a lot. Even though it’s clear that words mean things, it’s a lot less clear what words mean and how we know what they mean. Semantics is a notoriously tricky field, and there are a lot of competing theories of semantics, each with its own set of problems. There are truth-conditional theories, conceptual theories, Platonist theories, structuralist theories, and more.

But rather than get bogged down in theoretical approaches to semantics (which, frankly, were never my strong suit), let’s take a more practical approach to answering that fundamental question What are pants? by thinking about what features make something pants or not pants. We’ll ignore the question of dog pants in particular for the moment and focus on people pants.

Meyer says that pants cover all your legs, but we could also say that pants cover both your legs or maybe just that they cover your two hind limbs, and we would still wind up at the same place—two-legged pants. It’s not obviously true that pants must cover all your legs. In fact, it’s rather strange to say that pants cover all your legs, because that phrasing seems to assume that you might have more than two. You’d only phrase the definition this way if you anticipated that it might have to cover four-legged dog pants, which is begging the question. The more parsimonious definition would say that you only have to cover both legs.

Meyer’s definition is also missing one important part: the butt. Pants don’t just cover your legs—they cover your body from the waist down. Depending on where they stop, you might call them shorts, Capris, pedal pushers, or just pants. But if they don’t cover your butt, they’re not pants—they’re hose or leggings or some such, though nowadays these usually go up to the waist too. (I’m using the term waist a little loosely, since it’s technically the point halfway between the hips and the ribs, and most pants sit somewhere around the hips.) So at least when it comes to humans, pants cover most or all of the pelvic area plus at least some of the legs.

Note that underpants don’t necessarily cover any of the legs, and some shorts cover barely any of the legs at all. So if you want a definition that covers underwear too, covering the butt is actually more crucial than covering the legs. Even if we assume for the sake of discussion that we’re not including underwear, simply covering the legs is not sufficient. And covering all legs, regardless of number, is obviously not necessary. We could even say that pants cover the lower or rear part of the body, starting at the hips and ending somewhere below the butt, with separate parts for each leg (to differentiate pants from skirts or dresses).

Now let’s move on to dog pants. As far as I know, the word waist isn’t usually applied to animals other than humans, though dogs still have hips and ribs. So applying the minimum definition of covering the pelvic region and at least part of the two hind limbs, the correct version is clearly the one on the right. Strangely, Meyer says that we already have a term for the image on the right, and it’s shorts, because shorts cover only some of your legs. But this is playing really fast and loose with the definition of some. Shorts cover some part of each leg, not all of some leg. And anyway, shorts are simply a subset of pants, so if the image on the right is shorts, then it’s also pants.

The one on the left covers not just the legs but also the entire ventral side of the torso, which pants don’t normally do. Even overalls cover only part of the front of the torso, and they don’t cover the forelimbs. The closest term we have for something like the image on the left is jumpsuit, but it’s a backless, buttless jumpsuit. The image on the left makes sense as pants only if you’re fixated on covering all legs rather than just two and don’t mind omitting necessary features of pants while adding some unnecessary ones. Not only that, it’s not a very practical garment—as Jay Hathaway at New York Magazine points out, it wouldn’t even stay up unless you have some sort of suspenders going side to side over the back.

And this points out the real flaw in Meyer’s argument. He says that humans wear two-legged pants because we have two legs, but this isn’t really true. We probably wouldn’t wear four-legged pants if we had four legs, because it doesn’t make sense to design an article of clothing like that. Consider the fact that we don’t design clothes differently for babies just because they crawl on all fours.

Pants have nothing to do with which limbs we stand on and everything to do with how we’re shaped. We wear one article of clothing to cover the top halves of our bodies and another to cover the bottom halves because it’s easy to pull one article over the top and one over the bottom. Dogs aren’t shaped that differently from us, so when we make clothes for dogs, we make them the same way. Pants just happen to cover two legs on people because our two hind limbs just happen to be legs.

Besides, the whole question is moot because dog pants already exist, and they’re of the two-legged variety. What should we call them if not pants? Insisting that they’re not pants comes as a result of getting hung up on a supposed technical definition and then clinging to that technical definition in the face of all good sense.

And consider this: if dog pants have four legs and no back or butt, what would a dog shirt look like?


How to Use Quotation Marks

In my work as a copyeditor, one of the most common style errors I see is the overuse of quotation marks. Of course quotation marks should be used to set off quotations, but some writers have a rather expansive notion of what quotation marks should be used for, sprinkling them liberally throughout a document on all kinds of words that aren’t quotations. In the editing world, these are known as scare quotes, and some days it seems like I need a machete to hack through them all.

On one such day, I decided to channel my frustration into a snarky flowchart, which I posted on Twitter. It was apparently a hit, and I thought it might be helpful to expand it into a post.

[Image: flowchart for deciding when to use quotation marks]

For the most part, quotation marks are pretty straightforward: they’re used to signal that the text within them is a quote. There are some gray areas, though, that cause an awful lot of consternation. Sometimes the rules vary according to what style guide you follow.

Direct Quotations

This rule is the most clear-cut: use quotation marks for direct quotations, whether the original was spoken or written. Indirect quotations or paraphrases should not be put in quotation marks.

Titles of Works

The second box (which I didn’t think to include in the chart that I posted on Twitter) asks whether you’re referring to the title of a short work. But what exactly is a short work? Here’s what The Chicago Manual of Style says:

Chicago prefers italics to set off the titles of major or freestanding works such as books, journals, movies, and paintings. This practice extends to cover the names of ships and other craft, species names, and legal cases. Quotation marks are usually reserved for the titles of subsections of larger works—including chapter and article titles and the titles of poems in a collection. Some titles—for example, of a book series or a website, under which any number of works or documents may be collected—are neither italicized nor placed in quotation marks.

The MLA and APA style guides give similar rules. So if the title of the work is part of a larger work (such as a song in an album or an article in a magazine), then it goes in quotation marks. Most other titles get italicized. However, there’s an exception in Chicago and MLA: titles of unpublished works (for example, speeches, manuscripts, or unpublished theses or dissertations) get quotation marks regardless of length. AP style, on the other hand, does not use italics—all titles are put in quotation marks. This comes from a limitation of news wire services, which could not transmit italic formatting.

Words Used as Words

This is a bit of a gray area. For words used as words—for example, “A lot of people hate the word moist”—Chicago says that you can use either italics or quotation marks, but italics are the traditional choice. However, it adds that quotation marks may be more appropriate when the word is an actual quotation or when it’s necessary to distinguish between a word and its translation or meaning. Chicago provides these examples:

The Spanish verbs ser and estar are both rendered by “to be.”
Many people say “I” even when “me” would be more correct.

Both APA and MLA prescribe italics for key terms and words used as words.

Scare Quotes

Most abuses of quotation marks fall under the broad, nebulous label of scare quotes. Many writers put terms in quotation marks to indicate that they’re nonstandard, colloquial, or slang or that the term is being used ironically or under some sort of duress. MLA allows the use of quotation marks for “a word or phrase given in someone else’s sense or in a special sense or purposefully misused” (postmodernists in particular seem to love scare quotes), but Chicago and APA discourage or limit their use.

APA says that you should use quotation marks for the first instance of a term “used as an ironic comment, as slang, or as an invented or coined expression” and leave them off thereafter. After describing their use, Chicago says that “like any such device, scare quotes lose their force and irritate readers if overused.”

But even allowing for limited use of scare quotes, I have a hard time seeing what’s ironic, slang, or special about the senses of the terms in scare quotes below. All of these came from a text I recently edited, and these examples are fairly representative of how many writers use scare quotes.

A note to “skim” a chapter
selections that don’t give a “whole picture”
additional dialogue “beyond the text.”
topics from the supplemental material are not “fair game”
a helpful “tool” for understanding

It’s hard to even make a generalization about what all these uses have in common. Some are a little colloquial (which is not the same thing as slang), some are idioms or other fixed expressions, and some are simply nonliteral. But what about “skim”? There’s nothing scare-quote-worthy about that. It’s just a normal word being used the normal way.

And even though major style guides allow for the use of scare quotes, it’s important to ask yourself if you really need them. Just because you can use them doesn’t mean you should. It’s usually clear from the context whether a word is being used ironically or in some special sense, and slang is similarly obvious. And along those lines, both MLA and Chicago say that you don’t need quotation marks when you introduce a term with the phrase so-called. (APA doesn’t say anything one way or the other.) That phrase does the work for you; adding scare quotes on top of it is a belt-and-suspenders approach.

Emphasis

Scare quotes quickly shade into more emphatic uses, where the purpose is not to signal irony or special use but to simply draw attention to the word or phrase. But if you misuse scare quotes this way, not only do you risk irritating the reader, but you risk sending the wrong message altogether, as in this example spotted by Bill Walsh:

There’s an entire blog dedicated to such unintentionally ironic uses of quotation marks. They’ve even been mocked by no less than Strong Bad himself. But most importantly, if you’re writing for publication, no major style guides allow this sort of use. In short: don’t use quotation marks for emphasis.

Other Uses

Sometimes it’s really not clear what quotation marks are being used for. In this example from the “Blog” of “Unnecessary” Quotation Marks, how are the quotation marks being used? Literally? Ironically? Emphatically?

Whatever the intent may have been, it’s clear that they’re not needed here. They’re just adding visual clutter and distracting from the real message.
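Pulling the chart’s main questions together, here’s a rough sketch of the decision logic (my own paraphrase of the sections above, not a reproduction of the actual flowchart or of any style guide’s exact wording):

```python
def quotation_marks_or_not(kind: str) -> str:
    """A rough, Chicago-leaning paraphrase of the flowchart's decision logic."""
    if kind == "direct quotation":
        return "quotation marks"
    if kind == "title of a short work":    # article, chapter, song, poem
        return "quotation marks (AP puts all titles in quotation marks)"
    if kind == "title of a long work":     # book, journal, movie, album
        return "italics"
    if kind == "word used as a word":
        return "italics, or quotation marks in certain cases"
    if kind in ("irony", "slang", "coined term"):
        return "scare quotes, sparingly, if at all"
    if kind == "emphasis":
        return "no quotation marks; use italics or rewrite"
    return "no quotation marks"

print(quotation_marks_or_not("emphasis"))  # -> no quotation marks; use italics or rewrite
```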

Conclusion

When it comes to uses beyond signaling direct quotations, you’ll probably want to refer to whatever style guide is appropriate in your field. But keep in mind that quotation marks have few legitimate uses outside of quotations and certain kinds of titles. Even though most style guides allow for some use of scare quotes, in my opinion as a writer and editor, it’s best to use them sparingly if they’re to be used at all. Keep the hand-holding to a minimum and let your words speak for themselves.


The Drunk Australian Accent Theory

Last week a story started making the rounds claiming that the Australian accent is the result of an “alcoholic slur” from heavy-drinking early settlers. Here’s the story from the Telegraph, which is where I first saw it. The story has already been debunked by David Crystal and others, but it’s still going strong.

The story was first published in the Age by Dean Frenkel, a lecturer in public speaking and communications at Victoria University. Frenkel says that the early settlers frequently got drunk together, and their drunken slur began to be passed down to the rising generations.

Frenkel also says that “the average Australian speaks to just two thirds capacity—with one third of our articulator muscles always sedentary as if lying on the couch”. As evidence, he lists these features of Australian English phonology: “Missing consonants can include missing ‘t’s (Impordant), ‘l’s (Austraya) and ‘s’s (yesh), while many of our vowels are lazily transformed into other vowels, especially ‘a’s to ‘e’s (stending) and ‘i’s (New South Wyles) and ‘i’s to ‘oi’s (noight).”

The first sentence makes it sound as if Frenkel has done extensive phonetic studies on Australians—after all, how else would you know what a person’s articulator muscles are doing?—but the claim is pretty far-fetched. One-third of the average Australian’s articulator muscles are always sedentary? Wouldn’t they be completely atrophied if they were always unused? That sounds less like an epidemic of laziness and more like a national health crisis. But the second sentence makes it clear that Frenkel doesn’t have the first clue when it comes to phonetics and phonology.

There’s no missing consonant in impordant—the [t] sound has simply been transformed into an alveolar flap, [ɾ], which also happens in some parts of the US. This is a process of lenition, in which sounds become more vowel-like, but it doesn’t necessarily correspond to laziness or lax articulator muscles. Austraya does have a missing consonant—or rather, it has a liquid consonant, [l], that has been transformed into the semivowel [j]. This is also an example of lenition, but, again, lenition doesn’t necessarily have anything to do with the force of articulation. Yesh (I presume for yes) involves a slight change in the placement of the tip of the tongue—it moves slightly further back towards the palate—but it has nothing to do with the force of articulation.

The vowel changes have even less to do with laziness. As David Crystal notes in his debunking, the raising of [æ] to [ɛ] in standing actually requires more muscular energy to produce, not less. I assume that lowering the diphthong [eɪ] to [æɪ] in Wales would thus take a little bit less energy, but the raising and rounding of [aɪ] to [ɔɪ] would require a little more. In other words, there is no clear pattern of laziness or laxness. Frenkel simply assumes that there’s a standard for which Australians should be aiming and that anything that misses that standard is evidence of laziness, regardless of the actual effort expended.

Even if it were a matter of laziness, the claim that one-third of the articulator muscles are always sedentary is absolutely preposterous. There’s no evidence that Frenkel has done any kind of research on the subject; this is just a number pulled from thin air based on his uninformed perceptions of Australian phonetics.

And, again, even if his claims about Australian vocal laxness were true, his claims about the origin of this supposed laxness are still pretty tough to swallow. The early settlers passed on a drunken slur to their children? For that to be even remotely possible, every adult in Australia would have had to be drunk literally all the time, including new mothers. If that were true, Australia would be facing a raging epidemic of fetal alcohol syndrome, not sedentary speech muscles.

As far as I know, there is absolutely zero evidence that Australian settlers were ever that drunk, that constant drunkenness can have an effect on children who aren’t drinking, or that the Australian accent has anything in common with inebriated speech.

When pressed, Frenkel attempts to weasel out of his claims, saying, “I am telling you, it is a theory.” But in his original article, he never claimed that it was a theory; he simply asserted it as fact. And strictly speaking, it isn’t even a theory—at best it’s a hypothesis, because he has clearly done no research to substantiate or verify it.

But all this ridiculousness is just a setup for his real argument, which is that Australians need more training in rhetoric. He says,

If we all received communication training, Australia would become a cleverer country. When rhetoric is presented effectively, it enables content to be communicated in a listener-friendly environment, with well-chosen words spoken at a listenable rate and with balanced volume, fluency, clarity and understandability.

Communication training could certainly be a good thing, but again, there’s a problem—this isn’t rhetoric. Rhetoric is the art of discourse and argumentation; what Frenkel is describing is more like diction or elocution. He’s deploying bad logic and terrible linguistics in service of a completely muddled argument, which is that Australians need to learn to communicate better.

In the end, what really burns me about this story isn’t that Frenkel is peddling a load of tripe but that journalists are so eager to gobble it up. Their ignorance of linguistics is disappointing, but their utter credulousness is completely dismaying. And if that weren’t bad enough, in an effort to present a balanced take on the story, journalists are still giving him credence even when literally every linguist who has commented on it has said that it’s complete garbage.

Huffington Post ran the story with the subhead “It’s a highly controversial theory among other academics”. (They also originally called Frenkel a linguist, but this has been corrected.) But calling Frenkel’s hypothesis “a highly controversial theory among other academics” is like saying that alchemy is a highly controversial theory among chemists or that the flat-earth model is a highly controversial theory among geologists. This isn’t a real controversy, at least not in any meaningful way; it’s one uninformed guy spouting off nonsense and a lot of other people calling him on it.

In the end, I think it was Merriam-Webster’s Kory Stamper who had the best response:

[Image: Kory Stamper’s tweeted response]


Overanxious about Ambiguity

As my last post revealed, a lot of people are concerned—or at least pretend to be concerned—about the use of anxious to mean “eager” or “excited”. They claim that since it has multiple meanings, it’s ambiguous, and thus the disparaged “eager” sense should be avoided. But as I said in my last post, it’s not really ambiguous, and anyone who claims otherwise is simply being uncooperative.

Anxious entered the English language in the early to mid-1600s in the sense of “troubled in mind; fearful; brooding”. But within a century, the sense had expanded to mean “earnestly desirous” or “eager”. That’s right—the allegedly new sense of the word was already in use before the United States declared independence.

These two meanings existed side by side until the early 1900s, when usage commentators first decided to be bothered by the “eager” sense. And make no mistake—this was a deliberate decision to be bothered. Merriam-Webster’s Dictionary of English Usage includes this anecdote from Alfred Ayres in 1901:

Only a few days ago, I heard a learned man, an LL.D., a dictionary-maker, an expert in English, say that he was anxious to finish the moving of his belongings from one room to another.

“No, you are not,” said I.

“Yes, I am. How do you know?”

“I know you are not.”

“Why, what do you mean?”

“There is no anxiety about it. You are simply desirous.”

Ayres’s correction has nothing to do with clarity or ambiguity. He obviously knew perfectly well what the man meant but decided to rub his nose in his supposed error instead. One can almost hear his self-satisfied smirk as he lectured a lexicographer—a learned man! a doctor of laws!—on the use of the language he was supposed to catalog.

A few years later, Ambrose Bierce also condemned this usage, saying that anxious should not be used to mean “eager” and that it should not be followed by an infinitive. As MWDEU notes, anxious is typically used to mean “eager” when it is followed by an infinitive. But it also says that it’s “an oversimplification” to say that anxious is simply being used to mean “eager”. It notes that “the word, in fact, fairly often has the notion of anxiety mingled with that of eagerness.” That is, anxious is not being used as a mere synonym of eager—it’s being used to indicate not just eagerness but a sort of nervous excitement or anticipation.

MWDEU also says that this sense is the predominant one in the Merriam-Webster citation files, but a search in COCA doesn’t quite bear this out—only about a third of the tokens are followed by to and are clearly used in the “eager” sense. Google Books Ngrams, however, shows that to is by far the most common word that immediately follows anxious; that is, people are anxious to do something far more often than they’re anxious about something.
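If you want to poke at this kind of pattern yourself, here’s a minimal sketch (assuming you’ve saved some plain text locally as corpus.txt, which is just a placeholder name) that counts the words immediately following anxious:

```python
import re
from collections import Counter

# Tokenize a locally saved plain-text corpus (corpus.txt is a placeholder name).
with open("corpus.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z']+", f.read().lower())

# Count the word that immediately follows each occurrence of "anxious".
following = Counter(
    tokens[i + 1] for i, tok in enumerate(tokens[:-1]) if tok == "anxious"
)

total = sum(following.values())
for word, count in following.most_common(10):
    print(f"anxious {word:<12} {count:5d}  ({count / total:.0%})")
```

It’s no substitute for COCA or the Google Books Ngram Viewer, but it’s a quick way to eyeball a collocation pattern in whatever text you have on hand.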

This didn’t stop one commenter from claiming that not only is this use of anxious confusing, but she’d literally never encountered it before. It’s hard to take such a claim seriously when this use is not only common but has been common for centuries.

It’s also hard to take seriously the claim that it’s ambiguous when nobody can manage to find an example that’s actually ambiguous. A few commenters offered made-up examples that seemed designed to be maximally ambiguous when presented devoid of context. They also ignored the fact that the “eager” sense is almost always followed by an infinitive. That is, as John McIntyre pointed out, no English speaker would say “I was anxious upon hearing that my mother was coming to stay with us” or “I start a new job next week and I’m really anxious about that” if they meant that they were eager or excited.

Another commenter seemed to argue that the problem was that language was changing in an undesirable way, saying, “It’s clearly understood that language evolves, but some of us might prefer a different or better direction for that evolution. . . . Is evolution the de facto response for any misusage in language?”

But this comment has everything backwards. Evolution isn’t the response to misuse—claims of misuse are (occasionally) the response to evolution. The word anxious changed in a very natural way, losing some of its negative edge and being used in a more neutral or positive way. The same thing happened to the word care, which originally meant “to sorrow or grieve” or “to be troubled, uneasy, or anxious”, according to the Oxford English Dictionary. Yet nobody complains that everyone is misusing the word today.

That’s because nobody ever decided to be bothered by it as they did with anxious. The claims of ambiguity or undesired language change are all post hoc; the real objection to this use of anxious was simply that someone decided on the basis of etymology—and in spite of established usage—that it was wrong, and that personal peeve went viral and became established in the usage literature.

It’s remarkably easy to convince yourself that something is an error. All you have to do is hear someone say that it is, and almost immediately you’ll start noticing the error everywhere and recoiling in horror every time you encounter it. And once the idea that it’s an error has become lodged in your brain, it’s remarkably difficult to dislodge it. We come up with an endless stream of bogus arguments to rationalize our pet peeves.

So if you choose to be bothered by this use of anxious, that’s certainly your right. But don’t pretend that you’re doing the language a service.


New Shirts, Now on Sale

To make up for not posting for a few months, I’ve added a few new shirts to the Arrant Pedantry Store. Take a look!

I could care fewer

If you see a design you like but want it on a different shirt or other product, you can use the product designer here.

And through September 1, you can get 15 percent off all orders with the coupon code FAVSHIRT.


This Is Not the Grammatical Promised Land

I recently became aware of a column in the Chicago Daily Herald by the paper’s managing editor, Jim Baumann, who has taken upon himself the name Grammar Moses. In his debut column, he’s quick to point out that he’s not like the real Moses—“My tablets are not carved in stone. Grammar is a fluid thing.”

He goes on to say, “Some of the rules we learned in high school have evolved with us. For instance, I don’t know a lot of people outside of church who still employ ‘thine’ in common parlance.” (He was taught in high school to use thine in common parlance?)

But then he ends—after a rather lengthy windup—with the old shibboleth of using anxious to mean eager. He says that “generally speaking, the word you’re grasping for is ‘eager,’” ending with the admonition, “Write carefully!”

But as Merriam-Webster’s Dictionary of English Usage notes, this rule is an invention in American usage dating to the early 1900s, and anxious had been used to mean eager for 160 years before the rule proscribing this use was invented. They conclude, “Anyone who says that careful writers do not use anxious in its ‘eager’ sense has simply not examined the available evidence.”

Not a good start for a column that aims for a grammatical middle ground.

And Baumann certainly seems to think he’s aiming for the middle ground. In a later column, he says, “Grammarians fall along a spectrum. There are the fundamentalists, who hold their 50-year-old texts as close to their bosoms as one might a Bible. There are the libertines, who believe that if it feels or sounds right, use it. . . . You’ll find me somewhere in the middle.” He again insists that he’s not a grammar fundamentalist before launching into more invented rules: the supposed misuse of like to mean “such as” or “including” and feel to mean “think”.

He says, “If you listen to a car dealer’s pitch that a new SUV has features like anti-lock brakes and a deluxe stereo, do you really know what you’re getting? Nope. Because ‘like’ means similar to, but not the same.” The argument here is simple, straightforward, and completely wrong.

First, it assumes an overly narrow definition of like. Second, it pretends complete ignorance of any meaning outside of that narrow definition. If a car salesperson tells you that a new SUV has features like anti-lock brakes and a deluxe stereo, you know exactly what you’re getting. In technical terms, pretending that you don’t understand someone is called engaging in uncooperative communication. In layman’s terms, it’s called being an ass.

And yet, strangely, Baumann promotes this rule on the basis of clarity. He says that if something is clear to 9 out of 10 readers, then it’s acceptable, but if you can write something that’s clear to all your readers, then that’s even better. While it’s certainly a good idea to make sure your writing is clear to everyone, I’m also fairly certain that no one would be legitimately confused by “features like anti-lock brakes”. Merriam-Webster’s Dictionary of English Usage doesn’t have much to say on the subject, but it lists several examples and says, “In none of the examples that follow can you detect any ambiguity of meaning.” The supposed lack of clarity simply isn’t there.

Baumann ends by saying, “The lesson is: Think about whom you’re talking to and learn to appreciate his or her or their sensitivities. Then you will achieve clarity.” The problem is that we don’t really know who our readers are and what their sensitivities are. Instead we simply internalize new rules that we learn, and then we project them onto a sort of perversely idealized reader, one who is not merely bothered by such alleged misuses but is impossibly confused by them. How do we know that they’re really confused—or even just irritated—by like to mean “such as” or “including”? We don’t. We just assume that they’re out there and that it’s our job to protect them.

My advice is to try to be as informed as possible about the rules. Be curious, and be willing to question not just others’ claims about the language but also your own assumptions. Read a lot, and pay attention to how good writing works. Get a good usage dictionary and use it. And don’t follow Grammar Moses unless you like wandering in the grammatical wilderness.
