Arrant Pedantry

More at Visual Thesaurus

In case you haven’t been following me on Twitter or elsewhere, I’m the newest regular contributor to Visual Thesaurus. You can see my contributor page here. My latest article, “Orwell and Singular ‘They’”, grew out of an experience I had last summer as I was writing a feature article on singular they for Copyediting. I cited George Orwell in a list of well-regarded authors who reportedly used singular they, and my copyeditor queried me on it. She wanted proof.

I did some research and made a surprising discovery: the alleged Orwell quotation in Merriam-Webster’s Dictionary of English Usage wasn’t really from Orwell. But if you want to know the rest, you’ll have to read the article. (It’s for subscribers only, but a subscription is only $19.95 per year.)

But if you’re not the subscribing type, don’t worry: I’ll have a new post up today or tomorrow on the oft-maligned construction reason why.

The Enormity of a Usage Problem

Recently on Twitter, Mark Allen wrote, “Despite once being synonyms, ‘enormity’ and ‘enormousness’ are different. Try to keep ‘enormity’ for something evil or outrageous.” I’ll admit right off that this usage problem interests me because I didn’t learn about the distinction until a few years ago. To me, they’re completely synonymous, and the idea of using enormity to mean “an outrageous, improper, vicious, or immoral act” and not “the quality or state of being huge”, as Merriam-Webster defines it, seems almost quaint.

Of course, such usage advice presupposes that people are using the two words synonymously; if they weren’t, there’d be no reason to tell them to keep the words separate, so the assertion that they’re different is really an exhortation to make them different. Given that, I had to wonder how different they really are. I turned to Mark Davies’ Corpus of Contemporary American English (COCA) to get an idea of how often enormity is used in the sense of great size rather than outrageousness or immorality. I looked at the first hundred results from the keyword-in-context option, which randomly samples the corpus, and tried to determine which of the four Merriam-Webster definitions was being used. For reference, here are the four definitions:

1 : an outrageous, improper, vicious, or immoral act ⟨enormities of state power — Susan Sontag⟩ ⟨enormities too juvenile to mention — Richard Freedman⟩
2 : the quality or state of being immoderate, monstrous, or outrageous; especially : great wickedness ⟨enormity of the crimes committed during the Third Reich — G. A. Craig⟩
3 : the quality or state of being huge : immensity ⟨enormity of the universe⟩
4 : a quality of momentous importance or impact ⟨enormity of the decision⟩

In some cases it was a tough call; for instance, when someone writes about the enormity of poverty in India, enormity has a negative connotation, but it doesn’t seem right to substitute a word like monstrousness or wickedness. It seems that the author simply means the size of the problem. I tried to use my best judgement based on the context the corpus provides, but in some cases I weaseled out by assigning a particular use to two definitions. Here’s my count:

1: 1
2: 19
2/3: 3
3: 67
3/4: 1
4: 9
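For anyone curious how those numbers shake out, the bookkeeping can be sketched in a few lines of Python. The counts are the ones above; the `sense_share` helper is my own illustrative name, not anything from COCA:

```python
# Tally of the 100 sampled COCA concordance lines, keyed by which
# Merriam-Webster sense(s) each use of "enormity" seemed to fit.
# A label like "2/3" means a use was assigned to two senses at once.
counts = {"1": 1, "2": 19, "2/3": 3, "3": 67, "3/4": 1, "4": 9}

def sense_share(sense, counts):
    """Number of sampled lines touching a given sense, counting
    split assignments (e.g. "2/3") toward both senses."""
    return sum(n for label, n in counts.items() if sense in label.split("/"))

total = sum(counts.values())                                   # 100 samples
wicked = sense_share("1", counts) + sense_share("2", counts)   # senses 1 and 2
huge = sense_share("3", counts)                                # "enormousness"

print(f"{wicked}/{total} wickedness, {huge}/{total} great size")
# → 23/100 wickedness, 71/100 great size
```

Counting the split assignments toward both senses, the "wickedness" senses cover just under a quarter of the sample, and the "great size" sense roughly three times that.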

By far the most common use is in the sense of “enormousness”; the supposedly correct senses of great wickedness (definitions 1 and 2) are used just under a quarter of the time. So why did Mr. Allen say that enormity and enormousness were once synonyms? Even the Oxford English Dictionary marks the “enormousness” sense as obsolete and says, “Recent examples might perh. be found, but the use is now regarded as incorrect.” Perhaps? It’s clear from the evidence that it’s still quite common—about three times as common as the prescribed “monstrous wickedness” sense.

It’s true that the sense of immoderateness or wickedness came along before the sense of great size. The first uses as recorded in the OED are in the sense of “a breach of law or morality” (1477), “deviation from moral or legal rectitude” (1480), “something that is abnormal” (a1513), and “divergence from a normal standard or type” (a1538). The sense of “excess in magnitude”—the one that the OED marks as obsolete and incorrect—didn’t come along until 1792. In all these senses the etymology is clear: the word comes from enorm, meaning “out of the norm”.

As is to be expected, Merriam-Webster’s Dictionary of English Usage has an excellent entry on the topic. It notes that many of the uses of enormity considered objectionable carry shades of meaning or connotations not shown by enormousness:

Quite often enormity will be used to suggest a size that is beyond normal bounds, a size that is unexpectedly great. Hence the notion of monstrousness may creep in, but without the notion of wickedness. . . .

In many instances the notion of great size is colored by aspects of the first sense of enormity as defined in Webster’s Second. One common figurative use blends together notions of immoderateness, excess, and monstrousness to suggest a size that is daunting or overwhelming.

Indeed, it’s the blending of senses that made it hard to categorize some of the uses that I came across in COCA. Enormousness does not seem to be a fitting replacement for those blended or intermediate senses, and, as MWDEU notes, it’s never been a popular word anyway. Interestingly, MWDEU also notes that “the reasons for stigmatizing the size sense of enormity are not known.” Perhaps it became rare in the 1800s, when the OED marked it obsolete, and the rule was created before the sense enjoyed a resurgence in the twentieth century. Whatever the reason, I don’t think it makes much sense to condemn the more widely used sense of a word just because it’s newer or was rare at some point in the past. MWDEU sensibly concludes, “We have seen that there is no clear basis for the ‘rule’ at all. We suggest that you follow the writers rather than the critics: writers use enormity with a richness and subtlety that the critics have failed to take account of. The stigmatized sense is entirely standard and has been for more than a century and a half.”

Funner Grammar

As I said in the addendum to my last post, maybe I’m not so ready to abandon the technical definition of grammar. In a recent post on Copyediting, Andrea Altenburg criticized the word funner in an ad for Chuck E. Cheese as “improper grammar”, and my first reaction was “That’s not grammar!”

That’s not entirely accurate, of course, as Matt Gordon pointed out to me on Twitter. The objection to funner was originally grammatical, and the Copyediting post does make an appeal to grammar. The argument goes like this: fun is properly a noun, not an adjective, and as a noun, it can’t take comparative or superlative degrees—no funner or funnest.

This seems like a fairly reasonable argument—if a word isn’t an adjective, it can’t inflect like one—but it isn’t the real argument. First of all, it’s not really true that fun was originally a noun. As Ben Zimmer explains in “Dear Apple: Stop the Funnification”, the noun fun arose in the late seventeenth century and was labeled by Samuel Johnson in the mid-1700s as “‘a low cant word’ of the criminal underworld.” But the earliest citation for fun is as a verb, fourteen years earlier.

As Merriam-Webster’s Dictionary of English Usage
notes, “A couple [of usage commentators] who dislike it themselves still note how nouns have a way of turning into adjectives in English.” Indeed, this sort of functional shift—also called zero derivation or conversion by linguists because it changes a word’s part of speech without prefixation or suffixation—is quite common in English. English lacks case endings and has little in the way of verbal endings, so it’s quite easy to change a word from one part of speech to another. The transformation of fun from a verb to a noun to an inflected adjective came slowly but surely.

As this great article explains, shifts in function or meaning usually happen in small steps. Once fun was established as a noun, you could say things like We had fun. This is unambiguously a noun—fun is the object of the verb have. But then you get constructions like The party was fun. This is structurally ambiguous—both nouns and adjectives can go in the slot after was.

This paves the way to analyze fun as an adjective. It then moved into attributive use, directly modifying a following noun, as in fun fair. Nouns can do this too, so once again the structure was ambiguous, but it was evidence that fun was moving further in the direction of becoming an adjective. In the twentieth century it started to be used in more unambiguously adjectival roles. MWDEU says that this accelerated after World War II, and Mark Davies’ Corpus of Historical American English (COHA) shows that it especially picked up in the last twenty years.

Once fun was firmly established as an adjective, the inflected forms funner and funnest followed naturally. There are only a handful of hits for either in COCA, which attests to the fact that they’re still fairly new and relatively colloquial. But let’s get back to Altenburg’s post.

She says that fun is defined as a noun and thus can’t be inflected for comparative or superlative forms, but then she admits that dictionaries also define fun as an adjective with the forms funner and funnest. But she waves away these definitions by saying, “However, dictionaries are starting to include more definitions for slang that are still not words to the true copyeditor.”

What this means is that she really isn’t objecting to funner on grammatical grounds (at least not in the technical sense); her argument simply reduces to an assertion that funner isn’t a word. But as Stan Carey so excellently argued, “‘Not a word’ is not an argument”. And even the grammatical objections are eroding; many people now simply assert that funner is wrong, even if they accept fun as an adjective, as Grammar Girl says here:

Yet, even people who accept that “fun” is an adjective are unlikely to embrace “funner” and “funnest.” It seems as if language mavens haven’t truly gotten over their irritation that “fun” has become an adjective, and they’ve decided to dig in their heels against “funner” and “funnest.”

It brings to mind the objection against sentential hopefully. Even though there’s nothing wrong with sentence adverbs or with hopefully per se, it was a new usage that drew the ire of the mavens. The grammatical argument against it was essentially a post hoc justification for a ban on a word they didn’t like.

The same thing has happened with funner. It’s perfectly grammatical in the sense that it’s a well-formed, meaningful word, but it’s fairly new and still highly informal and colloquial. (For the record, it’s not slang, either, but that’s a post for another day.) If you don’t want to use it, that’s your right, but stop saying that it’s not a word.

Relative What

A few months ago Braden asked in a comment about the history of what as a relative pronoun. (For my previous posts on relative pronouns, see here.) The history of relative pronouns in English is rather complicated, and the system as a whole is still in flux, partly because modern English essentially has two overlapping systems of relativization.

In Old English, there were a few different ways to create a relative pronoun, as this site explains. One way was to use the indeclinable particle þe, another was to use a form of the demonstrative pronoun (roughly equivalent to modern English that/those), and another was to use a demonstrative or personal pronoun followed by þe. Our modern relative that grew out of the use of demonstrative pronouns, though unlike the Old English demonstratives, that does not decline for gender, number, and case.

In the late Old English and Middle English periods, writers and speakers began to use interrogative pronouns as relative pronouns by analogy with French and Latin. Relative what first appeared in texts that were translations from Latin around 1000 AD, but within a couple of centuries it had apparently been naturalized. Other interrogatives were pressed into service as relatives during this time, including who, which, where, when, why, and how. All of these are still in common use as relatives in Standard English except for what.

It’s important to note that what is still used as a nominal relative, which means that it does not modify another noun phrase but stands in for a noun phrase and a relative simultaneously, as in We fear what we don’t understand. This could be rephrased as We fear that which we don’t understand or We fear the things that we don’t understand, revealing the nominal and the relative.

But while all the other interrogatives have continued as relatives in Standard English, what as a simple relative pronoun is nonstandard today. Simple relative what is found in the works of Shakespeare and the King James Bible, but at some point in the last three or four centuries it fell out of use in the standard dialect. Unfortunately, I’m not really sure when this happened; the Oxford English Dictionary has citations up through 1740 and then one from 1920 that appears to be dialogue from a novel. Merriam-Webster’s Dictionary of English Usage says that in the US, it’s mainly found in rural areas in the Midland and South. As I told Braden in a response to his comment, I’ve heard it used myself. A couple of months ago I heard a man in church pray for “our leaders what guides and directs us”—not just a beautiful example of relative what, but also an interesting example of nonstandard verb agreement.

As for why simple relative what died out in Standard English, I really have no idea. Jonathan Hope noted that it’s rather unusual for Standard English to allow other interrogatives as relatives but not this one.[1] In some ways, relative what would make more sense than relative which, since what is historically part of the same paradigm as who; what comes from the neuter form of the interrogative or indefinite pronoun in Old English, while who comes from the combined masculine/feminine form, as shown here. And as I said in this post, whose was originally the genitive form for both who and what, so allowing simple relative what would make for a rather tidy paradigm.

Perhaps that’s the problem. Hope and others have argued that standardized languages—or perhaps speakers of standardized languages—tend to resist tidy paradigms. Irregularities creep in and are preserved, and they can be surprisingly resistant to change. Maybe someone reading this has a fuller explanation of just how this particular little wrinkle came to be.

  1. Jonathan Hope, “Rats, Bats, Sparrows and Dogs: Biology, Linguistics and the Nature of Standard English,” in The Development of Standard English, 1300–1800, ed. Laura Wright (Cambridge: Cambridge University Press, 2000).

Whose Pronoun Is That?

In my last post I touched on the fact that whose as a relative possessive adjective referring to inanimate objects feels a little strange to some people. In a submission for the topic suggestion contest, Jake asked about the use of that with animate referents (“The woman that was in the car”) and then said, “On the flip side, consider ‘the couch, whose cushion is blue.’ ‘Who’ is usually used for animate subjects. Why don’t we have the word ‘whichs’ for inanimate ones?”

Merriam-Webster’s Dictionary of English Usage (one of my favorite books on language; if you don’t already own it, you should buy it now—seriously) says that inanimate whose has been in use from the fourteenth century to the present but that it wasn’t until the eighteenth century that grammarians like Bishop Lowth (surprise, surprise) started to cast aspersions on its use.

MWDEU concludes that “the notion that whose may not properly be used of anything except persons is a superstition; it has been used by innumerable standard authors from Wycliffe to Updike, and is entirely standard as an alternative to of which the in all varieties of discourse.” Bryan A. Garner, in his Garner’s Modern American Usage, says somewhat more equivocally, “Whose may usefully refer to things ⟨an idea whose time has come⟩. This use of whose, formerly decried by some 19th-century grammarians and their predecessors, is often an inescapable way of avoiding clumsiness.” He ranks it a 5—“universally adopted except for a few eccentrics”—but his tone leaves one feeling as if he thinks it the lesser of two evils.

But how did we end up in this situation in the first place? Why don’t we have a whiches or thats or something equivalent? MWDEU notes that “English is not blessed with a genitive form for that or which”, but to understand why, you have to go back to Old English and the loss of the case system in Early Middle English.

First of all, Old English did not use interrogative pronouns (who, which, or what) as relative pronouns. It either used demonstrative pronouns—whence our modern that is descended—or the invariable complementizer þe, which we’ll ignore for now. The demonstrative pronouns declined for gender, number, and case, just like the demonstrative and relative pronouns of modern German. The important point is that in Old English, the relative pronouns looked like this:

that

Case          Masculine   Neuter    Feminine   Plural
Nominative    se          þæt       sēo        þā
Accusative    þone        þæt       þā         þā
Genitive      þæs         þæs       þǣre       þāra, þǣra
Dative        þǣm         þǣm       þǣre       þǣm, þām
Instrumental  þȳ, þon     þȳ, þon   —          —

(Taken from Wikipedia.org. The þ is a thorn, which represents a “th” sound.)

As the Old English case system disappeared, this all reduced to the familiar that, which you can see comes from the neuter nominative/accusative form. The genitive, or possessive, form was lost. And in Middle English, speakers began to use interrogative pronouns as relatives, probably under the influence of French. Here’s what the Old English interrogative pronouns looked like:

who/what

Case          Masculine/Feminine   Neuter   Plural
Nominative    hwā                  hwæt     hwā/hwæt
Accusative    hwone                hwæt     hwone/hwæt
Genitive      hwæs                 hwæs     hwæs
Dative        hwǣm                 hwǣm     hwǣm
Instrumental  hwȳ                  hwȳ      hwǣm

(Wikipedia didn’t have an article or section on Old English interrogative pronouns, so I borrowed the forms from Wikibooks.)

On the masculine/feminine side, we get the ancestors of our modern who/whom/whose (hwā/hwǣm/hwæs), and on the neuter side, we get the ancestor of what (hwæt). Notice that the genitive forms for the two are the same—that is, although we think of whose as being the possessive form of who, it’s historically also the possessive form of what.

But we don’t use what as a relative pronoun (well, some dialects do, but Standard English doesn’t); we use which instead. Which also had a full paradigm of case endings, just like who/what and that. But rather than bore you with more tables full of weird-looking characters, I’ll cut to the chase: which originally had a genitive form, but it too was lost when the Old English case system disappeared.

So of all the demonstrative and interrogative pronouns in English, only one survived with its own genitive form, who. (I don’t know why who hung on to its case forms while the others lost theirs; maybe that’s a topic for another day.) Speakers quite naturally used whose to fill that gap—and keep in mind that it was originally the genitive form of both the animate and inanimate forms of the interrogative pronoun, so English speakers originally didn’t have any qualms about employing it with inanimate relative pronouns, either.

But what does that mean for us today? Well, on the one hand, you can argue that whose as an inanimate relative possessive adjective has a long, well-established history. It’s been used by the best writers for centuries, so there’s no question that it’s standard. But on the other hand, this ignores the fact that some people think there’s something not quite right about it. After all, we don’t use whose as a possessive form of which or that in their interrogative or demonstrative functions. And although it has a long pedigree, another inanimate possessive with a long pedigree fell out of use and was replaced.

His was originally the possessive form of both he and it, but neuter his started to fall out of use and be replaced by a new form its in the sixteenth century. After English lost grammatical gender, people began to use he and she only for people and other animate things and it only for inanimate things. They started to feel a little uncomfortable using the original possessive form of it, his, with inanimate things, so they fashioned a new possessive, its, to replace it.

In other words, there’s precedent for disfavoring inanimate whose and using another word or construction instead. Unfortunately, a form like thats or whiches will now never get off the ground, because it would be so heavily stigmatized as nonstandard. There are two different impulses fighting one another here: the impulse to have a full and symmetrical paradigm and the impulse to avoid using animate pronouns for inanimate things. Only time will tell which one wins out. For now, I’d say it’s good to remember that inanimate whose is frequently used by good writers and that there’s nothing wrong with it per se. In your own writing, just trust your ear.

10:30 o’clock

My sister-in-law will soon graduate from high school, and we recently got her graduation announcement in the mail. It was pretty standard stuff—a script font in metallic ink on nice paper—but one small detail caught my eye. It says the commencement exercises will take place at “ten-thirty o’clock.” As far as I can remember, I’ve never before heard a rule against using “o’clock” with times other than the hour, but it struck me as wrong.

I checked Merriam-Webster first, but it was no help; all it says is “according to the clock,” though its example sentence is “the time is three o’clock.” I then pulled out my copy of Merriam-Webster’s Dictionary of English Usage, but it didn’t even have an entry for o’clock or clock. So then, because my wife was on the computer and I couldn’t access the OED online, I pulled out my compact OED and magnifying glass to see if it had anything to say.

Once I had flipped to the entry and scanned through the minuscule type, I found this one line: “The hour of the day is expressed by a cardinal numeral, followed by a phrase which was originally of the clock, now only retained in formal phraseology; shortened subsequently to . . . o’clock.” The citations begin with Chaucer and continue up to modern English.

And then, out of curiosity, I checked the Corpus of Contemporary American English, but I couldn’t find any examples of x:30 o’clock. Google, however, turned up plenty of examples, including a thread on Amazon’s Askville asking why you can’t say “11:30 o’clock.” The best explanation there seems to be that since the clock hands aren’t pointing at a specific hour, it can’t be anything-o’clock.
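As an aside, the pattern I was hunting for in the corpus is easy to sketch with a regular expression. This is purely illustrative—the pattern and the sample sentence are my own, not COCA queries:

```python
import re

# Match "H:MM o'clock" — a clock time with minutes followed by "o'clock",
# the form that COCA turned up no hits for. Accepts straight or curly
# apostrophes in "o'clock".
fractional_oclock = re.compile(r"\b\d{1,2}:[0-5]\d\s+o['\u2019]clock")

sample = "Commencement begins at 10:30 o'clock; the reception is at three o'clock."
print(fractional_oclock.findall(sample))
# → ["10:30 o'clock"]
```

Note that plain "three o'clock" doesn’t match, since the pattern requires the colon-and-minutes part.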

This answer doesn’t seem quite satisfying to me—it doesn’t explain why the hour hand has to be pointing directly at a number or why the minute hand doesn’t matter. But then I remembered that clock originally meant “bell” and that early clocks chimed on the hour (well, I suppose some modern clocks do too, but you see where I’m going). Early mechanical clocks were rather large, and most people measured time not by checking the clock face to see where the hands were, but by counting the number of chimes on the hour. So I would assume that this is why it sounds strange to use “o’clock” with fractions of hours. Thoughts, anyone?

How I Became a Descriptivist

Believe it or not, I wasn’t always the grammar free-love hippie that I am now. I actually used to be known as quite a grammar nazi. This was back in my early days as an editor (during my first year or two of college) when I was learning lots of rules about grammar and usage and style, but before I had gotten into my major classes in English language, which introduced me to a much more descriptivist approach.

It was a gradual progression, starting with my class in modern American usage. Our textbook was Merriam-Webster’s Dictionary of English Usage, which is a fantastic resource for anyone interested in editing or the English language in general. The class opened my eyes to the complexities of usage issues and made me realize that few issues are as black-and-white as most prescriptivists would have you believe. And this was in a class in the editing minor of all places.

My classes in the English language major did even more to change my opinions about prescriptivism and descriptivism. Classes in Old English and the history of the English language showed me that although the language has changed dramatically over the centuries, it has never fallen into a state of chaos and decay. There has been clear, beautiful, compelling writing in every stage of the language (well, as long as there have been literate Anglo-Saxons, anyway).

But I think the final straw was annoyance with a lot of my fellow editors. Almost none of them seemed interested in doing anything other than following the strictures laid out in style guides and usage manuals (Merriam-Webster’s Dictionary of English Usage was somehow exempt from reference). And far too often, the changes they made did nothing to improve the clarity, readability, or accuracy of the text. Without any depth of knowledge about the issues, they were left without the ability to make informed judgements about what should be changed.

In fact, I would say that you can’t be a truly great editor unless you learn to approach things from a descriptivist perspective. And in the end, you’re still deciding how the text should be instead of simply talking about how it is, so you haven’t fully left prescriptivism behind. But it will be an informed prescriptivism, based on facts about current and historical usage, with a healthy dose of skepticism towards the rhetoric coming from the more fundamentalist prescriptivists.

And best of all, you’ll find that the sky won’t fall and the language won’t rapidly devolve into caveman grunts just because you stopped correcting all the instances of figurative over to more than. Everybody wins.

One Fewer Usage Error

In my mind, less and fewer illustrates quite well virtually all of the problems of prescriptivism: the codification of the opinion of some eighteenth-century writer, the disregard for well over a millennium of usage, the insistence on the utility of a superfluous distinction, and the oversimplification of the original rule, leading to hypercorrection.

I found a very lovely example of hypercorrection the other day in The New York Times: “The figures are adjusted for one fewer selling day this September than a year ago.” Not even stuffy constructions like “10 items or fewer” make me cringe the way that made me cringe.

No usage or style guide that I know of recommends this usage. In my experience, most guides that enforce the less/fewer distinction grant exceptions when dealing with things like money, distance, or time or when following the word one. And why, exactly, is one an exception? I’m really not sure, but my best guess is that it sounds so strange that even the most strictly logical prescriptivists admit that less must be the correct choice.

Merriam-Webster’s Dictionary of English Usage has an excellent entry on less/fewer, but surprisingly, regarding the “one fewer” issue it says only, “And of course [less] follows one.” Perhaps the use of “one fewer” is so rare that the editors didn’t think to say more about it. Obviously someone should’ve said something to the copy editor at The New York Times.