Arrant Pedantry


To Boldly Split Infinitives

Today is the fiftieth anniversary of the first airing of Star Trek, so I thought it was a good opportunity to talk about split infinitives. (So did Merriam-Webster, which beat me to the punch.) If you’re unfamiliar with split infinitives or have thankfully managed to forget what they are since your high school days, a split infinitive is one in which some sort of modifier comes between the to and the infinitive verb itself—that is, a verb that is not inflected for tense, like be or go—and for many years the construction was considered verboten.

Kirk’s opening monologue on the show famously featured the split infinitive “to boldly go”, and it’s hard to imagine the phrase working so well without it. “To go boldly” and “boldly to go” both sound terribly clunky, partly because they ruin the rhythm of the phrase. “To BOLDly GO” is a nice iambic dimeter, meaning that it has two metrical feet, each consisting of an unstressed syllable followed by a stressed syllable—duh-DUN duh-DUN. “BOLDly to GO” is a trochee followed by an iamb, meaning that we have a stressed syllable, two unstressed syllables, and then another stressed syllable—DUN-duh duh-DUN. “To GO BOLDly” is the reverse, an iamb followed by a trochee, leading to a stress clash in the middle where the two stresses butt up against each other and then ending on a weaker unstressed syllable. Blech.

But the root of the alleged problem with split infinitives concerns not meter but syntax. The question is where it’s syntactically permissible to put a modifier in a to-infinitive phrase. Normally, an adverb would go just in front of the verb it modifies, as in She boldly goes or He will boldly go. Things were a little different when the verb was an infinitive form preceded by to. In this case the adverb often went in front of the to, not in front of the verb itself.

As Merriam-Webster’s post notes, split infinitives date back at least to the fourteenth century, though they were not as common back then and were often used in different ways than they are today. But they mostly fell out of use in the sixteenth century and then roared back to life in the eighteenth century, only to be condemned by usage commentators in the nineteenth and twentieth centuries. (Incidentally, this illustrates a common pattern of prescriptivist complaints: a new usage arises, or perhaps it has existed for literally millennia, it goes unnoticed for decades or even centuries, someone finally notices it and decides they don’t like it (often because they don’t understand it), and suddenly everyone starts decrying this terrible new thing that’s ruining English.)

It’s not particularly clear, though, why people thought that this particular thing was ruining English. The older boldly to go was replaced by the resurgent to boldly go. It’s often claimed that people objected to split infinitives on the basis of analogy with Latin (Merriam-Webster’s post repeats this claim). In Latin, an infinitive is a single word, like ire, and it can’t be split. Ergo, since you can’t split infinitives in Latin, you shouldn’t be able to split them in English either. The problem with this theory is that there’s no evidence to support it. Here’s the earliest recorded criticism of the split infinitive, according to Wikipedia:

The practice of separating the prefix of the infinitive mode from the verb, by the intervention of an adverb, is not unfrequent among uneducated persons. . . . I am not conscious, that any rule has been heretofore given in relation to this point. . . . The practice, however, of not separating the particle from its verb, is so general and uniform among good authors, and the exceptions are so rare, that the rule which I am about to propose will, I believe, prove to be as accurate as most rules, and may be found beneficial to inexperienced writers. It is this :—The particle, TO, which comes before the verb in the infinitive mode, must not be separated from it by the intervention of an adverb or any other word or phrase; but the adverb should immediately precede the particle, or immediately follow the verb.

No mention of Latin or of the supposed unsplittability of infinitives. In fact, the only real argument is that uneducated people split infinitives, while good authors didn’t. Some modern usage commentators have used this purported Latin origin of the rule as the basis of a straw-man argument: Latin couldn’t split infinitives, but English isn’t Latin, so the rule isn’t valid. Unfortunately, Merriam-Webster’s post does the same thing:

The rule against splitting the infinitive comes, as do many of our more irrational rules, from a desire to more rigidly adhere (or, if you prefer, “to adhere more rigidly”) to the structure of Latin. As in Old English, Latin infinitives are written as single words: there are no split infinitives, because a single word is difficult to split. Some linguistic commenters have pointed out that English isn’t splitting its infinitives, since the word to is not actually a part of the infinitive, but merely an appurtenance of it.

The problem with this argument (aside from the fact that the rule wasn’t based on Latin) is that modern English infinitives—not just Old English infinitives—are only one word too and can’t be split either. The infinitive in to boldly go is just go, and go certainly can’t be split. So this line of argument misses the point: the question isn’t whether the infinitive verb, which is a single word, can be split in half, but whether an adverb can be placed between to and the verb. As Merriam-Webster’s Dictionary of English Usage notes, the term split infinitive is a misnomer, since it’s not really the infinitive but the construction containing an infinitive that’s being split.

But in recent years I’ve seen some people take this terminological argument even further, saying that split infinitives don’t even exist because English infinitives can’t be split. I think this is silly. Of course they exist. It used to be that people would say boldly to go; then they started saying to boldly go instead. It doesn’t matter what you call the phenomenon of moving the adverb so that it’s snug up against the verb—it’s still a phenomenon. As Arnold Zwicky likes to say, “Labels are not definitions.” Just because the name doesn’t accurately describe the phenomenon doesn’t mean it doesn’t exist. We could call this phenomenon Steve, and it wouldn’t change what it is.

At this point, the most noteworthy thing about the split infinitive is that there are still some people who think there’s something wrong with it. The original objection was that it was wrong because uneducated people used it and good writers didn’t, but that hasn’t been true in decades. Most usage commentators have long since given up their objections to it, and some even point out that avoiding a split infinitive can cause awkwardness or even ambiguity. In his book The Sense of Style, Steven Pinker gives the example The board voted immediately to approve the casino. Which word does immediately modify—voted or approve?

But this hasn’t stopped The Economist from maintaining its opposition to split infinitives. Its style guide says, “Happy the man who has never been told that it is wrong to split an infinitive: the ban is pointless. Unfortunately, to see it broken is so annoying to so many people that you should observe it.”

I call BS on this. Most usage commentators have moved on, and I suspect that most laypeople either don’t know or don’t care what a split infinitive is. I don’t think I know a single copy editor who’s bothered by them. If you’ve been worrying about splitting infinitives since your high school English teacher beat the fear of them into you, it’s time to let it go. If they’re good enough for Star Trek, they’re good enough for you too.

But just for fun, let’s do a little poll:

Do you find split infinitives annoying?



On a Collision Course with Reality

In a blog post last month, John McIntyre took the editors of the AP Stylebook to task for some of the bad rules they enforce. One of these was the notion that “two objects must be in motion to collide, that a moving object cannot collide with a stationary object.” That is, according to the AP Stylebook, a car cannot collide with a tree, because the tree is not moving, and it can only collide with another car if that other car is moving. McIntyre notes that this rule is not supported by Fowler’s Modern English Usage or even mentioned in Garner’s Modern American Usage.

Merriam-Webster’s Dictionary of English Usage does have an entry for collide and notes that the rule is a tradition (read “invention”) of American newspaper editors. It’s not even clear where the rule came from or why; there’s nothing in the etymology of the word to suggest that only two objects in motion can collide. It comes from the Latin collidere, meaning “to strike together”, from com- “together” + laedere “to strike”.

The rule is not supported by traditional usage either. Speakers and writers of English have been using collide to refer to bodies that are not both in motion for as long as the word has been in use, which is roughly four hundred years. Nor is the rule an attempt to slow language change or hang on to a fading distinction; it’s an attempt to create a distinction and impose it on everyone who uses the language, or at least journalists.

What I found especially baffling was the discussion that took place on Mr. McIntyre’s Facebook page when he shared the link there. Several people chimed in to defend the rule, with one gentleman saying, “There’s an unnecessary ambiguity when ‘collides’ involves <2 moving objects.” Mr. McIntyre responded, “Only if you imagine one.” And this is key: collide is ambiguous only if you have been taught that it is ambiguous—or in other words, only if you’re a certain kind of journalist.

In that Facebook discussion, I wrote,

So the question is, is this actually a problem that needs to be solved? Are readers constantly left scratching their heads because they see “collided with a tree” and wonder how a tree could have been moving? If nobody has ever found such phrasing confusing, then insisting on different phrasing to avoid potential ambiguity is nothing but a waste of time. It’s a way to ensure that editors have work to do, not a way to ensure that editors are adding benefit for the readers.

The discussion thread petered out after that.

I’m generally skeptical of the usefulness of invented distinctions, but this one seems especially useless. When would it be important to distinguish between a crash involving two moving objects and one involving only one moving object? Wouldn’t it be clear from context anyway? And if it’s not clear from context, how on earth would we expect most readers—who have undoubtedly never heard of this journalistic shibboleth—to pick up on it? Should we avoid using words like crash or struck because they’re ambiguous in the same way—because they don’t tell us whether both objects were moving?

It doesn’t matter how rigorously you follow the rule in your own writing or in the writing you edit; if your readers think that collide is synonymous with crash, then they will assume that your variation between collide and crash is merely stylistic. They’ll have no idea that you’re trying to communicate something else. If it’s important, they’ll probably deduce from context whether both objects were moving, regardless of the word you use.

In other words, if an editor makes a distinction and no reader picks up on it, is it still useful?


Overanxious about Ambiguity

As my last post revealed, a lot of people are concerned—or at least pretend to be concerned—about the use of anxious to mean “eager” or “excited”. They claim that since it has multiple meanings, it’s ambiguous, and thus the disparaged “eager” sense should be avoided. But as I said in my last post, it’s not really ambiguous, and anyone who claims otherwise is simply being uncooperative.

Anxious entered the English language in the early to mid-1600s in the sense of “troubled in mind; fearful; brooding”. But within a century, the sense had expanded to mean “earnestly desirous” or “eager”. That’s right—the allegedly new sense of the word was already in use before the United States declared independence.

These two meanings existed side by side until the early 1900s, when usage commentators first decided to be bothered by the “eager” sense. And make no mistake—this was a deliberate decision to be bothered. Merriam-Webster’s Dictionary of English Usage includes this anecdote from Alfred Ayres in 1901:

Only a few days ago, I heard a learned man, an LL.D., a dictionary-maker, an expert in English, say that he was anxious to finish the moving of his belongings from one room to another.

“No, you are not,” said I.

“Yes, I am. How do you know?”

“I know you are not.”

“Why, what do you mean?”

“There is no anxiety about it. You are simply desirous.”

Ayres’s correction has nothing to do with clarity or ambiguity. He obviously knew perfectly well what the man meant but decided to rub his nose in his supposed error instead. One can almost hear his self-satisfied smirk as he lectured a lexicographer—a learned man! a doctor of laws!—on the use of the language he was supposed to catalog.

A few years later, Ambrose Bierce also condemned this usage, saying that anxious should not be used to mean “eager” and that it should not be followed by an infinitive. As MWDEU notes, anxious is typically used to mean “eager” when it is followed by an infinitive. But it also says that it’s “an oversimplification” to say that anxious is simply being used to mean “eager”. It notes that “the word, in fact, fairly often has the notion of anxiety mingled with that of eagerness.” That is, anxious is not being used as a mere synonym of eager—it’s being used to indicate not just eagerness but a sort of nervous excitement or anticipation.

MWDEU also says that this sense is the predominant one in the Merriam-Webster citation files, but a search in COCA doesn’t quite bear this out—only about a third of the tokens are followed by to and are clearly used in the “eager” sense. Google Books Ngrams, however, shows that to is by far the most common word that immediately follows anxious; that is, people are anxious to do something far more often than they’re anxious about something.

This didn’t stop one commenter from claiming that not only is this use of anxious confusing, but she’d literally never encountered it before. It’s hard to take such a claim seriously when this use is not only common but has been common for centuries.

It’s also hard to take seriously the claim that it’s ambiguous when nobody can manage to find an example that’s actually ambiguous. A few commenters offered made-up examples that seemed designed to be maximally ambiguous when presented devoid of context. They also ignored the fact that the “eager” sense is almost always followed by an infinitive. That is, as John McIntyre pointed out, no English speaker would say “I was anxious upon hearing that my mother was coming to stay with us” or “I start a new job next week and I’m really anxious about that” if they meant that they were eager or excited.

Another commenter seemed to argue that the problem was that language was changing in an undesirable way, saying, “It’s clearly understood that language evolves, but some of us might prefer a different or better direction for that evolution. . . . Is evolution the de facto response for any misusage in language?”

But this comment has everything backwards. Evolution isn’t the response to misuse—claims of misuse are (occasionally) the response to evolution. The word anxious changed in a very natural way, losing some of its negative edge and being used in a more neutral or positive way. The same thing happened to the word care, which originally meant “to sorrow or grieve” or “to be troubled, uneasy, or anxious”, according to the Oxford English Dictionary. Yet nobody complains that everyone is misusing the word today.

That’s because nobody ever decided to be bothered by it as they did with anxious. The claims of ambiguity or undesired language change are all post hoc; the real objection to this use of anxious was simply that someone decided on the basis of etymology—and in spite of established usage—that it was wrong, and that personal peeve went viral and became established in the usage literature.

It’s remarkably easy to convince yourself that something is an error. All you have to do is hear someone say that it is, and almost immediately you’ll start noticing the error everywhere and recoiling in horror every time you encounter it. And once the idea that it’s an error has become lodged in your brain, it’s remarkably difficult to dislodge it. We come up with an endless stream of bogus arguments to rationalize our pet peeves.

So if you choose to be bothered by this use of anxious, that’s certainly your right. But don’t pretend that you’re doing the language a service.


This Is Not the Grammatical Promised Land

I recently became aware of a column in the Chicago Daily Herald by the paper’s managing editor, Jim Baumann, who has taken upon himself the name Grammar Moses. In his debut column, he’s quick to point out that he’s not like the real Moses: “My tablets are not carved in stone. Grammar is a fluid thing.”

He goes on to say, “Some of the rules we learned in high school have evolved with us. For instance, I don’t know a lot of people outside of church who still employ ‘thine’ in common parlance.” (He was taught in high school to use thine in common parlance?)

But then he ends—after a rather lengthy windup—with the old shibboleth of using anxious to mean eager. He says that “generally speaking, the word you’re grasping for is ‘eager,'” ending with the admonition, “Write carefully!”

But as Merriam-Webster’s Dictionary of English Usage notes, this rule is an invention in American usage dating to the early 1900s, and anxious had been used to mean eager for 160 years before the rule proscribing this use was invented. They conclude, “Anyone who says that careful writers do not use anxious in its ‘eager’ sense has simply not examined the available evidence.”

Not a good start for a column that aims for a grammatical middle ground.

And Baumann certainly seems to think he’s aiming for the middle ground. In a later column, he says, “Grammarians fall along a spectrum. There are the fundamentalists, who hold their 50-year-old texts as close to their bosoms as one might a Bible. There are the libertines, who believe that if it feels or sounds right, use it. . . . You’ll find me somewhere in the middle.” He again insists that he’s not a grammar fundamentalist before launching into more invented rules: the supposed misuse of like to mean “such as” or “including” and feel to mean “think”.

He says, “If you listen to a car dealer’s pitch that a new SUV has features like anti-lock brakes and a deluxe stereo, do you really know what you’re getting? Nope. Because ‘like’ means similar to, but not the same.” The argument here is simple, straightforward, and completely wrong.

First, it assumes an overly narrow definition of like. Second, it pretends complete ignorance of any meaning outside of that narrow definition. If a car salesperson tells you that a new SUV has features like anti-lock brakes and a deluxe stereo, you know exactly what you’re getting. In technical terms, pretending that you don’t understand someone is called engaging in uncooperative communication. In layman’s terms, it’s called being an ass.

And yet, strangely, Baumann promotes this rule on the basis of clarity. He says that if something is clear to 9 out of 10 readers, then it’s acceptable, but if you can write something that’s clear to all your readers, then that’s even better. While it’s certainly a good idea to make sure your writing is clear to everyone, I’m also fairly certain that no one would be legitimately confused by “features like anti-lock brakes”. Merriam-Webster’s Dictionary of English Usage doesn’t have much to say on the subject, but it lists several examples and says, “In none of the examples that follow can you detect any ambiguity of meaning.” The supposed lack of clarity simply isn’t there.

Baumann ends by saying, “The lesson is: Think about whom you’re talking to and learn to appreciate his or her or their sensitivities. Then you will achieve clarity.” The problem is that we don’t really know who our readers are and what their sensitivities are. Instead we simply internalize new rules that we learn, and then we project them onto a sort of perversely idealized reader, one who is not merely bothered by such alleged misuses but is impossibly confused by them. How do we know that they’re really confused—or even just irritated—by like to mean “such as” or “including”? We don’t. We just assume that they’re out there and that it’s our job to protect them.

My advice is to try to be as informed as possible about the rules. Be curious, and be willing to question not just others’ claims about the language but also your own assumptions. Read a lot, and pay attention to how good writing works. Get a good usage dictionary and use it. And don’t follow Grammar Moses unless you like wandering in the grammatical wilderness.


More at Visual Thesaurus

In case you haven’t been following me on Twitter or elsewhere, I’m the newest regular contributor to Visual Thesaurus. You can see my contributor page here. My latest article, “Orwell and Singular ‘They'”, grew out of an experience I had last summer as I was writing a feature article on singular they for Copyediting. I cited George Orwell in a list of well-regarded authors who reportedly used singular they, and my copyeditor queried me on it. She wanted proof.

I did some research and made a surprising discovery: the alleged Orwell quotation in Merriam-Webster’s Dictionary of English Usage wasn’t really from Orwell. But if you want to know the rest, you’ll have to read the article. (It’s for subscribers only, but a subscription is only $19.95 per year.)

But if you’re not the subscribing type, don’t worry: I’ll have a new post up today or tomorrow on the oft-maligned construction reason why.


The Enormity of a Usage Problem

Recently on Twitter, Mark Allen wrote, “Despite once being synonyms, ‘enormity’ and ‘enormousness’ are different. Try to keep ‘enormity’ for something evil or outrageous.” I’ll admit right off that this usage problem interests me because I didn’t learn about the distinction until a few years ago. To me, they’re completely synonymous, and the idea of using enormity to mean “an outrageous, improper, vicious, or immoral act” and not “the quality or state of being huge”, as Merriam-Webster defines it, seems almost quaint.

Of course, such usage advice presupposes that people are using the two words synonymously; if they weren’t, there’d be no reason to tell them to keep the words separate, so the assertion that they’re different is really an exhortation to make them different. Given that, I had to wonder how different they really are. I turned to Mark Davies’ Corpus of Contemporary American English (COCA) to get an idea of how often enormity is used in the sense of great size rather than outrageousness or immorality. I looked at the first hundred results from the keyword-in-context option, which randomly samples the corpus, and tried to determine which of the four Merriam-Webster definitions was being used. For reference, here are the four definitions:

1 : an outrageous, improper, vicious, or immoral act <the enormities of state power — Susan Sontag> <other enormities too juvenile to mention — Richard Freedman>
2 : the quality or state of being immoderate, monstrous, or outrageous; especially : great wickedness <the enormity of the crimes committed during the Third Reich — G. A. Craig>
3 : the quality or state of being huge : immensity <the inconceivable enormity of the universe>
4 : a quality of momentous importance or impact <the enormity of the decision>

In some cases it was a tough call; for instance, when someone writes about the enormity of poverty in India, enormity has a negative connotation, but it doesn’t seem right to substitute a word like monstrousness or wickedness. It seems that the author simply means the size of the problem. I tried to use my best judgement based on the context the corpus provides, but in some cases I weaseled out by assigning a particular use to two definitions. Here’s my count:

1: 1
2: 19
2/3: 3
3: 67
3/4: 1
4: 9

By far the most common use is in the sense of “enormousness”; the supposedly correct senses of great wickedness (definitions 1 and 2) are used just under a quarter of the time. So why did Mr. Allen say that enormity and enormousness were once synonyms? Even the Oxford English Dictionary marks the “enormousness” sense as obsolete and says, “Recent examples might perh. be found, but the use is now regarded as incorrect.” Perhaps? It’s clear from the evidence that it’s still quite common—about three times as common as the prescribed “monstrous wickedness” sense.

It’s true that the sense of immoderateness or wickedness came along before the sense of great size. The first uses as recorded in the OED are in the sense of “a breach of law or morality” (1477), “deviation from moral or legal rectitude” (1480), “something that is abnormal” (a1513), and “divergence from a normal standard or type” (a1538). The sense of “excess in magnitude”—the one that the OED marks as obsolete and incorrect—didn’t come along until 1792. In all these senses the etymology is clear: the word comes from enorm, meaning “out of the norm”.

As is to be expected, Merriam-Webster’s Dictionary of English Usage has an excellent entry on the topic. It notes that many of the uses of enormity considered objectionable carry shades of meaning or connotations not shown by enormousness:

Quite often enormity will be used to suggest a size that is beyond normal bounds, a size that is unexpectedly great. Hence the notion of monstrousness may creep in, but without the notion of wickedness. . . .

In many instances the notion of great size is colored by aspects of the first sense of enormity as defined in Webster’s Second. One common figurative use blends together notions of immoderateness, excess, and monstrousness to suggest a size that is daunting or overwhelming.

Indeed, it’s the blending of senses that made it hard to categorize some of the uses that I came across in COCA. Enormousness does not seem to be a fitting replacement for those blended or intermediate senses, and, as MWDEU notes, it’s never been a popular word anyway. Interestingly, MWDEU also notes that “the reasons for stigmatizing the size sense of enormity are not known.” Perhaps it became rare in the 1800s, when the OED marked it obsolete, and the rule was created before the sense enjoyed a resurgence in the twentieth century. Whatever the reason, I don’t think it makes much sense to condemn the more widely used sense of a word just because it’s newer or was rare at some point in the past. MWDEU sensibly concludes, “We have seen that there is no clear basis for the ‘rule’ at all. We suggest that you follow the writers rather than the critics: writers use enormity with a richness and subtlety that the critics have failed to take account of. The stigmatized sense is entirely standard and has been for more than a century and a half.”


Funner Grammar

As I said in the addendum to my last post, maybe I’m not so ready to abandon the technical definition of grammar. In a recent post on Copyediting, Andrea Altenburg criticized the word funner in an ad for Chuck E. Cheese as “improper grammar”, and my first reaction was “That’s not grammar!”

That’s not entirely accurate, of course, as Matt Gordon pointed out to me on Twitter. The objection to funner was originally grammatical, and the Copyediting post does make an appeal to grammar. The argument goes like this: fun is properly a noun, not an adjective, and as a noun, it can’t take comparative or superlative degrees—no funner or funnest.

This seems like a fairly reasonable argument—if a word isn’t an adjective, it can’t inflect like one—but it isn’t the real argument. First of all, it’s not really true that fun was originally a noun. As Ben Zimmer explains in “Dear Apple: Stop the Funnification”, the noun fun arose in the late seventeenth century and was labeled by Samuel Johnson in the mid-1700s as “a low cant word” of the criminal underworld. But the earliest citation for fun is as a verb, fourteen years earlier.

As Merriam-Webster’s Dictionary of English Usage notes, “A couple [of usage commentators] who dislike it themselves still note how nouns have a way of turning into adjectives in English.” Indeed, this sort of functional shift—also called zero derivation or conversion by linguists because it changes the part of speech without the use of prefixation or suffixation—is quite common in English. English lacks case endings and has little in the way of verbal endings, so it’s quite easy to change a word from one part of speech to another. The transformation of fun from a verb to a noun to an inflected adjective came slowly but surely.

As this great article explains, shifts in function or meaning usually happen in small steps. Once fun was established as a noun, you could say things like We had fun. This is unambiguously a noun—fun is the object of the verb have. But then you get constructions like The party was fun. This is structurally ambiguous—both nouns and adjectives can go in the slot after was.

This paves the way to analyze fun as an adjective. It then moved into attributive use, directly modifying a following noun, as in fun fair. Nouns can do this too, so once again the structure was ambiguous, but it was evidence that fun was moving further in the direction of becoming an adjective. In the twentieth century it started to be used in more unambiguously adjectival roles. MWDEU says that this accelerated after World War II, and Mark Davies’ Corpus of Historical American English (COHA) shows that it especially picked up in the last twenty years.

Once fun was firmly established as an adjective, the inflected forms funner and funnest followed naturally. There are only a handful of hits for either in COCA, which attests to the fact that they’re still fairly new and relatively colloquial. But let’s get back to Altenburg’s post.

She says that fun is defined as a noun and thus can’t be inflected for comparative or superlative forms, but then she admits that dictionaries also define fun as an adjective with the forms funner and funnest. But she waves away these definitions by saying, “However, dictionaries are starting to include more definitions for slang that are still not words to the true copyeditor.”

What this means is that she really isn’t objecting to funner on grammatical grounds (at least not in the technical sense); her argument simply reduces to an assertion that funner isn’t a word. But as Stan Carey so excellently argued, “‘Not a word’ is not an argument”. And even the grammatical objections are eroding; many people now simply assert that funner is wrong, even if they accept fun as an adjective, as Grammar Girl says here:

Yet, even people who accept that “fun” is an adjective are unlikely to embrace “funner” and “funnest.” It seems as if language mavens haven’t truly gotten over their irritation that “fun” has become an adjective, and they’ve decided to dig in their heels against “funner” and “funnest.”

It brings to mind the objection against sentential hopefully. Even though there’s nothing wrong with sentence adverbs or with hopefully per se, it was a new usage that drew the ire of the mavens. The grammatical argument against it was essentially a post hoc justification for a ban on a word they didn’t like.

The same thing has happened with funner. It’s perfectly grammatical in the sense that it’s a well-formed, meaningful word, but it’s fairly new and still highly informal and colloquial. (For the record, it’s not slang, either, but that’s a post for another day.) If you don’t want to use it, that’s your right, but stop saying that it’s not a word.


Relative What

A few months ago Braden asked in a comment about the history of what as a relative pronoun. (For my previous posts on relative pronouns, see here.) The history of relative pronouns in English is rather complicated, and the system as a whole is still in flux, partly because modern English essentially has two overlapping systems of relativization.

In Old English, there were a few different ways to create a relative pronoun, as this site explains. One way was to use the indeclinable particle þe, another was to use a form of the demonstrative pronoun (roughly equivalent to modern English that/those), and another was to use a demonstrative or personal pronoun followed by þe. Our modern relative that grew out of the use of demonstrative pronouns, though unlike the Old English demonstratives, that does not decline for gender, number, and case.

In the late Old English and Middle English periods, writers and speakers began to use interrogative pronouns as relative pronouns by analogy with French and Latin. This usage first appeared in texts that were translations from Latin around 1000 AD, but within a couple of centuries it had apparently been naturalized. Other interrogatives were pressed into service as relatives during this time, including who, which, where, when, why, and how. All of these are still in common use in Standard English except for what.

It’s important to note that what is still used as a nominal relative, which means that it does not modify another noun phrase but stands in for a noun phrase and a relative simultaneously, as in We fear what we don’t understand. This could be rephrased as We fear that which we don’t understand or We fear the things that we don’t understand, revealing the nominal and the relative.

But while all the other interrogatives have continued as relatives in Standard English, what as a simple relative pronoun is nonstandard today. Simple relative what is found in the works of Shakespeare and the King James Bible, but at some point in the last three or four centuries it fell out of use in the standard dialect. Unfortunately, I’m not really sure when this happened; the Oxford English Dictionary has citations up through 1740 and then one from 1920 that appears to be dialogue from a novel. Merriam-Webster’s Dictionary of English Usage says that in the US, it’s mainly found in rural areas in the Midland and South. As I told Braden in a response to his comment, I’ve heard it used myself. A couple of months ago I heard a man in church pray for “our leaders what guides and directs us”—not just a beautiful example of relative what, but also an interesting example of nonstandard verb agreement.

As for why simple relative what died out in Standard English, I really have no idea. Jonathan Hope noted that it’s rather unusual for Standard English to allow other interrogatives as relatives but not this one.1 In some ways, relative what would make more sense than relative which, since what is historically part of the same paradigm as who; what comes from the neuter form of the interrogative or indefinite pronoun in Old English, while who comes from the combined masculine/feminine form, as shown here. And as I said in this post, whose was originally the genitive form for both who and what, so allowing simple relative what would make for a rather tidy paradigm.

Perhaps that’s the problem. Hope and others have argued that standardized languages—or perhaps speakers of standardized languages—tend to resist tidy paradigms. Irregularities creep in and are preserved, and they can be surprisingly resistant to change. Maybe someone reading this has a fuller explanation of just how this particular little wrinkle came to be.

Notes

1. Jonathan Hope, “Rats, Bats, Sparrows and Dogs: Biology, Linguistics and the Nature of Standard English,” in The Development of Standard English, 1300–1800, ed. Laura Wright (Cambridge: Cambridge University Press, 2000).


Whose Pronoun Is That?

In my last post I touched on the fact that whose as a relative possessive adjective referring to inanimate objects feels a little strange to some people. In a submission for the topic suggestion contest, Jake asked about the use of that with animate referents (“The woman that was in the car”) and then said, “On the flip side, consider ‘the couch, whose cushion is blue.’ ‘Who’ is usually used for animate subjects. Why don’t we have the word ‘whichs’ for inanimate ones?”

Merriam-Webster’s Dictionary of English Usage (one of my favorite books on language; if you don’t already own it, you should buy it now. Seriously.) says that inanimate whose has been in use from the fourteenth century to the present but that it wasn’t until the eighteenth century that grammarians like Bishop Lowth (surprise, surprise) started to cast aspersions on its use.

MWDEU concludes that “the notion that whose may not properly be used of anything except persons is a superstition; it has been used by innumerable standard authors from Wycliffe to Updike, and is entirely standard as an alternative to of which the in all varieties of discourse.” Bryan A. Garner, in his Garner’s Modern American Usage, says somewhat more equivocally, “Whose may usefully refer to things ⟨an idea whose time has come⟩. This use of whose, formerly decried by some 19th-century grammarians and their predecessors, is often an inescapable way of avoiding clumsiness.” He ranks it a 5—“universally adopted except for a few eccentrics”—but his tone leaves one feeling as if he thinks it the lesser of two evils.

But how did we end up in this situation in the first place? Why don’t we have a whiches or thats or something equivalent? MWDEU notes that “English is not blessed with a genitive form for that or which”, but to understand why, you have to go back to Old English and the loss of the case system in Early Middle English.

First of all, Old English did not use interrogative pronouns (who, which, or what) as relative pronouns. It either used demonstrative pronouns—whence our modern that is descended—or the invariable complementizer þe, which we’ll ignore for now. The demonstrative pronouns declined for gender, number, and case, just like the demonstrative and relative pronouns of modern German. The important point is that in Old English, the relative pronouns looked like this:

Case          Masculine   Neuter     Feminine   Plural
Nominative    se          þæt        sēo        þā
Accusative    þone        þæt        þā         þā
Genitive      þæs         þæs        þǣre       þāra, þǣra
Dative        þǣm         þǣm        þǣre       þǣm, þām
Instrumental  þȳ, þon     þȳ, þon    –          –

(Taken from Wikipedia. The þ is a thorn, which represents a “th” sound.)

As the Old English case system disappeared, this all reduced to the familiar that, which you can see comes from the neuter nominative/accusative form. The genitive, or possessive, form was lost. And in Middle English, speakers began to use interrogative pronouns as relatives, probably under the influence of French. Here’s what the Old English interrogative pronouns looked like:

Case          Masculine/Feminine   Neuter   Plural
Nominative    hwā                  hwæt     hwā/hwæt
Accusative    hwone                hwæt     hwone/hwæt
Genitive      hwæs                 hwæs     hwæs
Dative        hwǣm                 hwǣm     hwǣm
Instrumental  hwȳ                  hwȳ      hwǣm

(Wikipedia didn’t have an article or section on Old English interrogative pronouns, so I borrowed the forms from Wikibooks.)

On the masculine/feminine side, we get the ancestors of our modern who/whom/whose (hwā/hwǣm/hwæs), and on the neuter side, we get the ancestor of what (hwæt). Notice that the genitive forms for the two are the same—that is, although we think of whose being the possessive form of who, it’s historically also the possessive form of what.

But we don’t use what as a relative pronoun (well, some dialects do, but Standard English doesn’t); we use which instead. Which also had a full paradigm of case endings, just like who/what and that. But rather than bore you with more tables full of weird-looking characters, I’ll cut to the chase: which originally had a genitive form, but it too was lost when the Old English case system disappeared.

So of all the demonstrative and interrogative pronouns in English, only one survived with its own genitive form, who. (I don’t know why who hung on to its case forms while the others lost theirs; maybe that’s a topic for another day.) Speakers quite naturally used whose to fill that gap—and keep in mind that it was originally the genitive form of both the animate and inanimate forms of the interrogative pronoun, so English speakers originally didn’t have any qualms about employing it with inanimate relative pronouns, either.

But what does that mean for us today? Well, on the one hand, you can argue that whose as an inanimate relative possessive adjective has a long, well-established history. It’s been used by the best writers for centuries, so there’s no question that it’s standard. But on the other hand, this ignores the fact that some people think there’s something not quite right about it. After all, we don’t use whose as a possessive form of which or that in their interrogative or demonstrative functions. And although it has a long pedigree, another inanimate possessive with a long pedigree fell out of use and was replaced.

His was originally the possessive form of both he and it, but neuter his started to fall out of use and be replaced by a new form its in the sixteenth century. After English lost grammatical gender, people began to use he and she only for people and other animate things and it only for inanimate things. They started to feel a little uncomfortable using the original possessive form of it, his, with inanimate things, so they fashioned a new possessive, its, to replace it.

In other words, there’s precedent for disfavoring inanimate whose and using another word or construction instead. Unfortunately, now thats or whiches will never get off the ground, because they’ll be so heavily stigmatized as nonstandard forms. There are two different impulses fighting one another here: the impulse to have a full and symmetrical paradigm and the impulse to avoid using animate pronouns for inanimate things. Only time will tell which one wins out. For now, I’d say it’s good to remember that inanimate whose is frequently used by good writers and that there’s nothing wrong with it per se. In your own writing, just trust your ear.


10:30 o’clock

My sister-in-law will soon graduate from high school, and we recently got her graduation announcement in the mail. It was pretty standard stuff—a script font in metallic ink on nice paper—but one small detail caught my eye. It says the commencement exercises will take place at “ten-thirty o’clock.” As far as I can remember, I’ve never before heard a rule against using “o’clock” with times other than the hour, but it struck me as wrong.

I checked Merriam-Webster first, but it was no help; all it says is “according to the clock,” though its example sentence is “the time is three o’clock.” I then pulled out my copy of Merriam-Webster’s Dictionary of English Usage, but it didn’t even have an entry for o’clock or clock. So then, because my wife was on the computer and I couldn’t access the OED online, I pulled out my compact OED and magnifying glass to see if it had anything to say.

Once I had flipped to the entry and scanned through the minuscule type, I found this one line: “The hour of the day is expressed by a cardinal numeral, followed by a phrase which was originally of the clock, now only retained in formal phraseology; shortened subsequently to . . . o’clock.” The citations begin with Chaucer and continue up to modern English.

And then, out of curiosity, I checked the Corpus of Contemporary American English, but I couldn’t find any examples of x:30 o’clock. Google, however, turned up plenty of examples, including a thread on Amazon’s Askville asking why you can’t say “11:30 o’clock.” The best explanation there seems to be that since the clock hands aren’t pointing at a specific hour, it can’t be anything-o’clock.

This answer doesn’t seem quite satisfying to me—it doesn’t explain why the hour hand has to be pointing directly at a number or why the minute hand doesn’t matter. But then I remembered that clock originally meant “bell” and that early clocks chimed on the hour (well, I suppose some modern clocks do too, but you see where I’m going). Early mechanical clocks were rather large, and most people measured time not by checking the clock face to see where the hands were, but by counting the number of chimes on the hour. So I would assume that this is why it sounds strange to use “o’clock” with fractions of hours. Thoughts, anyone?
