Arrant Pedantry


Over Has Always Meant More Than. Get Over It.

Last month, at the yearly conference of the American Copy Editors Society, the editors of the AP Stylebook announced that over in the sense of more than was now acceptable. For decades, newspaper copy editors had been changing constructions like over three hundred people to more than three hundred people; now, with a word from AP’s top editors, that rule was being abandoned.

According to Merriam-Webster editor Peter Sokolowski, who was in attendance, the announcement was met with gasps. Editors quickly took to Twitter and to blogs to express their approval or dismay. Some saw it as part of the dumbing-down of the language or as a tacit admission that newspapers no longer have the resources to maintain their standards. Others saw it as the banishment of a baseless superstition that has wasted copy editors’ time without improving the text.

The argument had been that over must refer to spatial relationships and that numerical relationships must use more than. But nobody objects to other figurative uses of over, such as over the weekend or get over it or in over your head or what’s come over you? The rule forbidding the use of over to mean more than was first codified in the 1800s, but over can be found in this sense going back a thousand years or more, in some of the earliest documents written in English.

Not only that, but parallel uses can be found in other Germanic languages, including German, Dutch, and Swedish. (Despite all its borrowings from French, Latin, and elsewhere, English is considered a Germanic language.) There’s nothing wrong with the German Kinder über 14 Jahre (“children over 14 years”, to borrow an example from the Collins German-English Dictionary) or the Swedish Över femhundra kom (“over five hundred came”). This means that this use of over actually predates English itself and must have been inherited from Proto-Germanic, the common ancestor of all the Germanic languages, some two thousand years ago.

Mignon Fogarty, aka Grammar Girl, wrote that “no rationale exists for the ‘over can’t mean more than’ rule.” And in a post on the Merriam-Webster Unabridged blog, Sokolowski gave his own debunking, concluding that “we just don’t need artificial rules that do not promote the goal of clarity.” But none of this was good enough for some people. AP’s announcement caused a rift in the editing staff at Mashable, who debated the rule on the lifestyle blog.

Alex Hazlett argued that the rule “was an arbitrary style decision that had nothing to do with grammar, defensible only by that rationale of last resort: tradition.” Megan Hess, though, took an emotional and hyperbolic tack, claiming that following rules like this prevents the world from slipping into “a Lord of the Flies-esque dystopia.” From there her argument quickly becomes circular: “The distinction is one that distinguishes clean, precise language and attention to detail — and serves as a hallmark of a proper journalism training.” In other words, editors should follow the rule because they’ve been trained to follow the rule, and the rule is simply a mark of clean copy. And how do you know the copy is clean? Because it follows rules like this. As Sokolowski says, this is nothing more than a shibboleth—the distinction serves no purpose other than to distinguish those in the know from everyone else.

It’s also a perfect example of a mumpsimus. The story goes that an illiterate priest in the Middle Ages had learned to recite the Latin Eucharist wrong: instead of sumpsimus (Latin for “we have taken”), he said mumpsimus, which is not a Latin word at all. When someone finally told him that he’d been saying it wrong and that it should be sumpsimus, he responded that he would not trade his old mumpsimus for this person’s new sumpsimus. He didn’t just refuse to change—he refused to recognize that he was wrong and had always been wrong.

But so what if everyone’s been using over this way for longer than the English language has existed? Just because everyone does it doesn’t mean it’s right, right? Well, technically, yes, but let’s flip the question around: what makes it wrong to use over to mean more than? The fact that the over-haters have had such an emotional reaction is telling. It’s surprisingly easy to talk yourself into hating a particular word or phrase and to start judging everyone who allegedly misuses it. And once you’ve developed a visceral reaction to a perceived misuse, it’s hard to be persuaded that your feelings aren’t justified.

We editors take a lot of pride in our attention to language—which usually means our attention to the usage and grammar rules that we’ve been taught—so it can seem like a personal affront to be told that we were wrong and have always been wrong. Not only that, but it can shake our faith in other rules. If we were wrong about this, what else might we have been wrong about? But perhaps rather than priding ourselves on following the rules, we should pride ourselves on mastering them, which means learning how to tell the good rules from the bad.

Learning that you were wrong simply means that now you’re right, and that can only be a good thing.

Update: Parallel uses can also be found in cognates of over in other Indo-European languages. For instance, the Latin super could mean both “above” and “more than”, and so could the Ancient Greek ὑπέρ, or hyper. It’s possible that the development of sense from “above” to “more than” happened independently in Latin, Ancient Greek, and Proto-Germanic, but at the very least we can say that this sort of metaphorical extension of sense is very common and very old. There are no logical grounds for objecting to it.

I also created this helpful (and slightly snarky) timeline of the usage of over and its etymons in English and its ancestor languages.



Distinctions, Useful and Otherwise

In a recent New York Times video interview, Steven Pinker touched on the topic of language change, saying, “I think that we do sometimes lose distinctions that it would be nice to preserve—disinterested to mean ‘impartial’ as opposed to ‘bored’, for example.”

He goes on to make the point that language does not degenerate, because it constantly replenishes itself—a point which I agree with—but that line caught the attention of Merriam-Webster’s Peter Sokolowski, who said, “It’s a useful distinction, but why pick a problematic example?” I responded, “I find it ironic that such a useful distinction is so rarely used. And its instability undermines the claims of usefulness.”

What Mr. Sokolowski was alluding to was the fact that the history of disinterested is more complicated than the simple laments over its loss would indicate. If you’re unfamiliar with the usage controversy, it goes something like this: disinterested originally meant ‘impartial’ or ‘unbiased’, and uninterested originally meant ‘bored’, but now people have used disinterested to mean ‘bored’ so much that you can’t use it anymore, because too many people will misunderstand you. It’s an appealing story that encapsulates prescriptivists’ struggle to maintain important aspects of the language in the face of encroaching decay. Too bad it’s not really true.

I won’t dive too deeply into the history of the two words—the always-excellent Merriam-Webster’s Dictionary of English Usage spends over two pages on the topic, revealing a surprisingly complex history—but suffice it to say that disinterested is, as Peter Sokolowski mildly put it, “a problematic example”. The first definition the OED gives for disinterested is “Without interest or concern; not interested, unconcerned. (Often regarded as a loose use.)” The first citation dates to about 1631. The second definition (the correct one, according to traditionalists) is “Not influenced by interest; impartial, unbiased, unprejudiced; now always, Unbiased by personal interest; free from self-seeking. (Of persons, or their dispositions, actions, etc.)” Its first citation, however, is from 1659. And uninterested was originally used in the “impartial” or “unbiased” senses now attributed to disinterested, though those uses are obsolete.

It’s clear from the OED’s citations that both meanings have existed side by side from the 1600s. So there’s not so much a present confusion of the two words as a continuing, three-and-a-half-century-long confusion. And for good reason, too. The positive form interested is the opposite of both disinterested and uninterested, and yet nobody complains that we can’t use it because readers won’t be sure whether we mean “having the attention engaged” or “being affected or involved”, to borrow the Merriam-Webster definitions. If we can use interested to mean two different things, why do we need two different words to refer to the opposite of those things?

And as my advisor, Don Chapman, has written, “When gauging the usefulness of a distinction, we need to keep track of two questions: 1) is it really a distinction, or how easy is the distinction to grasp; 2) is it actually useful, or how often do speakers really use the distinction.”[1] Chapman adds that “often the claim that a distinction is useful seems to rest on little more than this: if the prescriber can state a clear distinction, the distinction is considered to be desirable ipso facto.” He then asks, “But how easy is the distinction to maintain in actual usage?” (151).

From the OED citations, it’s clear that speakers have never been able to fully distinguish between the two words. Chapman also pointed out to me that the two prefixes in question, dis- and un-, do not clearly indicate one meaning or the other. The meanings of the two words come from different meanings of the root interested, not from the prefixes, so the assignment of meaning to form is arbitrary and must simply be memorized, which makes the distinction difficult for many people to learn and maintain. And even those who do learn the distinction do not employ it very frequently. I know this is anecdotal, but it seems to me that disinterested is far more often mentioned than it is used. I can’t remember the last time I spotted a genuine use of disinterested in the wild.

I think it’s time we dispel the myth that disinterested and uninterested epitomize a lost battle to preserve useful distinctions. The current controversy over disinterested is not indicative of present-day laxness or confusion, because there was never a time when people managed to fully distinguish between the two words. If anything, disinterested epitomizes the prescriptivist tendency to elegize the usage wars. The typical discussion of disinterested is light on historical facts and heavy on wistful sighs over how we can no longer use a word that was perhaps never as useful as we would like to think it was.

Notes

1. Don Chapman, “Bad Ideas in the History of English Usage,” in Studies in the History of the English Language 5, Variation and Change in English Grammar and Lexicon: Contemporary Approaches, ed. Robert A. Cloutier, Anne Marie Hamilton-Brehm, and William A. Kretzschmar Jr. (New York: Walter de Gruyter, 2010), 151.