Arrant Pedantry


To Boldly Split Infinitives

Today is the fiftieth anniversary of the first airing of Star Trek, so I thought it was a good opportunity to talk about split infinitives. (So did Merriam-Webster, which beat me to the punch.) If you’re unfamiliar with split infinitives or have thankfully managed to forget what they are since your high school days, a split infinitive is when you put some sort of modifier between the to and the infinitive verb itself—that is, a verb not inflected for tense, like be or go—and for many years it was considered verboten.

Kirk’s opening monologue on the show famously featured the split infinitive “to boldly go”, and it’s hard to imagine the phrase working so well without it. “To go boldly” and “boldly to go” both sound terribly clunky, partly because they ruin the rhythm of the phrase. “To BOLDly GO” is a nice iambic dimeter, meaning that it has two metrical feet, each consisting of an unstressed syllable followed by a stressed syllable—duh-DUN duh-DUN. “BOLDly to GO” is a trochee followed by an iamb, meaning that we have a stressed syllable, two unstressed syllables, and then another stressed syllable—DUN-duh duh-DUN. “To GO BOLDly” is the reverse, an iamb followed by a trochee, leading to a stress clash in the middle where the two stresses butt up against each other and then ending on a weaker unstressed syllable. Blech.

But the root of the alleged problem with split infinitives concerns not meter but syntax. The question is where it’s syntactically permissible to put a modifier in a to-infinitive phrase. Normally, an adverb would go just in front of the verb it modifies, as in She boldly goes or He will boldly go. Things were a little different when the verb was an infinitive form preceded by to. In this case the adverb often went in front of the to, not in front of the verb itself.

As Merriam-Webster’s post notes, split infinitives date back at least to the fourteenth century, though they were not as common back then and were often used in different ways than they are today. But they mostly fell out of use in the sixteenth century and then roared back to life in the eighteenth century, only to be condemned by usage commentators in the nineteenth and twentieth centuries. (Incidentally, this illustrates a common pattern of prescriptivist complaints: a new usage arises, or perhaps it has existed for literally millennia, it goes unnoticed for decades or even centuries, someone finally notices it and decides they don’t like it (often because they don’t understand it), and suddenly everyone starts decrying this terrible new thing that’s ruining English.)

It’s not particularly clear, though, why people thought that this particular thing was ruining English. The older boldly to go was replaced by the resurgent to boldly go. It’s often claimed that people objected to split infinitives on the basis of analogy with Latin (Merriam-Webster’s post repeats this claim). In Latin, an infinitive is a single word, like ire, and it can’t be split. Ergo, since you can’t split infinitives in Latin, you shouldn’t be able to split them in English either. The problem with this theory is that there’s no evidence to support it. Here’s the earliest recorded criticism of the split infinitive, according to Wikipedia:

The practice of separating the prefix of the infinitive mode from the verb, by the intervention of an adverb, is not unfrequent among uneducated persons. . . . I am not conscious, that any rule has been heretofore given in relation to this point. . . . The practice, however, of not separating the particle from its verb, is so general and uniform among good authors, and the exceptions are so rare, that the rule which I am about to propose will, I believe, prove to be as accurate as most rules, and may be found beneficial to inexperienced writers. It is this :—The particle, TO, which comes before the verb in the infinitive mode, must not be separated from it by the intervention of an adverb or any other word or phrase; but the adverb should immediately precede the particle, or immediately follow the verb.

No mention of Latin or of the supposed unsplittability of infinitives. In fact, the only real argument is that uneducated people split infinitives, while good authors didn’t. Some modern usage commentators have used this purported Latin origin of the rule as the basis of a straw-man argument: Latin couldn’t split infinitives, but English isn’t Latin, so the rule isn’t valid. Unfortunately, Merriam-Webster’s post does the same thing:

The rule against splitting the infinitive comes, as do many of our more irrational rules, from a desire to more rigidly adhere (or, if you prefer, “to adhere more rigidly”) to the structure of Latin. As in Old English, Latin infinitives are written as single words: there are no split infinitives, because a single word is difficult to split. Some linguistic commenters have pointed out that English isn’t splitting its infinitives, since the word to is not actually a part of the infinitive, but merely an appurtenance of it.

The problem with this argument (aside from the fact that the rule wasn’t based on Latin) is that modern English infinitives—not just Old English infinitives—are only one word too and can’t be split either. The infinitive in to boldly go is just go, and go certainly can’t be split. So this line of argument misses the point: the question isn’t whether the infinitive verb, which is a single word, can be split in half, but whether an adverb can be placed between to and the verb. As Merriam-Webster’s Dictionary of English Usage notes, the term split infinitive is a misnomer, since it’s not really the infinitive but the construction containing an infinitive that’s being split.

But in recent years I’ve seen some people take this terminological argument even further, saying that split infinitives don’t even exist because English infinitives can’t be split. I think this is silly. Of course they exist. It used to be that people would say boldly to go; then they started saying to boldly go instead. It doesn’t matter what you call the phenomenon of moving the adverb so that it’s snug up against the verb—it’s still a phenomenon. As Arnold Zwicky likes to say, “Labels are not definitions.” Just because the name doesn’t accurately describe the phenomenon doesn’t mean it doesn’t exist. We could call this phenomenon Steve, and it wouldn’t change what it is.

At this point, the most noteworthy thing about the split infinitive is that there are still some people who think there’s something wrong with it. The original objection was that it was wrong because uneducated people used it and good writers didn’t, but that hasn’t been true in decades. Most usage commentators have long since given up their objections to it, and some even point out that avoiding a split infinitive can cause awkwardness or even ambiguity. In his book The Sense of Style, Steven Pinker gives the example The board voted immediately to approve the casino. Which word does immediately modify—voted or approve?

But this hasn’t stopped The Economist from maintaining its opposition to split infinitives. Its style guide says, “Happy the man who has never been told that it is wrong to split an infinitive: the ban is pointless. Unfortunately, to see it broken is so annoying to so many people that you should observe it.”

I call BS on this. Most usage commentators have moved on, and I suspect that most laypeople either don’t know or don’t care what a split infinitive is. I don’t think I know a single copy editor who’s bothered by them. If you’ve been worrying about splitting infinitives since your high school English teacher beat the fear of them into you, it’s time to let it go. If they’re good enough for Star Trek, they’re good enough for you too.

But just for fun, let’s do a little poll:

Do you find split infinitives annoying?



Book Review: The Sense of Style

Full disclosure: I received an advance review copy of this book from the publisher, Viking.

I was intrigued when I first heard that Steven Pinker, the linguist and cognitive scientist, was writing a book on style. I’ve really enjoyed some of his other books, such as The Stuff of Thought, but wasn’t this the guy who had dedicated an entire chapter of The Language Instinct to bashing prescriptivists, calling them a bunch of “kibbitzers and nudniks” who peddle “bits of folklore that originated for screwball reasons several hundred years ago”? But even though it can be satisfying to bash nonsensical grammar rules, I’ve also long felt that linguists could offer some valuable insight into the field of writing. I was hopeful that Pinker would have some interesting things to say about writing, and he didn’t disappoint me.

I should be clear, though, that this is not your ordinary book on writing advice. It isn’t a quick reference book full of rules and examples of what to do and what not to do (for which I recommend Joseph Williams’s excellent Style). It’s something deeper and more substantial than that—it’s a thorough examination of what makes good writing good and why writing well is so hard.

Pinker starts by reverse-engineering some of his favorite passages of prose, taking them apart piece by piece to see what makes them tick. Though it’s an interesting exercise, it gets a little tedious at times. However, his point is valuable: good writing can only come from good reading, which means not only reading a lot but engaging with what you read.

He then explores classic style, which he calls “an antidote for academese, bureaucratese, corporatese, legalese, officialese, and other kinds of stuffy prose.” Classic style starts with the assumption that the writer has seen something that they want to show to the reader, so the writer engages in a conversation with the reader to help direct their gaze. It’s not suitable for every kind of writing—for example, a user manual needs just a straightforward list of instructions, not a dialogue—but it works well for academic writing and other kinds of writing in which an author explains a new idea to the reader.

Then Pinker tackles perhaps the most difficult challenge in writing—overcoming the curse of knowledge. The cause of much bad writing, he says, is that the author is so close to the subject that they don’t know how to explain it to someone who doesn’t already know what the author knows. They forget how they came by their knowledge and thus unthinkingly skip key pieces of explanation or use jargon that is obscure or opaque to outsiders. And to make things worse, even being aware of the curse of knowledge isn’t enough to ensure that you’ll write more clearly; that is, you can’t simply tell someone, “Keep the reader in mind!” and expect them to do so. The best solution, Pinker says, is to have test readers or editors who can tell you where something doesn’t make sense and needs to be revised.

The next chapters provide a crash course on syntax and a guide to creating greater textual coherence, and though they occasionally get bogged down in technical details, they’re full of good advice. For example, Pinker uses syntax tree diagrams to illustrate both the cause of and solution to problems like misplaced modifiers. Tree diagrams are much more intuitive than other diagramming methods like Reed-Kellogg, so you don’t need to be an expert in linguistics to see the differences between two example sentences. And though the guide to syntax is helpful, the chapter on coherence is even better. Pinker explains why seemingly well-written text is sometimes so hard to understand: because even though the sentences appear to hang together just fine, the ideas don’t. The solution is to keep consistent thematic strings throughout a piece, tying ideas together and making the connections between them clear.

The last and by far the longest chapter—it occupies over a third of the book—is essentially a miniature grammar and usage guide prefaced by a primer on the supposed clash between prescriptivism and descriptivism. It’s simultaneously the most interesting and most disappointing chapter in the book. Though it starts rather admirably by explaining the linguistics behind particular usage issues (something I try to do on this blog), it ends with Pinker indulging in some peevery himself. Ironically, some of the usage rules he endorses are no more valid than the ones he debunks, and he gives little justification for his preferences, often simply stating that one form is classier. At least it’s clear, though, that these are his personal preferences and not universal laws. Still, the bulk of the chapter is a lucid guide to some common grammar and usage issues. (And yes, he does get in a little prescriptivist bashing.)

Despite some occasional missteps, The Sense of Style is full of valuable advice and is a welcome addition to the genre of writing guides.


What Descriptivism Is and Isn’t

A few weeks ago, the New Yorker published what is nominally a review of Henry Hitchings’ book The Language Wars (which I still have not read but have been meaning to) but which was really more of a thinly veiled attack on what its author, Joan Acocella, sees as the moral and intellectual failings of linguistic descriptivism. In what John McIntyre called “a bad week for Joan Acocella”, the whole mess was addressed multiple times by various bloggers and other writers.* I wanted to write about it at the time but was too busy, but then the New Yorker did me a favor by publishing a follow-up, “Inescapably, You’re Judged by Your Language”, which was equally off-base, so I figured that the door was still open.

I suspected from the first paragraph that Acocella’s article was headed for trouble, and the second paragraph quickly confirmed it. For starters, her brief description of the history and nature of English sounds like it’s based more on folklore than fact. A lot of people lived in Great Britain before the Anglo-Saxons arrived, and their linguistic contributions were effectively nil. But that’s relatively small stuff. The real problem is that she doesn’t really understand what descriptivism is, and she doesn’t understand that she doesn’t understand, so she spends the next five pages tilting at windmills.

Acocella says that descriptivists “felt that all we could legitimately do in discussing language was to say what the current practice was.” This statement is far too narrow, and not only because it completely leaves out historical linguistics. As a linguist, I think it’s odd to describe linguistics as merely saying what the current practice is, since it makes it sound as though all linguists study is usage. Do psycholinguists say what the current practice is when they do eye-tracking studies or other psychological experiments? Do phonologists or syntacticians say what the current practice is when they devise abstract systems of ordered rules to describe the phonological or syntactic system of a language? What about experts in translation or first-language acquisition or computational linguistics? Obviously there’s far more to linguistics than simply saying what the current practice is.

But when it does come to describing usage, we linguists love facts and complexity. We’re less interested in declaring what’s correct or incorrect than we are in uncovering all the nitty-gritty details. It is true, though, that many linguists are at least a little antipathetic to prescriptivism, but not without justification. Because we linguists tend to deal in facts, we take a rather dim view of claims about language that don’t appear to be based in fact, and, by extension, of the people who make those claims. And because many prescriptions make assertions that are based in faulty assumptions or spurious facts, some linguists become skeptical or even hostile to the whole enterprise.

But it’s important to note that this hostility is not actually descriptivism. It’s also, in my experience, not nearly as common as a lot of prescriptivists seem to assume. I think most linguists don’t really care about prescriptivism unless they’re dealing with an officious copyeditor on a manuscript. It’s true that some linguists do spend a fair amount of effort attacking prescriptivism in general, but again, this is not actually descriptivism; it’s simply anti-prescriptivism.

Some other linguists (and some prescriptivists) argue for a more empirical basis for prescriptions, but this isn’t actually descriptivism either. As Language Log’s Mark Liberman argued here, it’s just prescribing on the basis of evidence rather than personal taste, intuition, tradition, or peevery.

Of course, all of this is not to say that descriptivists don’t believe in rules, despite what the New Yorker writers think. Even the most anti-prescriptivist linguist still believes in rules, but not necessarily the kind that most people think of. Many of the rules that linguists talk about are rather abstract schematics that bear no resemblance to the rules that prescriptivists talk about. For example, here’s a rather simple one, the rule describing intervocalic alveolar flapping (in a nutshell, the process by which a word like latter comes to sound like ladder) in some dialects of English:

intervocalic alveolar flapping: /t, d/ → [ɾ] / V́ __ V (that is, an alveolar stop becomes a flap between a stressed vowel and a following unstressed vowel)
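To make the shape of such a rule concrete, here’s a toy sketch in Python (my own illustration, not from the post). It represents a word as a list of ARPAbet-style segments with stress digits on the vowels; the vowel inventory and the flap symbol “DX” are simplifying assumptions:

```python
# Toy illustration of intervocalic alveolar flapping: rewrite /t/ or /d/
# as a flap when it sits between a stressed vowel and an unstressed vowel.
# Segments follow rough ARPAbet conventions: vowels carry a stress digit,
# "1" for stressed and "0" for unstressed.

VOWELS = {"AA", "AE", "AH", "AO", "EH", "ER", "IH", "IY", "UW"}

def is_vowel(segment):
    """A segment is a vowel if its base (minus the stress digit) is in VOWELS."""
    return segment.rstrip("01") in VOWELS

def flap(segments):
    """Apply the flapping rule, replacing T or D with the flap symbol DX."""
    out = list(segments)
    for i in range(1, len(out) - 1):
        prev, cur, nxt = out[i - 1], out[i], out[i + 1]
        if (cur in {"T", "D"}
                and is_vowel(prev) and prev.endswith("1")
                and is_vowel(nxt) and nxt.endswith("0")):
            out[i] = "DX"
    return out

# "latter" and "ladder" come out identical on the surface:
print(flap(["L", "AE1", "T", "ER0"]))  # ['L', 'AE1', 'DX', 'ER0']
print(flap(["L", "AE1", "D", "ER0"]))  # ['L', 'AE1', 'DX', 'ER0']
```

Real flapping is conditioned by more than this (foot structure and word boundaries matter too), but the sketch shows what kind of thing the rule is: a context-sensitive rewrite that speakers apply effortlessly without ever having been taught it.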

Rules like these constitute the vast bulk of the language, though they’re largely subconscious and unseen, like a sort of linguistic dark matter. The entire canon of prescriptions (my advisor has identified at least 10,000 distinct prescriptive rules in various handbooks, though only a fraction of these are repeated) seems rather peripheral and inconsequential to most linguists, which is another reason why we get annoyed when prescriptivists insist on their importance or identify standard English with them. Despite what most people think, standard English is not really defined by prescriptive rules, which makes it somewhat disingenuous and ironic for prescriptivists to call us hypocrites for writing in standard English.

If there’s anything disingenuous about linguists’ belief in rules, it’s that we’re not always clear about what kinds of rules we’re talking about. It’s easy to say that we believe in the rules of standard English and good communication and whatnot, but we’re often pretty vague about just what exactly those rules are. But that’s probably a topic for another day.

*A roundup of some of the posts on the recent brouhaha:

“Cheap Shot”, “A Bad Week for Joan Acocella”, “Daddy, Are Prescriptivists Real?”, and “Unmourned: The Queen’s English Society” by John McIntyre

“Rules and Rules” and “A Half Century of Usage Denialism” by Mark Liberman

“Descriptivists as Hypocrites (Again)” by Jan Freeman

“Ignorant Blathering at The New Yorker” by Stephen Dodson, aka Languagehat

“Re: The Language Wars” and “False Fronts in the Language Wars” by Steven Pinker

“The New Yorker versus the Descriptivist Specter” by Ben Zimmer

“Speaking Truth about Power” by Nancy Friedman

“Sator Resartus” by Ben Yagoda

I’m sure there are others that I’ve missed. If you know of any more, feel free to make note of them in the comments.


Distinctions, Useful and Otherwise

In a recent New York Times video interview, Steven Pinker touched on the topic of language change, saying, “I think that we do sometimes lose distinctions that it would be nice to preserve—disinterested to mean ‘impartial’ as opposed to ‘bored’, for example.”

He goes on to make the point that language does not degenerate, because it constantly replenishes itself—a point which I agree with—but that line caught the attention of Merriam-Webster’s Peter Sokolowski, who said, “It’s a useful distinction, but why pick a problematic example?” I responded, “I find it ironic that such a useful distinction is so rarely used. And its instability undermines the claims of usefulness.”

What Mr. Sokolowski was alluding to was the fact that the history of disinterested is more complicated than the simple laments over its loss would indicate. If you’re unfamiliar with the usage controversy, it goes something like this: disinterested originally meant ‘impartial’ or ‘unbiased’, and uninterested originally meant ‘bored’, but now people have used disinterested to mean ‘bored’ so much that you can’t use it anymore, because too many people will misunderstand you. It’s an appealing story that encapsulates prescriptivists’ struggle to maintain important aspects of the language in the face of encroaching decay. Too bad it’s not really true.

I won’t dive too deeply into the history of the two words—the always-excellent Merriam-Webster’s Dictionary of English Usage spends over two pages on the topic, revealing a surprisingly complex history—but suffice it to say that disinterested is, as Peter Sokolowski mildly put it, “a problematic example”. The first definition the OED gives for disinterested is “Without interest or concern; not interested, unconcerned. (Often regarded as a loose use.)” The first citation dates to about 1631. The second definition (the correct one, according to traditionalists) is “Not influenced by interest; impartial, unbiased, unprejudiced; now always, Unbiased by personal interest; free from self-seeking. (Of persons, or their dispositions, actions, etc.)” Its first citation, however, is from 1659. And uninterested was originally used in the “impartial” or “unbiased” senses now attributed to disinterested, though those uses are obsolete.

It’s clear from the OED’s citations that both meanings have existed side by side from the 1600s. So there’s not so much a present confusion of the two words as a continuing, three-and-a-half-century-long confusion. And for good reason, too. The positive form interested is the opposite of both disinterested and uninterested, and yet nobody complains that we can’t use it because readers won’t be sure whether we mean “having the attention engaged” or “being affected or involved”, to borrow the Merriam-Webster definitions. If we can use interested to mean two different things, why do we need two different words to refer to the opposite of those things?

And as my advisor, Don Chapman, has written, “When gauging the usefulness of a distinction, we need to keep track of two questions: 1) is it really a distinction, or how easy is the distinction to grasp; 2) is it actually useful, or how often do speakers really use the distinction.”1 Chapman adds that “often the claim that a distinction is useful seems to rest on little more than this: if the prescriber can state a clear distinction, the distinction is considered to be desirable ipso facto.” He then asks, “But how easy is the distinction to maintain in actual usage?” (151).

From the OED citations, it’s clear that speakers have never been able to fully distinguish between the two words. Chapman also pointed out to me that the two prefixes in question, dis- and un-, do not clearly indicate one meaning or the other. The meanings of the two words come from different senses of the root interested, not from the prefixes, so the assignment of meaning to form is arbitrary and must simply be memorized, which makes the distinction difficult for many people to learn and maintain. And even those who do learn the distinction do not employ it very frequently. I know this is anecdotal, but it seems to me that disinterested is far more often mentioned than it is used. I can’t remember the last time I spotted a genuine use of disinterested in the wild.

I think it’s time we dispel the myth that disinterested and uninterested epitomize a lost battle to preserve useful distinctions. The current controversy over its use is not indicative of current laxness or confusion, because there was never a time when people managed to fully distinguish between the two words. If anything, disinterested epitomizes the prescriptivist tendency to elegize the usage wars. The typical discussion of disinterested is often light on historical facts and heavy on wistful sighs over how we can no longer use a word that was perhaps never as useful as we would like to think it was.

Notes

1. Don Chapman, “Bad Ideas in the History of English Usage,” in Studies in the History of the English Language 5, Variation and Change in English Grammar and Lexicon: Contemporary Approaches, ed. Robert A. Cloutier, Anne Marie Hamilton-Brehm, William A. Kretzschmar Jr. (New York: Walter de Gruyter, 2010), 151


They and the Gender-Neutral Pronoun Dilemma

A few weeks ago, as a submission for my topic contest, Bob Scopatz suggested I tackle the issue of gender-neutral pronouns in English. In his comment he said, “I dislike alternating between ‘he’ and ‘she’. I despise all variants of ‘he/she’, ‘s/he’, etc. I know that I should not use ‘they’, but it feels closest to what I really want. Could you maybe give us the latest on this topic and tell me if there is any hope for a consensus usage in my lifetime?” It must be a timely topic, because I’ve read three different articles and watched a video on it in the past week.

The first was Allan Metcalf’s article at Lingua Franca on failed attempts to fill gaps in the language. He says that the need for a gender-neutral pronoun is a gap that has existed for centuries, defying attempts to fill it with neologisms. He notes almost in passing that they is another option but that “filling a singular gap with a plural doesn’t satisfy” everyone.

The next was June Casagrande’s article in the Burbank Leader. She gives the subject a little more attention, discussing the awkwardness of using “he or she” or “him or her” every time and the rising acceptance of the so-called singular they. But then, in similar fashion to the it’s-not-wrong-but-you-still-shouldn’t-do-it approach, she says that she won’t judge others who use singular they, but she’s going to hold off on it herself (presumably because she doesn’t want to be judged negatively for it). She also overlooks some historical facts, namely that they has been used this way since Chaucer’s day and that it wasn’t until the end of the eighteenth century that it was declared ungrammatical by Lindley Murray.

That leads to the next article, an interview with Professor Anne Curzan at Visual Thesaurus. She discusses the “almost hypocritical position” of having to grade students’ papers for grammar and usage issues that she doesn’t believe in, like singular they. She tackles the allegation that it’s incorrect because they is plural, saying that in a sentence like “I was talking to a friend of mine, and they said it was a terrible movie”, “they is clearly singular, because it’s referring to a friend.” This probably won’t carry much weight with some people who believe that it’s innately plural and that you can’t just declare it to be singular when it suits you. Ah, but here’s the rub: English speakers did the same thing with plural you in centuries past.

Originally, English had two second-person pronouns, singular thou and plural you. But speakers began to use you as a formal singular pronoun (think French vous, Spanish usted, or German Sie). Then it began to be used in more and more situations, until thou was only used when talking down to someone and then disappeared from the language altogether. Now we have a pronoun that agrees with verbs like a plural but clearly refers to singular entities all the time. If you can do it, why can’t they?

Further, Steven Pinker argues that “everyone and they are not an ‘antecedent’ and a ‘pronoun’ referring to the same person”, but rather that “they are a ‘quantifier’ and a ‘bound variable,’ a different logical relationship.” He says that “Everyone returned to their seats” means “For all X, X returned to X’s seat.” In other words, there are logical objections to the logical objections to singular they.
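Spelled out in predicate logic (my own rendering, with person and seat-of as assumed predicates), the bound-variable reading looks something like this:

```latex
% "Everyone returned to their seats" on the bound-variable reading:
\forall x\,\bigl(\mathrm{person}(x) \rightarrow \mathrm{returned\_to}(x,\ \mathrm{seat\_of}(x))\bigr)
```

On this reading there is no single individual that their picks out; it simply covaries with whatever x the quantifier ranges over, which is why the objection about number mismatch misses the mark.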

Then there came Emily Brewster’s Ask the Editor video at Merriam-Webster Online. She notes that for the eighteenth-century grammarians who proscribed singular they and prescribed generic he, “inaccuracy of gender was less troublesome than inaccuracy of number.” She then concludes that “all this effort to avoid a usage that’s centuries old strikes some of us as strange” and makes the recommendation, “Perhaps everyone should just do their best in the situations they find themselves in, even if their best involves they as a singular pronoun.”

Rather than join the ranks of grammarians who walk through all the arguments in favor of singular they but then throw their hands up in defeat and tell you to avoid it because it’s not accepted yet, I’m taking a different tack and recommending its use. The problem with not using it until it becomes accepted is that it won’t become accepted until enough people—especially people with some authority in the field of usage—use it and say it’s okay to use it. If we sit around waiting for the day when it’s declared to be acceptable, we’ll be waiting a long time. But while there are still people who will decry it as an error, as I’ve said before, you can’t please everyone. And as Bob said in his original comment, they is what many people already use or want to use. I think it’s the best solution for a common problem, and it’s time to stop wringing our hands over it and embrace it.

So, to answer Bob’s question of whether there will ever be consensus on the issue in our lifetime, I’d say that while there might not be consensus at the moment, I’m hopeful that it will come. I think the tide has already begun to turn as more and more linguists, lexicographers, editors, and writers recommend it as the best solution to a common problem.
