Arrant Pedantry

Lynne Truss and Chicken Little

Lynne Truss, author of the bestselling Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation, is at it again, crying with her characteristic hyperbole and lack of perspective that the linguistic sky is falling because she got a minor bump on the head.

As usual, Truss hides behind the it’s-just-a-joke-but-no-seriously defense. She starts by claiming to have “an especially trivial linguistic point to make” but then goes on to declare that the English language is doomed, and it’s all linguists’ fault. According to Truss, linguists have sat back and watched while literacy levels have declined—and have profited from doing so.

What exactly is the problem this time? That some people mistakenly write some phrases as compound words when they’re not, such as maybe for may be or anyday for any day. (This isn’t even entirely true; anyday is almost nonexistent in print, even in American English, according to Google Ngram Viewer.) I guess from anyday it’s a short, slippery slope to complete language chaos, and then “we might as well all go off and kill ourselves.”

But it’s not clear what her complaint about erroneous compound words has to do with literacy levels. If the only problem with literacy is that some people write maybe when they mean may be, then it seems to be, as she originally says, an especially trivial point. Yes, some people deviate from standard orthography. While this may be irritating and may occasionally cause confusion, it’s not really an indication that people don’t know how to read or write. Even educated people make mistakes, and this has always been the case. It’s not a sign of impending doom.

But let’s consider the analogies she chose to illustrate linguists’ supposed negligence. She says that we’re like epidemiologists who simply catalog all the ways in which people die from diseases or like architects who make notes while buildings collapse. (Interestingly, she makes two remarks about how well paid linguists are. Of course, professors don’t actually make that much, especially those in the humanities or social sciences. And it smacks of hypocrisy from someone whose book has sold 3 million copies.)

Perhaps there is a minor crisis in literacy, at least in the UK. This article says that 16–24-year-olds in the UK are lagging behind many counterparts in other first-world countries. (The headline suggests that they’re trailing the entire world, but the study only looked at select countries from Europe and East Asia.) Wikipedia, however, says that the UK has a 99 percent literacy rate. Maybe young people are slipping a bit, and this is certainly something that educators should address, but it doesn’t appear that countless people are dying from an epidemic of slightly declining literacy rates or that our linguistic structures are collapsing. This is simply not the linguistic apocalypse that Truss makes it out to be.

Anyway, even if it were, why would it be linguists’ job to do something about it? Literacy is taught in primary and secondary school and is usually the responsibility of reading, language arts, or English teachers—not linguists. Why not criticize English professors for sitting back and collecting fat paychecks for writing about literary theory while our kids struggle to read? Because they’re not her ideological enemy, that’s why. Linguists often oppose language pedants like Truss, and so Truss finds some reason—contrived though it may be—to blame them. Though some applied linguists do in fact study things like language acquisition and literacy, most linguists hew to the more abstract and theoretical side of language—syntax, morphology, phonology, and so on. Blaming descriptive linguists for children’s illiteracy is like blaming physicists for children’s inability to ride bikes.

And maybe the real reason why linguists are unconcerned about the upcoming linguistic apocalypse is that there simply isn’t one. Maybe linguists are like meteorologists who observe that, contrary to the claims of some individuals, the sky is not actually falling. In studying the structure of other languages and the ways in which languages change, linguists have realized that language change is not decay. Consider the opening lines from Beowulf, an Old English epic poem over a thousand years old:

HWÆT, WE GAR-DEna in geardagum,
þeodcyninga þrym gefrunon,
hu ða æþelingas ellen fremedon!

Only two words are instantly recognizable to modern English speakers: we and in. The changes from Old English to modern English haven’t made the language better or worse—just different. Some people maintain that they understand that language changes but say that they still oppose certain changes that seem to come from ignorance or laziness. They fear that if we’re not vigilant in opposing such changes, we’ll lose our ability to communicate. But the truth is that most of those changes from Old English to modern English also came from ignorance or laziness, and we seem to communicate just fine today.

Languages can change very radically over time, but contrary to popular belief, they never devolve into caveman grunting. This is because we all have an interest in both understanding and being understood, and we’re flexible enough to adapt to changes that happen within our lifetime. And with language, as opposed to morality or ethics, there is no inherent right or wrong. Correct language is, in a nutshell, what its users consider to be correct for a given time, place, and audience. One generation’s ignorant change is sometimes the next generation’s proper grammar.

It’s no surprise that Truss fundamentally misunderstands what linguists and lexicographers do. She even admits that she was “seriously unqualified” for linguistic debate a few years back, and it seems that nothing has changed. But that probably won’t stop her from continuing to prophesy the imminent destruction of the English language. Maybe Truss is less like Chicken Little and more like the boy who cried wolf, proclaiming disaster not because she actually sees one coming, but rather because she likes the attention.

What Descriptivism Is and Isn’t

A few weeks ago, the New Yorker published what is nominally a review of Henry Hitchings’ book The Language Wars (which I still have not read but have been meaning to) but which was really more of a thinly veiled attack on what its author, Joan Acocella, sees as the moral and intellectual failings of linguistic descriptivism. In what John McIntyre called “a bad week for Joan Acocella,” the whole mess was addressed multiple times by various bloggers and other writers.* I wanted to write about it at the time but was too busy, but then the New Yorker did me a favor by publishing a follow-up, “Inescapably, You’re Judged by Your Language”, which was equally off-base, so I figured that the door was still open.

I suspected from the first paragraph that Acocella’s article was headed for trouble, and the second paragraph quickly confirmed it. For starters, her brief description of the history and nature of English sounds like it’s based more on folklore than fact. A lot of people lived in Great Britain before the Anglo-Saxons arrived, and their linguistic contributions were effectively nil. But that’s relatively small stuff. The real problem is that she doesn’t really understand what descriptivism is, and she doesn’t understand that she doesn’t understand, so she spends the next five pages tilting at windmills.

Acocella says that descriptivists “felt that all we could legitimately do in discussing language was to say what the current practice was.” This statement is far too narrow, and not only because it completely leaves out historical linguistics. As a linguist, I think it’s odd to describe linguistics as merely saying what the current practice is, since it makes it sound as though all linguists study is usage. Do psycholinguists say what the current practice is when they do eye-tracking studies or other psychological experiments? Do phonologists or syntacticians say what the current practice is when they devise abstract systems of ordered rules to describe the phonological or syntactic system of a language? What about experts in translation or first-language acquisition or computational linguistics? Obviously there’s far more to linguistics than simply saying what the current practice is.

But when it does come to describing usage, we linguists love facts and complexity. We’re less interested in declaring what’s correct or incorrect than we are in uncovering all the nitty-gritty details. It is true, though, that many linguists are at least a little antipathetic to prescriptivism, but not without justification. Because we linguists tend to deal in facts, we take a rather dim view of claims about language that don’t appear to be based in fact, and, by extension, of the people who make those claims. And because many prescriptions make assertions that are based in faulty assumptions or spurious facts, some linguists become skeptical or even hostile to the whole enterprise.

But it’s important to note that this hostility is not actually descriptivism. It’s also, in my experience, not nearly as common as a lot of prescriptivists seem to assume. I think most linguists don’t really care about prescriptivism unless they’re dealing with an officious copyeditor on a manuscript. It’s true that some linguists do spend a fair amount of effort attacking prescriptivism in general, but again, this is not actually descriptivism; it’s simply anti-prescriptivism.

Some other linguists (and some prescriptivists) argue for a more empirical basis for prescriptions, but this isn’t actually descriptivism either. As Language Log’s Mark Liberman argued here, it’s just prescribing on the basis of evidence rather than personal taste, intuition, tradition, or peevery.

Of course, all of this is not to say that descriptivists don’t believe in rules, despite what the New Yorker writers think. Even the most anti-prescriptivist linguist still believes in rules, but not necessarily the kind that most people think of. Many of the rules that linguists talk about are rather abstract schematics that bear no resemblance to the rules that prescriptivists talk about. For example, here’s a rather simple one, the rule describing intervocalic alveolar flapping (in a nutshell, the process by which a word like latter comes to sound like ladder) in some dialects of English:

[Figure: the rule for intervocalic alveolar flapping]
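
In standard generative notation, the rule can be written roughly like this (a generic textbook formulation of flapping, not necessarily the exact one shown in the figure):

    \documentclass{article}
    \usepackage{amsmath}
    \usepackage{tipa} % for the IPA flap symbol
    \begin{document}
    % Intervocalic alveolar flapping: /t/ and /d/ surface as the flap [ɾ]
    % after a stressed vowel and before an unstressed one, which is why
    % "latter" and "ladder" can sound identical in many dialects.
    \[
      /t,\, d/ \;\longrightarrow\; [\text{\textfishhookr}]
      \;/\; \text{\'V} \;\underline{\hspace{1.5em}}\; \text{\u{V}}
    \]
    \end{document}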

Rules like these constitute the vast bulk of the language, though they’re largely subconscious and unseen, like a sort of linguistic dark matter. The entire canon of prescriptions (my advisor has identified at least 10,000 distinct prescriptive rules in various handbooks, though only a fraction of these are repeated) seems rather peripheral and inconsequential to most linguists, which is another reason why we get annoyed when prescriptivists insist on their importance or identify standard English with them. Despite what most people think, standard English is not really defined by prescriptive rules, which makes it somewhat disingenuous and ironic for prescriptivists to call us hypocrites for writing in standard English.

If there’s anything disingenuous about linguists’ belief in rules, it’s that we’re not always clear about what kinds of rules we’re talking about. It’s easy to say that we believe in the rules of standard English and good communication and whatnot, but we’re often pretty vague about just what exactly those rules are. But that’s probably a topic for another day.

*A roundup of some of the posts on the recent brouhaha:

“Cheap Shot”, “A Bad Week for Joan Acocella”, “Daddy, Are Prescriptivists Real?”, and “Unmourned: The Queen’s English Society” by John McIntyre

“Rules and Rules” and “A Half Century of Usage Denialism” by Mark Liberman

“Descriptivists as Hypocrites (Again)” by Jan Freeman

“Ignorant Blathering at The New Yorker” by Stephen Dodson, aka Languagehat

“Re: The Language Wars” and “False Fronts in the Language Wars” by Steven Pinker

“The New Yorker versus the Descriptivist Specter” by Ben Zimmer

“Speaking Truth about Power” by Nancy Friedman

“Sator Resartus” by Ben Yagoda

I’m sure there are others that I’ve missed. If you know of any more, feel free to make note of them in the comments.

Rules, Regularity, and Relative Pronouns

The other day I was thinking about relative pronouns and how they get so much attention from usage commentators, and I decided I should write a post about them. I was beaten to the punch by Stan Carey, but that’s okay, because I think I’m going to take it in a somewhat different direction. (And anyway, great minds think alike, right? But maybe you should read his post first, along with my previous post on who and that, if you haven’t already.)

I’m not just talking about that and which but also who, whom, and whose, which is technically a relative possessive adjective. Judging by how often relative pronouns are talked about, you’d assume that most English speakers can’t get them right, even though they’re among the most common words in the language. In fact, in my own research for my thesis, I’ve found that they’re among the most frequent corrections made by copy editors.

So what gives? Why are they so hard for English speakers to get right? The distinctions are pretty clear-cut and can be found in a great many usage and writing handbooks. Some commentators even judgementally declare, “There’s a useful distinction here, and it’s lazy or perverse to pretend otherwise.” But is it really useful, and is it really lazy and perverse to disagree? Or is it perverse to try to inflict a bunch of arbitrary distinctions on speakers and writers?

And arbitrary they are. Many commentators act as if the proposed distinctions between all these words would make things tidier and more regular, but in fact they make the whole system much more complicated. On the one hand, we have the restrictive/nonrestrictive distinction between that and which. On the other hand, we have the animate/inanimate (or human/nonhuman, if you want to be really strict) distinction between who and that/which. And on the other other hand, there’s the subject/object distinction between who and whom. But there’s no subject/object distinction with that or which, except when it’s the object of a preposition—then you have to use which, unless the preposition is stranded, in which case you can use that. And on the final hand, some people have proscribed whose as an inanimate or nonhuman relative possessive adjective, recommending constructions with of which instead, though this rule isn’t as popular, or at least not as frequently talked about, as the others. (How many hands is that? I’ve lost count.)
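
In fact, if you tried to write the prescribed system down as a decision procedure, it would look something like this (a rough sketch of my own in Python; the function and role names are just made up for illustration):

    def prescribed_relative(animate, restrictive, role, prep_stranded=False):
        """Return the traditionally prescribed relative pronoun.

        role is one of "subject", "object", "prep_object" (object of a
        preposition), or "possessive".
        """
        if role == "possessive":
            # whose is proscribed for inanimates; of which is recommended instead
            return "whose" if animate else "of which"
        if animate:
            # who/whom marks case but ignores the restrictive/nonrestrictive distinction
            return "who" if role == "subject" else "whom"
        if role == "prep_object":
            # after a preposition you must use which, unless the preposition
            # is stranded, in which case that is allowed (restrictively)
            return "that" if (prep_stranded and restrictive) else "which"
        # otherwise that/which marks only restrictiveness, never case
        return "that" if restrictive else "which"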

Simple, right? To make it all a little clearer, I’ve even put it into a nice little table.

[Table: the proposed relative pronoun system]

This is, in a nutshell, a very lopsided and unusual system. In a comment on my who/that post, Elaine Chaika says, “No natural grammar rule would work that way. Ever.” I’m not entirely convinced of that, because languages can be surprising in the unusual distinctions they make, but I agree that it is at the least typologically unusual.

“But we have to have rules!” you say. “If we don’t, we’ll have confusion!” But we do have rules—just not the ones that are proposed and promoted. The system we really have, in the absence of the prescriptions, is basically a distinction between animate who and inanimate which with that overlaying the two. Which doesn’t make distinctions by case, but who(m) does, though this distinction is moribund and has probably only been kept alive by the efforts of schoolteachers and editors.

Whom is still pretty much required when it immediately follows a preposition, but not when the preposition is stranded. Since preposition stranding is extremely common in speech and increasingly common in writing, we’re seeing less and less of whom in this position. Whose is still a little iffy with inanimate referents, as in The house whose roof blew off, but many people say this is alright. Others prefer of which, though this can be awkward: The house the roof of which blew off.

That is either animate or inanimate—only who/which make that distinction—and can be either subject or object but cannot follow a preposition, function as a possessive adjective, or be used nonrestrictively. If the preposition is stranded, as in The man that I gave the apple to, then it’s still allowed. But there’s no possessive thats, so you have to use whose or of which. Again, it’s clearer in table form:

[Table: the natural system of relative pronouns]
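
Written out the same way, the natural system is much tidier (again, this is just my own sketch; the sets show where more than one form is in common use):

    def natural_relatives(animate, role, prep_stranded=False, restrictive=True):
        """Return the set of relative pronouns speakers actually use.

        A rough sketch: that overlays who and which but is restrictive-only
        and can't directly follow a preposition.
        """
        if role == "possessive":
            # inanimate whose (the house whose roof blew off) is iffy for some,
            # with of which as the formal but awkward alternative
            return {"whose"} if animate else {"whose", "of which"}
        if role == "prep_object" and not prep_stranded:
            # whom is still pretty much required immediately after a preposition
            return {"whom"} if animate else {"which"}
        base = {"who"} if animate else {"which"}  # who(m) case marking is moribund
        if restrictive:
            base.add("that")
        return base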

The linguist Jonathan Hope wrote that several distinguishing features of Standard English give it “a typologically unusual structure, while non-standard English dialects follow the path of linguistic naturalness.” He then muses on the reason for this:

One explanation for this might be that as speakers make the choices that will result in standardisation, they unconsciously tend towards more complex structures, because of their sense of the prestige and difference of formal written language. Standard English would then become a ‘deliberately’ difficult language, constructed, albeit unconsciously, from elements that go against linguistic naturalness, and which would not survive in a ‘natural’ linguistic environment.[1]

It’s always tricky territory when you speculate on people’s unconscious motivations, but I think he’s on to something. Note that while the prescriptions make for a very asymmetrical system, the system that people naturally use is moving towards a very tidy and symmetrical distribution, though there are still a couple of wrinkles that are being worked out.

But the important point is that people already follow rules—just not the ones that some prescriptivists think they should.

[1] “Rats, Bats, Sparrows and Dogs: Biology, Linguistics and the Nature of Standard English,” in The Development of Standard English, 1300–1800, ed. Laura Wright (Cambridge: Cambridge University Press, 2000), 53.

Continua, Planes, and False Dichotomies

On Twitter, Erin Brenner asked, “How about a post on prescriptivism/descriptivism as a continuum rather than two sides? Why does it have to be either/or?” It’s a great question, and I firmly believe that it’s not an either-or choice. However, I don’t actually agree that prescriptivism and descriptivism occupy different points on a continuum, so I hope Erin doesn’t mind if I take this in a somewhat different direction from what she probably expected.

The problem with calling the two part of a continuum is that I don’t believe they’re on the same line. Putting them on a continuum, in my mind, implies that they share a common trait that is expressed to greater or lesser degrees, but the only real trait they share is that they are both approaches to language. But even this is a little deceptive, because one is an approach to studying language, while the other is an approach to using it.

I think the reason why we so often treat it as a continuum is that the more moderate prescriptivists tend to rely more on evidence and less on flat assertions. This makes us think of prescriptivists who rely less on facts and evidence as occupying a point further along the spectrum. But I think this point of view does a disservice to prescriptivism by treating it as the opposite of fact-based descriptivism. This leads us to think that at one end, we have the unbiased facts of the language, and somewhere in the middle we have opinions based on facts, and at the other end, where undiluted prescriptivism lies, we have opinions that contradict facts. I don’t think this model makes sense or is really an accurate representation of prescriptivism, but unfortunately it’s fairly pervasive.

In its most extreme form, we find quotes like this one from Robert Hall, who, in defending the controversial and mostly prescription-free Webster’s Third, wrote: “The function of grammars and dictionaries is to tell the truth about language. Not what somebody thinks ought to be the truth, nor what somebody wants to ram down somebody else’s throat, not what somebody wants to sell somebody else as being the ‘best’ language, but what people actually do when they talk and write. Anything else is not the truth, but an untruth.”[1]

But I think this is a duplicitous argument, especially for a linguist. If prescriptivism is “what somebody thinks ought to be the truth”, then it doesn’t have a truth value, because it doesn’t express a proposition. And although what is is truth, what somebody thinks should be is not its opposite, untruth.

So if descriptivism and prescriptivism aren’t at different points on a continuum, where are they in relation to each other? Well, first of all, I don’t think pure prescriptivism should be identified with evidence-free assertionism, as Eugene Volokh calls it. Obviously there’s a continuum of practice within prescriptivism, which means it must exist on a separate continuum or axis from descriptivism.

I envision the two occupying a space something like this:

[Figure: graph of descriptivism and prescriptivism]

Descriptivism is concerned with discovering what language is without assigning value judgements. Linguists feel that whether it’s standard or nonstandard, correct or incorrect by traditional standards, language is interesting and should be studied. That is, they try to stay on the right side of the graph, mapping out human language in all its complexity. Some linguists like Hall get caught up in trying to tear down prescriptivism, viewing it as a rival camp that must be destroyed. I think this is unfortunate, because like it or not, prescriptivism is a metalinguistic phenomenon that at the very least is worthy of more serious study.

Prescriptivism, on the other hand, is concerned with good, effective, or proper language. Prescriptivists try to judge what best practice is and formulate rules to map out what’s good or acceptable. In the chapter “Grammar and Usage” in The Chicago Manual of Style, Bryan Garner says his aim is to guide “writers and editors toward the unimpeachable uses of language” (16th ed., 5.219, 15th ed., 5.201).

Reasonable or moderate prescriptivists try to incorporate facts and evidence from actual usage in their prescriptions, meaning that they try to stay in the upper right of the graph. Some prescriptivists stray into untruth territory on the left and become unreasonable prescriptivists, or assertionists. No amount of evidence will sway them; in their minds, certain usages are just wrong. They make arguments from etymology or from overly literal or logical interpretations of meaning. And quite often, they say something’s wrong just because it’s a rule.

So it’s clearly not an either-or choice between descriptivism and prescriptivism. The only thing that’s not really clear, in my mind, is how much of prescriptivism is reliable. That is, do the prescriptions actually map out something we could call “good English”? Quite a lot of the rules do little beyond serving “as a sign that the writer is unaware of the canons of usage”, to quote the usage entry on hopefully in the American Heritage Dictionary (5th ed.). Linguists have been so preoccupied with trying to debunk or discredit prescriptivism that they’ve never really stopped to investigate whether there’s any value to prescriptivists’ claims. True, there have been a few studies along those lines, but I think they’re just scratching the surface of what could be an interesting avenue of study. But that’s a topic for another time.

[1] In Harold B. Allen et al., “Webster’s Third New International Dictionary: A Symposium,” Quarterly Journal of Speech 48 (December 1962): 434.

The Value of Prescriptivism

Last week I asked rather skeptically whether prescriptivism had moral worth. John McIntyre was interested by my question and musing in the last paragraph, and he took up the question (quite admirably, as always) and responded with his own thoughts on prescriptivism. What I see in his post is neither a coherent principle nor an innately moral argument, as Hart argued, but rather a set of sometimes-contradictory principles mixed with personal taste—and I think that’s okay.

Even Hart’s coherent principle is far from coherent when you break it down. The “clarity, precision, subtlety, nuance, and poetic richness” that he touts are really a bundle of conflicting goals. Clear wording may come at the expense of precision, subtlety, and nuance. Subtlety may not be very clear or precise. And so on. And even if these are all worthy goals, there may be many more that are missing.

McIntyre notes several more goals for practical prescriptivists like editors, including effectiveness, respect for an author’s voice, consistency with a set house style, and consideration of reader reactions, which is a quagmire in its own right. As McIntyre notes, some readers may have fits when they see sentence-disjunct “hopefully”, while other readers may find workarounds like “it is to be hoped that” to be stilted.

Of course, any appeal to the preferences of the reader (which is, in a way, more of a construct than a real entity) still requires decision making: which readers are you appealing to? Many of those who give usage advice seem to defer to the sticklers and pedants, even when it can be shown that they’re pretty clearly wrong or at least holding to outdated and somewhat silly notions. Grammar Girl, for example, guides readers through the arguments for and against “hopefully”, repeatedly saying that she hopes it becomes acceptable someday (note how carefully she avoids using “hopefully” herself, even though she claims to support it) but ultimately shies away from the usage, saying that you should avoid it for now because it’s not acceptable yet. (I’ll write about the strange reasoning presented here some other time.)

But whether or not you give in to the pedants and cranks who write angry letters to lecture you on split infinitives and stranded prepositions, it’s still clear that there’s value in considering the reader’s wishes while writing and editing. The author wants to communicate something to an audience; the audience presumably wants to receive that communication. It’s in both parties’ best interests if that communication goes off without a hitch, which is where prescriptivism can come in.

As McIntyre already said, this doesn’t give you an instant answer to every question, but it can give you some methods of gauging roughly how acceptable certain words or constructions are. Ben Yagoda provides his own “somewhat arbitrary metric” for deciding when to fight for a traditional meaning and when to let it go. But the key word here is “arbitrary”; there is no absolute truth in usage, no clear, authoritative source to which you can appeal to solve these questions.

Nevertheless, I believe the prescriptive motivation—the desire to make our language as good as it can be—is, at its core, a healthy one. It leads us to strive for clear and effective communication. It leads us to seek out good language to use as a model. And it slows language change and helps to ensure that writing will be more understandable to audiences that are removed spatially and temporally. But when you try to turn this into a coherent principle to instruct writers on individual points of usage, like transpire or aggravate or enormity, well, then you start running into trouble, because that approach favors fiat over reason and evidence. But I think that an interest in clear and effective language, tempered with a healthy dose of facts and an acknowledgement that the real truth is often messy, can be a boon to all involved.

Does Prescriptivism Have Moral Worth?

I probably shouldn’t be getting into this again, but I think David Bentley Hart’s latest post on language (a follow-up to the one I last wrote about) deserves a response. You see, even though he’s no longer cloaking his peeving with the it’s-just-a-joke-but-no-seriously defense, I think he’s still cloaking his arguments in something else: spurious claims about the nature of descriptivism and the rational and moral superiority of prescriptivism. John McIntyre has already taken a crack at these claims, and I think he’s right on: Hart’s description of descriptivists doesn’t match any descriptivists I know, and his claims about prescriptivism’s rational and moral worth are highly suspect.

Hart gets off to a bad start when he says that “most of [his convictions] require no defense” and then says that “if you can find a dictionary that, say, allows ‘reluctant’ as a definition of ‘reticent,’ you will also find it was printed in Singapore under the auspices of ‘The Happy Luck Goodly Englishing Council.’” Even when he provides a defense, he’s wrong: the Oxford English Dictionary contains precisely that definition, sense 2: “Reluctant to perform a particular action; hesitant, disinclined. Chiefly with about, or to do something.” The first illustrative quotation is from 1875, only 50 years after the first quote for the traditionally correct definition: “The State registrar was just as reticent to give us information.” So much for the Happy Luck Goodly Englishing Council. (Oh, wait, let me guess—this is just another self-undermining flippancy.)

I’m glad that Hart avoids artificial rules such as the proscription against restrictive which and recognizes that “everyone who cares about such matters engages in both prescription and description, often confusing the two”—a point which many on both sides fail to grasp. But I’m disappointed when he says, “The real question, at the end of the day, is whether any distinction can be recognized, or should be maintained, between creative and destructive mutations,” and then utterly fails to address the question. Instead he merely defends his peeves and denigrates as hypocrites those who argue against his peeves without embracing the disputed senses themselves. But I don’t want to get embroiled in discussions about whether reticent to mean “reluctant” is right or wrong or has a long, noble heritage or is an ignorant vulgarism—that’s all beside the point and doesn’t get to the claims Hart employs to justify his peeves.

But near the end, he does say that his “aesthetic prejudice” is also a “coherent principle” because “persons can mean only what they have the words to say, and so the finer our distinctions and more precise our definitions, the more we are able to mean.” On the surface this may seem like a nice sentiment, but I don’t think it’s nearly as coherent as Hart would like to think. First of all, it smacks of the Whorfian hypothesis, the idea that words give you the power to mean things that you couldn’t otherwise mean. I’m fairly confident I could mean “disinclined to speak” even if the word reticent were nonexistent. (Note that even if the “reluctant” meaning completely overtakes the traditional one, we’ll still have words like reserved and taciturn.) Furthermore, it’s possible that certain words lose their original meanings because they weren’t very useful meanings to begin with. Talking about the word decimate, for example, Jan Freeman says, “We don’t especially need a term that means ‘kill one in 10.’” So even if we accept the idea that preserving distinctions is a good thing, we need to ask whether this distinction is a boon to the language and its speakers.

And if defending fine distinctions and precise definitions is such a noble cause, why don’t prescriptivists scour the lexicon for distinctions that can be made finer and definitions that can be made more precise? Why don’t we busy ourselves with coining new words to convey new meanings that would be useful to English speakers? Hart asks whether there can be creative mutations, but he never gives an example of one or even speculates on what one might look like. Perhaps to him all mutations are destructive. Or perhaps there’s some unexplained reason why defending existing meanings is noble but creating new ones is not. Hart never says.

At the end of the day, my question is whether there really is any worth to prescriptivism. Have the activities of prescriptivists actually improved our language—or at least kept it from degenerating—or is it just an excuse to rail against people for their lexical ignorance? Sometimes, when I read articles like Hart’s, I’m inclined to think it’s the latter. I don’t see how his litany of peeves contributes much to the “clarity, precision, subtlety, nuance, and poetic richness” of language, and I think his warning against the “leveling drabness of mass culture” reveals his true intent—he wants to maintain an aristocratic language for himself and other like-minded individuals.

But I don’t think this is what prescriptivism really is, or at least not what it should be. So does prescriptivism have value? I think so, but I’m not entirely sure what it is. To be honest, I’m still sorting out my feelings about prescriptivism. I know I frequently rail against bad prescriptivism, but I certainly don’t think all prescriptivism is bad. I get paid to be a prescriber at work, where it’s my job to clean up others’ prose, but I try not to let my own pet peeves determine my approach to language. I know this looks like I’m doing exactly what I criticized Hart for doing—raising a question and then dodging it—but I’m still trying to find the answer myself. Perhaps I’ll get some good, thoughtful comments on the issue. Perhaps I just need more time to mull it over and sort out my feelings. At any rate, this post is already too long, so I’ll have to leave it for another time.

Gray, Grey, and Circular Prescriptions

A few days ago John McIntyre took a whack at the Associated Press Stylebook’s penchant for flat assertions, this time regarding the spelling of gray/grey. McIntyre noted that gray certainly is more common in American English but that grey is not a misspelling.

In the comments I mused that perhaps gray is only more common because of prescriptions like this one. John Cowan noted that gray is the main head word in Webster’s 1828 dictionary, with grey cross-referenced to it, saying, “So I think we can take it that ‘gray’ has been the standard AmE spelling long before the AP stylebook, or indeed the AP, were in existence.”

But I don’t think Webster’s dictionary really proves that at all. When confronted with multiple spellings of a word, lexicographers must choose which one to include as the main entry in the dictionary. Webster’s choice of gray over grey may have been entirely arbitrary. Furthermore, considering that he was a crusader for spelling reform, I don’t think we can necessarily take the spellings in his dictionary as evidence of what was more common or standard in American English.

So I headed over to Mark Davies’ Corpus of Historical American English to do a little research. I searched for both gray and grey as adjectives and came up with this. The grey line represents the total number of tokens per million words for both forms.

[Figure: gray and grey in tokens per million words]

Up until about the 1840s, gray and grey were about neck and neck. After that, gray really takes off while grey languishes. Now, I realize that this is a rather cursory survey of their historical distribution, and the earliest data in this corpus predates Webster’s dictionary by only a couple of decades. I don’t know how to explain the overall growth of gray/grey in the 1800s. But in spite of these problems, it appears that there are some very clear-cut trend lines—gray became overwhelmingly more common, while grey severely diminished without quite disappearing from American English.
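
(If you want to check numbers like these yourself, the normalization behind “tokens per million words” is simple. Here’s a minimal Python sketch, assuming you’ve exported raw decade-by-decade counts to a CSV; the file name and column names are hypothetical.)

    import csv

    # Each row: decade, word, hits, corpus_words (a made-up export format).
    with open("gray_grey_by_decade.csv", newline="") as f:
        for row in csv.DictReader(f):
            # tokens per million words = raw hits normalized by corpus size
            tpm = int(row["hits"]) / int(row["corpus_words"]) * 1_000_000
            print(row["decade"], row["word"], f"{tpm:.2f}")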

This ties in nicely with a point I’ve made before: descriptivism and prescriptivism are not entirely separable, and there is considerable interplay between the two. It may be that Webster really was describing the linguistic scene as he saw it, choosing gray because he felt that it was more common, or it may be that his choice of gray was arbitrary or influenced by his personal preferences.

Either way, his decision to describe the word in a particular way apparently led to a prescriptive feedback loop: people chose to use the spelling gray because it was in the dictionary, reinforcing its position as the main entry in the dictionary and leading to its ascendancy over grey and eventually to the AP Stylebook’s tweet about its preferred status. What may have started as a value-neutral decision by Webster about an utterly inconsequential issue of spelling variability has become an imperative to editors . . . about what is still an utterly inconsequential issue of spelling variability.

Personally, I’ve always had a soft spot for grey.

Scriptivists Revisited

Before I begin: I know—it’s been a terribly, horribly, unforgivably long time since my last post. Part of it is that I’m often busy with grad school and work and family, and part of it is that I’ve been thinking an awful lot lately about prescriptivism and descriptivism and linguists and editors and don’t really know where to begin.

I know that I’ve said some harsh things about prescriptivists before, but I don’t actually hate prescriptivism in general. As I’ve said before, prescriptivism and descriptivism are not really diametrically opposed, as some people believe they are. Stan Carey explores some of the common ground between the two in a recent post, and I think there’s a lot more to be said about the issue.

I think it’s possible to be a descriptivist and prescriptivist simultaneously. In fact, I think it’s difficult if not impossible to fully disentangle the two approaches. The fact is that many or most prescriptive rules are based on observed facts about the language, even though those facts may be incomplete or misunderstood in some way. Very seldom does anyone make up a rule out of whole cloth that bears no resemblance to reality. Rules often arise because someone has observed a change or variation in the language and is seeking to slow or reverse that change (as in insisting that “comprised of” is always an error) or to regularize the variation (as in insisting that “which” be used for nonrestrictive relative clauses and “that” for restrictive ones).

One of my favorite language blogs, Motivated Grammar, declares “Prescriptivism must die!” but to be honest, I’ve never quite been comfortable with that slogan. Now, I love a good debunking of language myths as much as the next guy—and Gabe Doyle does a commendable job of it—but not all prescriptivism is a bad thing. The impulse to identify and fix potential problems with the language is a natural one, and it can be used for both good and ill. Just take a look at the blogs of John E. McIntyre, Bill Walsh, and Jan Freeman for examples of well-informed, sensible language advice. Unfortunately, as linguists and many others know, senseless language advice is all too common.

Linguists often complain about and debunk such bad language advice—and rightly so, in my opinion—but I think in doing so they often make the mistake of dismissing prescriptivism altogether. Too often linguists view prescriptivism as an annoyance to be ignored or as a rival approach that must be quashed, but either way they miss the fact that prescriptivism is a metalinguistic phenomenon worth exploring and understanding. And why is it worth exploring? Because it’s an essential part of how ordinary speakers—and even linguists—use language in their daily lives, whether they realize it or not.

Contrary to what a lot of linguists say, language isn’t really a natural phenomenon—it’s a learned behavior. And as with any other human behavior, we generally strive to make our language match observed standards. Or as Emily Morgan so excellently says in a guest post on Motivated Grammar, “Language is something that we as a community of speakers collectively create and reinvent each time we speak.” She says that this means that language is “inextricably rooted in a descriptive generalization about what that community does,” but it also means that it is rooted in prescriptive notions of language. Because when speakers create and reinvent language, they do so by shaping their language to fit listeners’ expectations.

That is, for the most part, there’s no difference in speakers’ minds between what they should do with language and what they do do with language. They use language the way they do because they feel as though they should, and this in turn reinforces the model that influences everyone else’s behavior. I’ve often reflected on the fact that style guides like The Chicago Manual of Style will refer to dictionaries for spelling issues—thus prescribing how to spell—but these dictionaries simply describe the language found in edited writing. Description and prescription feed each other in an endless loop. This may not be mathematical logic, but it is a sort of logic nonetheless. Philosophers love to say that you can’t derive an ought from an is, and yet people do nonetheless. If you want to fit in with a certain group, then you should behave in such a way as to be accepted by that group, and that group’s behavior is simply an aggregate of the behaviors of everyone else trying to fit in.

And at this point, linguists are probably thinking, “And people should be left alone to behave the way they wish to behave.” But leaving people alone means letting them decide which behaviors to favor and which to disfavor—that is, which rules to create and enforce. Linguists often criticize those who create and propagate rules, as if such rules are bad simply as a result of their artificiality, but, once again, the truth is that all language is artificial; it doesn’t exist until we make it exist. And if we create it, why should we always be coolly dispassionate about it? Objectivity might be great in the scientific study of language, but why should language users approach language the same way? Why should we favor “natural” or “spontaneous” changes and yet disfavor more conscious changes?

This is something that Deborah Cameron addresses in her book Verbal Hygiene (which I highly, highly recommend)—the notion that “spontaneous” or “natural” changes are okay, while deliberate ones are meddlesome and should be resisted. As Cameron counters, “If you are going to make value judgements at all, then surely there are more important values than spontaneity. How about truth, beauty, logic, utility?” (1995, 20). Of course, linguists generally argue that an awful lot of prescriptions do nothing to create more truth, beauty, logic, or utility, and this is indeed a problem, in my opinion.

But when linguists debunk such spurious prescriptions, they miss something important: people want language advice from experts, and they’re certainly not getting it from linguists. The industry of bad language advice exists partly because the people who arguably know the most about how language really works—the linguists—aren’t at all interested in giving advice on language. Often they take the hands-off attitude exemplified in Robert Hall’s book Leave Your Language Alone, crying, “Linguistics is descriptive, not prescriptive!” But in doing so, linguists are nonetheless injecting themselves into the debate rather than simply observing how people use language. If an objective, hands-off approach is so valuable, then why don’t linguists really take their hands off and leave prescriptivists alone?

I think the answer is that there’s a lot of social value in following language rules, whether or not they are actually sensible. And linguists, being the experts in the field, don’t like ceding any social or intellectual authority to a bunch of people that they view as crackpots and petty tyrants. They chafe at the idea that such ill-informed, superstitious advice—what Language Log calls “prescriptivist poppycock”—can or should have any value at all. It puts informed language users in the position of having to decide whether to follow a stupid rule so as to avoid drawing the ire of some people or to break the rule and thereby look stupid to those people. Arnold Zwicky explores this conundrum in a post titled “Crazies Win.”

Note something interesting at the end of that post: Zwicky concludes by giving his own advice—his own prescription—regarding the issue of split infinitives. Is this a bad thing? No, not at all, because prescriptivism is not the enemy. As John Algeo said in an article in College English, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense” (“Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 [December 1969]: 276). As I’ve said before, the nonsense is abundant. Just look at this awful Reader’s Digest column or this article on a Monster.com site for teachers for a couple recent examples.

Which brings me back to a point I’ve made before: linguists need to be more involved in not just educating the public about language, but in giving people the sensible advice they want. Trying to kill prescriptivism is not the answer to the language wars, and truly leaving language alone is probably a good way to end up with a dead language. Exploring it and trying to figure out how best to use it—this is what keeps language alive and thriving and interesting. And that’s good for prescriptivists and descriptivists alike.

Linguists and Straw Men

Sorry I haven’t posted in so long (I know I say that a lot)—I’ve been busy with school and things. Anyway, a couple months back I got a comment on an old post of mine, and I wanted to address it. I know it’s a bit lame to respond to two-month-old comments, but it was on a two-year-old post, so I figure it’s okay.

The comment is here, under a post of mine entitled “Scriptivists”. I believe the comment is supposed to be a rebuttal of that post, but I’m a little confused by the attempt. The commenter apparently accuses me of burning straw men, but ironically, he sets up a massive straw man of his own.

His first point seems to make fun of linguists for using technical terminology, but I’m not sure what that really proves. After all, technical terminology allows you to be very specific about abstract or complicated issues, so how is that really a criticism? I suppose it keeps a lot of laypeople from understanding what you’re saying, but if that’s the worst criticism you’ve got, then I guess I’ve got to shrug my shoulders and say, “Guilty as charged.”

The second point just makes me scratch my head. Using usage evidence from the greatest writers is a bad thing now? Honestly, how do you determine what usage features are good and worthy of emulation if not by looking to the most respected writers in the language?

The last point is just stupid. How often do you see Geoffrey Pullum or Languagehat or any of the other linguistics bloggers whipping out the fact that they have graduate degrees?

And I must disagree with Mr. Kevin S. that the “Mrs. Grundys” of the world don’t actually exist. I’ve heard too many stupid usage superstitions being perpetuated today and seen too much Strunk & White worship to believe that that sort of prescriptivist is extinct. Take, for example, Sonia Sotomayor, who says that split infinitives make her “blister”. Or take one of my sister-in-law’s professors, who insisted that her students could not use the following features in their writing:

  • The first person
  • The passive voice
  • Phrases like “this paper will show . . .” or “the data suggest . . .” because, according to her, papers are not capable of showing and data is not capable of suggesting.

How, exactly, are you supposed to write an academic paper without resorting to one of those devices—none of which, by the way, are actually wrong—at one time or another? These proscriptions were absolutely nonsensical, supported by neither logic nor usage nor common sense.

There’s still an awful lot of absolute bloody nonsense coming from the prescriptivists of the world. (Of course, this is not to say that all or even most prescriptivists are like this; take, for example, the inimitable John McIntyre, who is one of the most sensible and well-informed prescriptivists I’ve ever encountered.) And sorry to say, I don’t see the same sort of stubborn and ill-informed arguments coming from the descriptivists’ camp. And I’m pretty sure I’ve never seen a descriptivist who resembled the straw man that Kevin S. constructed.

Rules Are Rules

Recently I was involved in an online discussion about the pronunciation of the word the before vowels. Someone wanted to know if it was pronounced /ði/ (“thee”) before vowels only in singing, or if it was a general rule of speech as well. His dad had said it was a rule, but he had never heard it before and wondered if maybe it was more of a convention than a rule. Throughout the conversation, several more people expressed similar opinions—they’d never heard this rule before and they doubted whether it was really a rule at all.

There are a few problems here. First of all, not everybody means exactly the same thing when they talk about rules. It’s like when laymen dismiss evolution because it’s “just a theory.” They forget that gravity is also just a theory. And when laymen talk about linguistic rules, they usually mean prescriptive rules. Prescriptive rules usually state that a particular thing should be done, which typically implies that it often isn’t done.

But when linguists talk about rules, they mean descriptive ones. Think of it this way: if you were going to teach a computer how to speak English fluently, what would it need to know? Well, one tiny little detail that it would need to know is that the word the is pronounced with a schwa (/ðə/) except when it is stressed or followed by a vowel. Nobody needs to be taught this rule, except for non-native speakers, because we all learn it by hearing it when we’re children. And thus it’s never taught in English class, so it throws some people for a bit of a loop when they hear it called a rule.
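
To make that concrete, here’s a toy version of the rule in Python (my own sketch; it uses spelling as a crude stand-in for sound, so it misfires on words like hour or unicorn):

    def pronounce_the(next_word, stressed=False):
        """Choose a pronunciation for "the" (a toy sketch of the rule)."""
        # /ði/ ("thee") when stressed or before a vowel; /ðə/ (schwa) otherwise.
        # Checking the first letter only approximates checking the first sound.
        if stressed or (next_word and next_word[0].lower() in "aeiou"):
            return "/ði/"
        return "/ðə/"

    print(pronounce_the("apple"))  # /ði/
    print(pronounce_the("dog"))    # /ðə/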

But even on the prescriptivist side of things, not all rules are created equal. There are a lot of rules that are generally covered in English classes, and they’re usually taught as simple black-and-white declarations: x is right and y is wrong. When people ask me questions about language, they usually seem to expect answers along these lines. But many issues of grammar and usage are complicated and have no clear right or wrong answer. Same with style—open up two different style guides, and you’ll often find two (or more) ways to punctuate, hyphenate, and capitalize. A lot of times these things boil down to issues of formality, context, and personal taste.

Unfortunately, most of us hear language rules expressed as inviolable laws all the way through public school and probably into college. It’s hard to overcome a dozen years or more of education on a subject and start to learn that maybe things aren’t as simple as you’ve been told, that maybe those trusted authorities and gatekeepers of the language, the English teachers, were not always well-informed. But as writing becomes more and more important in modern life, it likewise becomes more important to teach people meaningful, well-founded rules that aren’t two centuries old. It’s time for English class to get educated.
