Arrant Pedantry


What Descriptivism Is and Isn’t

A few weeks ago, the New Yorker published what is nominally a review of Henry Hitchings’ book The Language Wars (which I still have not read but have been meaning to) but which was really more of a thinly veiled attack on what its author, Joan Acocella, sees as the moral and intellectual failings of linguistic descriptivism. In what John McIntyre called “a bad week for Joan Acocella,” the whole mess was addressed multiple times by various bloggers and other writers.* I wanted to write about it at the time but was too busy, but then the New Yorker did me a favor by publishing a follow-up, “Inescapably, You’re Judged by Your Language”, which was equally off-base, so I figured that the door was still open.

I suspected from the first paragraph that Acocella’s article was headed for trouble, and the second paragraph quickly confirmed it. For starters, her brief description of the history and nature of English sounds like it’s based more on folklore than fact. A lot of people lived in Great Britain before the Anglo-Saxons arrived, and their linguistic contributions were effectively nil. But that’s relatively small stuff. The real problem is that she doesn’t really understand what descriptivism is, and she doesn’t understand that she doesn’t understand, so she spends the next five pages tilting at windmills.

Acocella says that descriptivists “felt that all we could legitimately do in discussing language was to say what the current practice was.” This statement is far too narrow, and not only because it completely leaves out historical linguistics. As a linguist, I think it’s odd to describe linguistics as merely saying what the current practice is, since it makes it sound as though all linguists study is usage. Do psycholinguists say what the current practice is when they do eye-tracking studies or other psychological experiments? Do phonologists or syntacticians say what the current practice is when they devise abstract systems of ordered rules to describe the phonological or syntactic system of a language? What about experts in translation or first-language acquisition or computational linguistics? Obviously there’s far more to linguistics than simply saying what the current practice is.

But when it does come to describing usage, we linguists love facts and complexity. We’re less interested in declaring what’s correct or incorrect than we are in uncovering all the nitty-gritty details. It is true, though, that many linguists are at least a little antipathetic to prescriptivism, but not without justification. Because we linguists tend to deal in facts, we take a rather dim view of claims about language that don’t appear to be based in fact, and, by extension, of the people who make those claims. And because many prescriptions make assertions that are based in faulty assumptions or spurious facts, some linguists become skeptical or even hostile to the whole enterprise.

But it’s important to note that this hostility is not actually descriptivism. It’s also, in my experience, not nearly as common as a lot of prescriptivists seem to assume. I think most linguists don’t really care about prescriptivism unless they’re dealing with an officious copyeditor on a manuscript. It’s true that some linguists do spend a fair amount of effort attacking prescriptivism in general, but again, this is not actually descriptivism; it’s simply anti-prescriptivism.

Some other linguists (and some prescriptivists) argue for a more empirical basis for prescriptions, but this isn’t actually descriptivism either. As Language Log’s Mark Liberman argued here, it’s just prescribing on the basis of evidence rather than personal taste, intuition, tradition, or peevery.

Of course, all of this is not to say that descriptivists don’t believe in rules, despite what the New Yorker writers think. Even the most anti-prescriptivist linguist still believes in rules, but not necessarily the kind that most people think of. Many of the rules that linguists talk about are rather abstract schematics that bear no resemblance to the rules that prescriptivists talk about. For example, here’s a rather simple one, the rule describing intervocalic alveolar flapping (in a nutshell, the process by which a word like latter comes to sound like ladder) in some dialects of English:

[Figure: the rule for intervocalic alveolar flapping]
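
If the figure doesn’t come through, here’s an approximate, textbook-style statement of the rule. This is my own rendering, not necessarily the notation in the original image: roughly, /t/ and /d/ surface as a flap between a stressed vowel and a following unstressed vowel.

```latex
% Approximate textbook statement of intervocalic alveolar flapping
% (my rendering; the original figure may use different notation):
% /t/ and /d/ become a flap between a stressed vowel and an unstressed vowel.
\[
  \{\,t,\ d\,\} \;\longrightarrow\; [\mathrm{flap}]
  \;/\; \acute{V}\ \underline{\hspace{1.5em}}\ V
\]
```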

Rules like these constitute the vast bulk of the language, though they’re largely subconscious and unseen, like a sort of linguistic dark matter. The entire canon of prescriptions (my advisor has identified at least 10,000 distinct prescriptive rules in various handbooks, though only a fraction of these are repeated) seems rather peripheral and inconsequential to most linguists, which is another reason why we get annoyed when prescriptivists insist on their importance or identify standard English with them. Despite what most people think, standard English is not really defined by prescriptive rules, which makes it somewhat disingenuous and ironic for prescriptivists to call us hypocrites for writing in standard English.

If there’s anything disingenuous about linguists’ belief in rules, it’s that we’re not always clear about what kinds of rules we’re talking about. It’s easy to say that we believe in the rules of standard English and good communication and whatnot, but we’re often pretty vague about just what exactly those rules are. But that’s probably a topic for another day.

*A roundup of some of the posts on the recent brouhaha:

“Cheap Shot”, “A Bad Week for Joan Acocella”, “Daddy, Are Prescriptivists Real?”, and “Unmourned: The Queen’s English Society” by John McIntyre

“Rules and Rules” and “A Half Century of Usage Denialism” by Mark Liberman

“Descriptivists as Hypocrites (Again)” by Jan Freeman

“Ignorant Blathering at The New Yorker” by Stephen Dodson, aka Languagehat

“Re: The Language Wars” and “False Fronts in the Language Wars” by Steven Pinker

“The New Yorker versus the Descriptivist Specter” by Ben Zimmer

“Speaking Truth about Power” by Nancy Friedman

“Sator Resartus” by Ben Yagoda

I’m sure there are others that I’ve missed. If you know of any more, feel free to make note of them in the comments.


Comprised of Fail

A few days ago on Twitter, John McIntyre wrote, “A reporter has used ‘comprises’ correctly. I feel giddy.” And a couple of weeks ago, Nancy Friedman tweeted, “Just read ‘is comprised of’ in a university’s annual report. I give up.” I’ve heard editors confess that they can never remember how to use comprise correctly and always have to look it up. And recently I spotted a really bizarre use in Wired, complete with a subject-verb agreement problem: “It is in fact a Meson (which comprise of a quark and an anti-quark).” So what’s wrong with this word that makes it so hard to get right?

I did a project on “comprised of” for my class last semester on historical changes in American English, and even though I knew it was becoming increasingly common even in edited writing, I was still surprised to see the numbers. For those unfamiliar with the rule, it’s actually pretty simple: the whole comprises the parts, and the parts compose the whole. This makes the two words reciprocal antonyms, meaning that they describe opposite sides of a relationship, like buy/sell or teach/learn. Another way to look at it is that comprise essentially means “to be composed of,” while “compose” means “to be comprised in” (note: in, not of). But increasingly, comprise is being used not as an antonym for compose, but as a synonym.

It’s not hard to see why it’s happened. They’re extremely similar in sound, and each is equivalent to the passive form of the other. When “comprises” means the same thing as “is composed of,” it’s almost inevitable that some people are going to conflate the two and produce “is comprised of.” According to the rule, any instance of “comprised of” is an error that should probably be replaced with “composed of.” Regardless of the rule, this usage has risen sharply in recent decades, though it’s still dwarfed by “composed of.” (Though “composed of” appears to be in serious decline; I have no idea why.) The following chart shows its frequency in COHA and the Google Books Corpus.

[Figure: frequency of “comprised of” and “composed of” in COHA and the Google Books Corpus]

Though it still looks pretty small on the chart, “comprised of” now occurs anywhere from 21 percent as often as “composed of” (in magazines) to a whopping 63 percent as often (in speech) according to COCA. (It’s worth noting, of course, that the speech genre in COCA is composed of a lot of news and radio show transcripts, so even though it’s unscripted, it’s not exactly reflective of typical speech.)
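
In case the arithmetic behind those percentages isn’t obvious, here’s a minimal sketch of the comparison. The per-million figures below are made-up placeholders for illustration, not the actual COCA numbers:

```python
# The "X percent as often" comparison is just the ratio of the two phrases'
# per-million-word frequencies within each genre.
# These numbers are hypothetical placeholders, NOT the actual COCA figures.
freq_per_million = {
    "magazines": {"comprised of": 2.0, "composed of": 10.0},
    "speech": {"comprised of": 6.0, "composed of": 10.0},
}

for genre, freqs in freq_per_million.items():
    ratio = freqs["comprised of"] / freqs["composed of"]
    print(f"{genre}: 'comprised of' appears {ratio:.0%} as often as 'composed of'")
```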

[Figure: frequency of “comprised of” by genre in COCA]

What I find most striking about this graph is the frequency of “comprised of” in academic writing. It is often held that standard English is the variety of English used by the educated elite, especially in writing. In this case, though, academics are leading the charge in the spread of a nonstandard usage. Like it or not, it’s becoming increasingly common, and the prestige lent to it by its academic feel is certainly a factor.

But it’s not just “comprised of” that’s the problem; remember that the whole comprises the parts, which means that comprise should be used with singular subjects and plural objects (or multiple subjects with multiple respective objects, as in The fifty states comprise some 3,143 counties; each individual state comprises many counties). So according to the rule, not only is The United States is comprised of fifty states an error, but so is The fifty states comprise the United States.

It can start to get fuzzy, though, when either the subject or the object is a mass or collective noun, as in “youngsters comprise 17% of the continent’s workforce,” to take an example from Mark Davies’ COCA. This kind of error may be harder to catch, because the relationship between parts and whole is a little more abstract.

And with all the data above, it’s important to remember that we’re seeing things that have made it into print. As I said above, many editors have to look up the rule every time they encounter a form of “comprise” in print, meaning that they’re more liable to make mistakes. It’s possible that many more editors don’t even know that there is a rule, and so they read past it without a second thought.

Personally, I gave up on the rule a few years ago when one day it struck me that I couldn’t recall the last time I’d seen it used correctly in my editing. It’s never truly ambiguous (though if you can find an ambiguous example that doesn’t require willful misreading, please share), and it’s safe to assume that if nearly all of our authors who use comprise do so incorrectly, then most of our readers probably won’t notice, because they think that’s the correct usage.

And who’s to say it isn’t correct now? When it’s used so frequently, especially by highly literate and highly educated writers and speakers, I think you have to recognize that the rule has changed. To insist that it’s always an error, no matter how many people use it, is to deny the facts of usage. Good usage has to have some basis in reality; it can’t be grounded only in the ipse dixits of self-styled usage authorities.

And of course, it’s worth noting that the “traditional” meaning of comprise is really just one in a long series of loosely related meanings the word has had since it was first borrowed into English from French in the 1400s, including “to seize,” “to perceive or comprehend,” “to bring together,” and “to hold.” Perhaps the new “compose” meaning (which in reality is over two hundred years old at this point) is just another step in the evolution of the word.


The Value of Prescriptivism

Last week I asked rather skeptically whether prescriptivism had moral worth. John McIntyre was interested by my question and musing in the last paragraph, and he took up the question (quite admirably, as always) and responded with his own thoughts on prescriptivism. What I see in his post is neither a coherent principle nor an innately moral argument, as Hart argued, but rather a set of sometimes-contradictory principles mixed with personal taste—and I think that’s okay.

Even Hart’s coherent principle is far from coherent when you break it down. The “clarity, precision, subtlety, nuance, and poetic richness” that he touts are really a bundle of conflicting goals. Clear wording may come at the expense of precision, subtlety, and nuance. Subtlety may not be very clear or precise. And so on. And even if these are all worthy goals, there may be many more that are missing.

McIntyre notes several more goals for practical prescriptivists like editors, including effectiveness, respect for an author’s voice, consistency with a set house style, and consideration of reader reactions, which is a quagmire in its own right. As McIntyre notes, some readers may have fits when they see sentence-disjunct “hopefully”, while other readers may find workarounds like “it is to be hoped that” to be stilted.

Of course, any appeal to the preferences of the reader (which is, in a way, more of a construct than a real entity) still requires decision making: which readers are you appealing to? Many of those who give usage advice seem to defer to the sticklers and pedants, even when it can be shown that they’re pretty clearly wrong or at least holding to outdated and somewhat silly notions. Grammar Girl, for example, guides readers through the arguments for and against “hopefully”, repeatedly saying that she hopes it becomes acceptable someday (note how carefully she avoids using “hopefully” herself, even though she claims to support it) but ultimately shies away from the usage, saying that you should avoid it for now because it’s not acceptable yet. (I’ll write about the strange reasoning presented here some other time.)

But whether or not you give in to the pedants and cranks who write angry letters to lecture you on split infinitives and stranded prepositions, it’s still clear that there’s value in considering the reader’s wishes while writing and editing. The author wants to communicate something to an audience; the audience presumably wants to receive that communication. It’s in both parties’ best interests if that communication goes off without a hitch, which is where prescriptivism can come in.

As McIntyre already said, this doesn’t give you an instant answer to every question, but it can give you some methods of gauging roughly how acceptable certain words or constructions are. Ben Yagoda provides his own “somewhat arbitrary metric” for deciding when to fight for a traditional meaning and when to let it go. But the key word here is “arbitrary”; there is no absolute truth in usage, no clear, authoritative source to which you can appeal to solve these questions.

Nevertheless, I believe the prescriptive motivation—the desire to make our language as good as it can be—is, at its core, a healthy one. It leads us to strive for clear and effective communication. It leads us to seek out good language to use as a model. And it slows language change and helps to ensure that writing will be more understandable to audiences that are removed spatially and temporally. But when you try to turn this into a coherent principle to instruct writers on individual points of usage, like transpire or aggravate or enormity, well, then you start running into trouble, because that approach favors fiat over reason and evidence. But I think that an interest in clear and effective language, tempered with a healthy dose of facts and an acknowledgement that the real truth is often messy, can be a boon to all involved.


Does Prescriptivism Have Moral Worth?

I probably shouldn’t be getting into this again, but I think David Bentley Hart’s latest post on language (a follow-up to the one I last wrote about) deserves a response. You see, even though he’s no longer cloaking his peeving with the it’s-just-a-joke-but-no-seriously defense, I think he’s still cloaking his arguments in something else: spurious claims about the nature of descriptivism and the rational and moral superiority of prescriptivism. John McIntyre has already taken a crack at these claims, and I think he’s right on: Hart’s description of descriptivists doesn’t match any descriptivists I know, and his claims about prescriptivism’s rational and moral worth are highly suspect.

Hart gets off to a bad start when he says that “most of [his convictions] require no defense” and then says that “if you can find a dictionary that, say, allows ‘reluctant’ as a definition of ‘reticent,’ you will also find it was printed in Singapore under the auspices of ‘The Happy Luck Goodly Englishing Council.’” Even when he provides a defense, he’s wrong: the Oxford English Dictionary contains precisely that definition, sense 2: “Reluctant to perform a particular action; hesitant, disinclined. Chiefly with about, or to do something.” The first illustrative quotation is from 1875, only 50 years after the first quote for the traditionally correct definition: “The State registrar was just as reticent to give us information.” So much for the Happy Luck Goodly Englishing Council. (Oh, wait, let me guess—this is just another self-undermining flippancy.)

I’m glad that Hart avoids artificial rules such as the proscription against restrictive which and recognizes that “everyone who cares about such matters engages in both prescription and description, often confusing the two”—a point which many on both sides fail to grasp. But I’m disappointed when he says, “The real question, at the end of the day, is whether any distinction can be recognized, or should be maintained, between creative and destructive mutations,” and then utterly fails to address the question. Instead he merely defends his peeves and denigrates as hypocrites those who argue against them without embracing the disputed senses themselves. But I don’t want to get embroiled in discussions about whether reticent to mean “reluctant” is right or wrong or has a long, noble heritage or is an ignorant vulgarism—that’s all beside the point and doesn’t get to the claims Hart employs to justify his peeves.

But near the end, he does say that his “aesthetic prejudice” is also a “coherent principle” because “persons can mean only what they have the words to say, and so the finer our distinctions and more precise our definitions, the more we are able to mean.” On the surface this may seem like a nice sentiment, but I don’t think it’s nearly as coherent as Hart would like to think. First of all, it smacks of the Whorfian hypothesis, the idea that words give you the power to mean things that you couldn’t otherwise mean. I’m fairly confident I could mean “disinclined to speak” even if the word reticent were nonexistent. (Note that even if the “reluctant” meaning completely overtakes the traditional one, we’ll still have words like reserved and taciturn.) Furthermore, it’s possible that certain words lose their original meanings because they weren’t very useful meanings to begin with. Talking about the word decimate, for example, Jan Freeman says, “We don’t especially need a term that means ‘kill one in 10.’” So even if we accept the idea that preserving distinctions is a good thing, we need to ask whether this distinction is a boon to the language and its speakers.

And if defending fine distinctions and precise definitions is such a noble cause, why don’t prescriptivists scour the lexicon for distinctions that can be made finer and definitions that can be made more precise? Why don’t we busy ourselves with coining new words to convey new meanings that would be useful to English speakers? Hart asks whether there can be creative mutations, but he never gives an example of one or even speculates on what one might look like. Perhaps to him all mutations are destructive. Or perhaps there’s some unexplained reason why defending existing meanings is noble but creating new ones is not. Hart never says.

At the end of the day, my question is whether there really is any worth to prescriptivism. Have the activities of prescriptivists actually improved our language—or at least kept it from degenerating—or is it just an excuse to rail against people for their lexical ignorance? Sometimes, when I read articles like Hart’s, I’m inclined to think it’s the latter. I don’t see how his litany of peeves contributes much to the “clarity, precision, subtlety, nuance, and poetic richness” of language, and I think his warning against the “leveling drabness of mass culture” reveals his true intent—he wants to maintain an aristocratic language for himself and other like-minded individuals.

But I don’t think this is what prescriptivism really is, or at least not what it should be. So does prescriptivism have value? I think so, but I’m not entirely sure what it is. To be honest, I’m still sorting out my feelings about prescriptivism. I know I frequently rail against bad prescriptivism, but I certainly don’t think all prescriptivism is bad. I get paid to be a prescriber at work, where it’s my job to clean up others’ prose, but I try not to let my own pet peeves determine my approach to language. I know this looks like I’m doing exactly what I criticized Hart for doing—raising a question and then dodging it—but I’m still trying to find the answer myself. Perhaps I’ll get some good, thoughtful comments on the issue. Perhaps I just need more time to mull it over and sort out my feelings. At any rate, this post is already too long, so I’ll have to leave it for another time.


It’s just a joke. But no, seriously.

I know I just barely posted about the rhetoric of prescriptivism, but it’s still on my mind, especially after the recent post by David Bentley Hart and the responses by John E. McIntyre (here and here) and Robert Lane Greene. I know things are just settling down, but my intent here is not to throw more fuel on the fire, but to draw attention to what I believe is a problematic trend in the rhetoric of prescriptivism. Hart claims that his piece is just some light-hearted humor, but as McIntyre, Greene, and others have complained, it doesn’t really feel like humor.

That is, while it is clear that Hart doesn’t really believe that the acceptance of solecisms leads to the acceptance of cannibalism, it seems that he really does believe that solecisms are a serious problem. Indeed, Hart says, “Nothing less than the future of civilization itself is at issue—honestly—and I am merely doing my part to stave off the advent of an age of barbarism.” If it’s all a joke, as he says, then this statement is somewhat less than honest. And as at least one person says in the comments, Hart’s style is close to self-parody. (As an intellectual exercise, just try to imagine what a real parody would look like.) Perhaps I’m just being thick, but I can only see two reasons for such a style: first, it’s a genuine parody designed to show just how ridiculous the peevers are, or second, it’s a cover for genuine peeving.

I’ve seen this same phenomenon at work in the writings of Lynne Truss, Martha Brockenbrough, and others. They make some ridiculously over-the-top statements about the degenerate state of language today, they get called on it, and then they or their supporters put up the unassailable defense: It’s just a joke, see? Geez, lighten up! Also, you’re kind of a dimwit for not getting it.

That is, not only is it a perfect defense for real peeving, but it’s a booby-trap for anyone who dares to criticize the peever—by refusing to play the game, they put themselves firmly in the out group, while the peeve-fest typically continues unabated. But as Arnold Zwicky once noted, the “dead-serious advocacy of what [they take] to be the standard rules of English . . . makes the just-kidding defense of the enterprise ring hollow.” But I think it does more than just that: I think it undermines the credibility of prescriptivism in general. Joking or not, the rhetoric is polarizing and admits of no criticism. It reinforces the notion that “Discussion is not part of the agenda of the prescriptive grammarian.”[1] It makes me dislike prescriptivism in general, even though I actually agree with several of Hart’s points of usage.

As I said above, the point of this post was not to reignite a dying debate between Hart and his critics, but to draw attention to what I think is a serious problem surrounding the whole issue. In other words, I may not be worried about the state of the language, but I certainly am worried about the state of the language debate.

[1] James Milroy, “The Consequences of Standardisation in Descriptive Linguistics,” in Standard English: The Widening Debate, ed. Tony Bex and Richard J. Watts (New York: Routledge, 1999), 21.


Who, That, and the Nature of Bad Rules

A couple of weeks ago the venerable John E. McIntyre blogged about a familiar prescriptive bugbear, the question of that versus who(m). It all started on the blog of the Society for the Promotion of Good Grammar, where a Professor Jacoby, a college English professor, wrote in to share his justification for the rule, which is that you should avoid using that with human referents because it depersonalizes them. He calls this justification “quite profound,” which is probably a good sign that it’s not. Mr. McIntyre, ever the reasonable fellow, tried to inject some facts into the conversation, but apparently to no avail.

What I find most interesting about the whole discussion, however, is not the argument over whether that can be used with human referents, but what the whole argument says about prescriptivism and the way we talk about language and rules. (Indeed, the subject has already been covered very well by Gabe Doyle at Motivated Grammar, who made some interesting discoveries about relative pronoun usage that may indicate some cognitive motivation.) Typically, the person putting forth the rule assumes a priori that the rule is valid, and thereafter it seems that no amount of evidence or argument can change their mind. The entire discussion at the SPOGG blog proceeds without any real attempts to address Mr. McIntyre’s points, and it ends with the SPOGG correspondent who originally kicked off the discussion sullenly taking his football and going home.

James Milroy, an emeritus professor of sociolinguistics at the University of Michigan, once wrote that all rationalizations for prescriptions are post hoc; that is, the rules are taken to be true, and the justifications come afterward and really only serve to give the rule the illusion of validity:

Indeed all prescriptive arguments about correctness that depend on intra-linguistic factors are post-hoc rationalizations. . . . But an intra-linguistic rationalization is not the reason why some usages are believed to be wrong. The reason is that it is simply common sense: everybody knows it, it is part of the culture to know it, and you are an outsider if you think otherwise: you are not a participant in the common culture, and so your views can be dismissed. To this extent, linguists who state that I seen it is not ungrammatical are placing themselves outside the common culture.[1]

This may sound like a rather harsh description of prescriptivism, but I think there’s a lot of truth to it—especially the part about linguists unwittingly setting themselves outside of the culture. Linguists try to play the part of the boy who pointed out that the emperor has no clothes, but instead of breaking the illusion they are at best treated as suspect for not playing along. But the point linguists are trying to make isn’t that there’s no such thing as right or wrong in language (though there are some on the fringe who would make such claims)—they’re simply trying to point out that, quite frequently, the justifications are phony and attention to facts and evidence is mostly nonexistent. There are no real axioms or first principles from which prescriptive rules follow—at least, there don’t seem to be any that are consistently applied and followed to their logical conclusions. Instead the canon of prescriptions is a hodgepodge of style and usage opinions that have been passed down and are generally assumed to have the force of law. There are all kinds of unexamined assumptions packaged into prescriptions and their justifications, such as the following from Professor Jacoby:

  • Our society has a tendency to depersonalize people.
  • Depersonalizing people is bad.
  • Using that as a relative pronoun with human referents depersonalizes them.

There are probably more, but that covers the bases. Note that even if we agree that our society depersonalizes people and that this is a bad thing, it’s still quite a leap from this to the claim that that depersonalizes people. But, as Milroy argued, it’s not really about the justification. It’s about having a justification. You can go on until you’re blue in the face about the history of English relative pronoun usage (for instance, that demonstrative pronouns like that were the only option in Old English, and that this has changed several times over the last millennium and a half, and that it’s only recently that people have begun to claim that that with people is wrong) or about usage in other, related languages (such as German, which uses demonstrative pronouns as relative pronouns), but it won’t make any difference; at best, the person arguing for the rule will superficially soften their stance and make some bad analogies to fashion or ethics, saying that while it might not be a rule, it’s still a good guideline, especially for novices. After all, novices need rules that are more black and white—they need to use training wheels for a while before they can ride unaided. Too bad we also never stop to ask whether we’re actually providing novices with training wheels or just putting sticks in their spokes.

Meanwhile, prescriptivists frequently dismiss all evidence for one reason or another: It’s well established in the history of usage? Well, that just shows that people have always made mistakes. It’s even used by greats like Chaucer, Shakespeare, and other literary giants? Hey, even the greats make mistakes. Either that or they mastered the rules and thus know when it’s okay to break them. People today overwhelmingly break the rule? Well, that just shows how dire the situation is. You literally can’t win, because, as Geoffrey Pullum puts it, “nothing is relevant.”

So if most prescriptions are based on unexamined assumptions and post hoc rationalizations, where does that leave things? Do we throw it all out because it’s a charade? That seems rather extreme. There will always be rules, because that’s simply the nature of people. The question is, how do we establish which rules are valid, and how do we teach this to students and practice it as writers and editors? Honestly, I don’t know, but I know that it involves real research and a willingness to critically evaluate not only the rules but also the assumptions that underlie them. We have to stop having a knee-jerk reaction against linguistic methods and allow them to inform our understanding. And linguists need to learn that rules are not inherently bad. Indeed, as John Algeo put it, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense.”[2]

[1] James Milroy, “Language Ideologies and the Consequences of Standardization,” Journal of Sociolinguistics 5, no. 4 (November 2001): 536.
[2] John Algeo, “Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 (December 1969): 276.


Temblor Trouble

Last week’s earthquake in northern Japan reminded me of an interesting pet peeve of a friend of mine: she hates the word temblor. Before she brought it to my attention, it had never really occurred to me to be bothered by it, but now I can’t help but notice it and be annoyed anytime there’s a news story about an earthquake. Her complaint is that it’s basically a made-up word that only journalists use, and it seems she’s essentially right.

A quick search on Mark Davies’ Corpus of Contemporary American English shows that temblor occurs just over twice as often in newspaper writing as in magazine writing, and more than three times as frequently in newspaper writing as in fiction. It’s effectively nonexistent in academic writing—the only two hits in COCA are actually in Spanish contexts, as are three of the hits under fiction. It’s also worth noting that all of the spoken examples are from news programs. The following chart shows its frequency per million words.
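
For anyone unfamiliar with corpus-speak, “frequency per million words” is just a way of normalizing raw hit counts so that subcorpora of different sizes can be compared. Here’s a minimal sketch; the counts and corpus sizes are made-up placeholders, not the real COCA figures:

```python
# Normalize raw corpus hits to a rate per million words so that
# differently sized subcorpora can be compared directly.
def per_million(hits: int, corpus_words: int) -> float:
    return hits / corpus_words * 1_000_000

# Hypothetical example (not the real COCA counts): 400 hits in an
# 80-million-word subcorpus vs. 90 hits in a 60-million-word one.
print(per_million(400, 80_000_000))  # 5.0 per million words
print(per_million(90, 60_000_000))   # 1.5 per million words
```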

So what’s to explain the strange distribution of this word? I strongly suspect it’s the doing of what John E. McIntyre calls “the dear old, so frequently misguided, Associated Press Stylebook.” I only have a copy of the 2004 edition (which my wife picked up at a yard sale for 50 cents—don’t worry, I wouldn’t waste good money on it), but the entry for temblor (yes, there’s actually an entry for it) merely refers one to earthquakes. That entry goes on for a page and a half about earthquake magnitudes and notable earthquakes of the past before noting that “the word temblor (not tremblor) is a synonym for earthquake.”

I don’t understand why the AP Stylebook needs to point out spelling and synonymy—I thought those were the jobs of dictionaries and thesauruses, respectively—but I find it interesting that it doesn’t list any other synonyms. Thesaurus.com lists convulsion, fault, macroseism, microseism, movement, quake, quaker, seismicity, seism, seismism, shake, shock, slip, temblor, trembler, undulation, and upheaval, though obviously not all of these are equally acceptable synonyms.

So why does temblor get singled out? I honestly don’t know. I do know that journalists are fond of learning synonyms to avoid tiring out common words, and I know that at least some journalists take the practice to unreasonable levels, such as the teacher who made her students memorize 120 synonyms for said. Whatever the reason, journalists seem to have latched on to temblor, though few others outside the fields of newspaper and magazine writing have picked it up.


Gray, Grey, and Circular Prescriptions

A few days ago John McIntyre took a whack at the Associated Press Stylebook’s penchant for flat assertions, this time regarding the spelling of gray/grey. McIntyre noted that gray certainly is more common in American English but that grey is not a misspelling.

In the comments I mused that perhaps gray is only more common because of prescriptions like this one. John Cowan noted that gray is the main head word in Webster’s 1828 dictionary, with grey cross-referenced to it, saying, “So I think we can take it that ‘gray’ has been the standard AmE spelling long before the AP stylebook, or indeed the AP, were in existence.”

But I don’t think Webster’s dictionary really proves that at all. When confronted with multiple spellings of a word, lexicographers must choose which one to include as the main entry in the dictionary. Webster’s choice of gray over grey may have been entirely arbitrary. Furthermore, considering that he was a crusader for spelling reform, I don’t think we can necessarily take the spellings in his dictionary as evidence of what was more common or standard in American English.

So I headed over to Mark Davies’ Corpus of Historical American English to do a little research. I searched for both gray and grey as adjectives and came up with this. The grey line represents the total number of tokens per million words for both forms.

[Figure: gray and grey in tokens per million words]
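
Since the chart may not come through, here’s roughly how that combined line is derived; the decade figures below are placeholders for illustration, not actual COHA values:

```python
# The chart's combined line is just the sum of the two spellings'
# per-million rates in each decade; gray's share of the pair shows how
# the balance shifts over time. All numbers here are made-up placeholders.
gray = {"1810s": 22.0, "1840s": 30.0, "1900s": 55.0}  # tokens per million
grey = {"1810s": 20.0, "1840s": 24.0, "1900s": 8.0}

for decade in gray:
    total = gray[decade] + grey[decade]
    share = gray[decade] / total
    print(f"{decade}: combined {total:.0f} per million; gray = {share:.0%} of the pair")
```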

Up until about the 1840s, gray and grey were about neck and neck. After that, gray really takes off while grey languishes. Now, I realize that this is a rather cursory survey of their historical distribution, and the earliest data in this corpus predates Webster’s dictionary by only a couple of decades. I don’t know how to explain the growth of gray/grey in the 1800s. But in spite of these problems, it appears that there are some very clear-cut trend lines—gray became overwhelmingly more common, while grey diminished severely but never quite disappeared from American English.

This ties in nicely with a point I’ve made before: descriptivism and prescriptivism are not entirely separable, and there is considerable interplay between the two. It may be that Webster really was describing the linguistic scene as he saw it, choosing gray because he felt that it was more common, or it may be that his choice of gray was arbitrary or influenced by his personal preferences.

Either way, his decision to describe the word in a particular way apparently led to a prescriptive feedback loop: people chose to use the spelling gray because it was in the dictionary, reinforcing its position as the main entry in the dictionary and leading to its ascendancy over grey and eventually to the AP Stylebook‘s tweet about its preferred status. What may have started as a value-neutral decision by Webster about an utterly inconsequential issue of spelling variability has become an imperative to editors . . . about what is still an utterly inconsequential issue of spelling variability.

Personally, I’ve always had a soft spot for grey.


Scriptivists Revisited

Before I begin: I know—it’s been a terribly, horribly, unforgivably long time since my last post. Part of it is that I’m often busy with grad school and work and family, and part of it is that I’ve been thinking an awful lot lately about prescriptivism and descriptivism and linguists and editors and don’t really know where to begin.

I know that I’ve said some harsh things about prescriptivists before, but I don’t actually hate prescriptivism in general. As I’ve said before, prescriptivism and descriptivism are not really diametrically opposed, as some people believe they are. Stan Carey explores some of the common ground between the two in a recent post, and I think there’s a lot more to be said about the issue.

I think it’s possible to be a descriptivist and prescriptivist simultaneously. In fact, I think it’s difficult if not impossible to fully disentangle the two approaches. The fact is that many or most prescriptive rules are based on observed facts about the language, even though those facts may be incomplete or misunderstood in some way. Very seldom does anyone make up a rule out of whole cloth that bears no resemblance to reality. Rules often arise because someone has observed a change or variation in the language and is seeking to slow or reverse that change (as in insisting that “comprised of” is always an error) or to regularize the variation (as in insisting that “which” be used for nonrestrictive relative clauses and “that” for restrictive ones).

One of my favorite language blogs, Motivated Grammar, declares “Prescriptivism must die!” but to be honest, I’ve never quite been comfortable with that slogan. Now, I love a good debunking of language myths as much as the next guy—and Gabe Doyle does a commendable job of it—but not all prescriptivism is a bad thing. The impulse to identify and fix potential problems with the language is a natural one, and it can be used for both good and ill. Just take a look at the blogs of John E. McIntyre, Bill Walsh, and Jan Freeman for examples of well-informed, sensible language advice. Unfortunately, as linguists and many others know, senseless language advice is all too common.

Linguists often complain about and debunk such bad language advice—and rightly so, in my opinion—but I think in doing so they often make the mistake of dismissing prescriptivism altogether. Too often linguists view prescriptivism as an annoyance to be ignored or as a rival approach that must be quashed, but either way they miss the fact that prescriptivism is a metalinguistic phenomenon worth exploring and understanding. And why is it worth exploring? Because it’s an essential part of how ordinary speakers—and even linguists—use language in their daily lives, whether they realize it or not.

Contrary to what a lot of linguists say, language isn’t really a natural phenomenon—it’s a learned behavior. And as with any other human behavior, we generally strive to make our language match observed standards. Or as Emily Morgan so excellently says in a guest post on Motivated Grammar, “Language is something that we as a community of speakers collectively create and reinvent each time we speak.” She says that this means that language is “inextricably rooted in a descriptive generalization about what that community does,” but it also means that it is rooted in prescriptive notions of language. Because when speakers create and reinvent language, they do so by shaping their language to fit listeners’ expectations.

That is, for the most part, there’s no difference in speakers’ minds between what they should do with language and what they do do with language. They use language the way they do because they feel as though they should, and this in turn reinforces the model that influences everyone else’s behavior. I’ve often reflected on the fact that style guides like The Chicago Manual of Style will refer to dictionaries for spelling issues—thus prescribing how to spell—but these dictionaries simply describe the language found in edited writing. Description and prescription feed each other in an endless loop. This may not be mathematical logic, but it is a sort of logic nonetheless. Philosophers love to say that you can’t derive an ought from an is, and yet people do it all the time. If you want to fit in with a certain group, then you should behave in such a way as to be accepted by that group, and that group’s behavior is simply an aggregate of the behaviors of everyone else trying to fit in.

And at this point, linguists are probably thinking, “And people should be left alone to behave the way they wish to behave.” But leaving people alone means letting them decide which behaviors to favor and which to disfavor—that is, which rules to create and enforce. Linguists often criticize those who create and propagate rules, as if such rules are bad simply as a result of their artificiality, but, once again, the truth is that all language is artificial; it doesn’t exist until we make it exist. And if we create it, why should we always be coolly dispassionate about it? Objectivity might be great in the scientific study of language, but why should language users approach language the same way? Why should we favor “natural” or “spontaneous” changes and yet disfavor more conscious changes?

This is something that Deborah Cameron addresses in her book Verbal Hygiene (which I highly, highly recommend)—the notion that “spontaneous” or “natural” changes are okay, while deliberate ones are meddlesome and should be resisted. As Cameron counters, “If you are going to make value judgements at all, then surely there are more important values than spontaneity. How about truth, beauty, logic, utility?” (1995, 20). Of course, linguists generally argue that an awful lot of prescriptions do nothing to create more truth, beauty, logic, or utility, and this is indeed a problem, in my opinion.

But when linguists debunk such spurious prescriptions, they miss something important: people want language advice from experts, and they’re certainly not getting it from linguists. The industry of bad language advice exists partly because the people who arguably know the most about how language really works—the linguists—aren’t at all interested in giving advice on language. Often they take the hands-off attitude exemplified in Robert Hall’s book Leave Your Language Alone, crying, “Linguistics is descriptive, not prescriptive!” But in doing so, linguists are nonetheless injecting themselves into the debate rather than simply observing how people use language. If an objective, hands-off approach is so valuable, then why don’t linguists really take their hands off and leave prescriptivists alone?

I think the answer is that there’s a lot of social value in following language rules, whether or not they are actually sensible. And linguists, being the experts in the field, don’t like ceding any social or intellectual authority to a bunch of people that they view as crackpots and petty tyrants. They chafe at the idea that such ill-informed, superstitious advice—what Language Log calls “prescriptivist poppycock”—can or should have any value at all. It puts informed language users in the position of having to decide whether to follow a stupid rule so as to avoid drawing the ire of some people or to break the rule and thereby look stupid to those people. Arnold Zwicky explores this conundrum in a post titled “Crazies Win.”

Note something interesting at the end of that post: Zwicky concludes by giving his own advice—his own prescription—regarding the issue of split infinitives. Is this a bad thing? No, not at all, because prescriptivism is not the enemy. As John Algeo said in an article in College English, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense” (“Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 [December 1969]: 276). As I’ve said before, the nonsense is abundant. Just look at this awful Reader’s Digest column or this article on a Monster.com site for teachers for a couple recent examples.

Which brings me back to a point I’ve made before: linguists need to be more involved in not just educating the public about language, but in giving people the sensible advice they want. Trying to kill prescriptivism is not the answer to the language wars, and truly leaving language alone is probably a good way to end up with a dead language. Exploring it and trying to figure out how best to use it—this is what keeps language alive and thriving and interesting. And that’s good for prescriptivists and descriptivists alike.


Linguists and Straw Men

Sorry I haven’t posted in so long (I know I say that a lot)—I’ve been busy with school and things. Anyway, a couple months back I got a comment on an old post of mine, and I wanted to address it. I know it’s a bit lame to respond to two-month-old comments, but it was on a two-year-old post, so I figure it’s okay.

The comment is here, under a post of mine entitled “Scriptivists”. I believe the comment is supposed to be a rebuttal of that post, but I’m a little confused by the attempt. The commenter apparently accuses me of burning straw men, but ironically, he sets up a massive straw man of his own.

His first point seems to make fun of linguists for using technical terminology, but I’m not sure what that really proves. After all, technical terminology allows you to be very specific about abstract or complicated issues, so how is that really a criticism? I suppose it keeps a lot of laypeople from understanding what you’re saying, but if that’s the worst criticism you’ve got, then I guess I’ve got to shrug my shoulders and say, “Guilty as charged.”

The second point just makes me scratch my head. Using usage evidence from the greatest writers is a bad thing now? Honestly, how do you determine what usage features are good and worthy of emulation if not by looking to the most respected writers in the language?

The last point is just stupid. How often do you see Geoffrey Pullum or Languagehat or any of the other linguistics bloggers whipping out the fact that they have graduate degrees?

And I must disagree with Mr. Kevin S. that the “Mrs. Grundys” of the world don’t actually exist. I’ve heard too many stupid usage superstitions being perpetuated today and seen too much Strunk & White worship to believe that that sort of prescriptivist is extinct. Take, for example, Sonia Sotomayor, who says that split infinitives make her “blister”. Or take one of my sister-in-law’s professors, who insisted that her students could not use the following features in their writing:

  • The first person
  • The passive voice
  • Phrases like “this paper will show . . .” or “the data suggest . . .” because, according to her, papers are not capable of showing and data is not capable of suggesting.

How, exactly, are you supposed to write an academic paper without resorting to one of those devices—none of which, by the way, are actually wrong—at one time or another? These proscriptions were absolutely nonsensical, supported by neither logic nor usage nor common sense.

There’s still an awful lot of absolute bloody nonsense coming from the prescriptivists of the world. (Of course, this is not to say that all or even most prescriptivists are like this; take, for example, the inimitable John McIntyre, who is one of the most sensible and well-informed prescriptivists I’ve ever encountered.) And sorry to say, I don’t see the same sort of stubborn and ill-informed arguments coming from the descriptivists’ camp. And I’m pretty sure I’ve never seen a descriptivist who resembled the straw man that Kevin S. constructed.