Arrant Pedantry

Rules, Regularity, and Relative Pronouns

The other day I was thinking about relative pronouns and how they get so much attention from usage commentators, and I decided I should write a post about them. I was beaten to the punch by Stan Carey, but that’s okay, because I think I’m going to take it in a somewhat different direction. (And anyway, great minds think alike, right? But maybe you should read his post first, along with my previous post on who and that, if you haven’t already.)

I’m not just talking about that and which but also who, whom, and whose (the last of which is technically a relative possessive adjective). Judging by how often relative pronouns are talked about, you’d assume that most English speakers can’t get them right, even though they’re among the most common words in the language. In fact, in my own research for my thesis, I’ve found that they’re among the most frequent corrections made by copy editors.

So what gives? Why are they so hard for English speakers to get right? The distinctions are pretty clear-cut and can be found in a great many usage and writing handbooks. Some commentators even judgementally declare, “There’s a useful distinction here, and it’s lazy or perverse to pretend otherwise.” But is it really useful, and is it really lazy and perverse to disagree? Or is it perverse to try to inflict a bunch of arbitrary distinctions on speakers and writers?

And arbitrary they are. Many commentators act as if the proposed distinctions between all these words would make things tidier and more regular, but in fact they make the whole system much more complicated. On the one hand, we have the restrictive/nonrestrictive distinction between that and which. On the other hand, we have the animate/inanimate (or human/nonhuman, if you want to be really strict) distinction between who and that/which. And on the other other hand, there’s the subject/object distinction between who and whom. But there’s no subject/object distinction with that or which, except when it’s the object of a preposition—then you have to use which, unless the preposition is stranded, in which case you can use that. And on the final hand, some people have proscribed whose as an inanimate or nonhuman relative possessive adjective, recommending constructions with of which instead, though this rule isn’t as popular, or at least not as frequently talked about, as the others. (How many hands is that? I’ve lost count.)

Simple, right? To make it all a little clearer, I’ve even put it into a nice little table.

[Table: The proposed relative pronoun system]
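To make the complications concrete, here’s a little sketch of the proposed system as a decision procedure, written in Python. This is just my own toy illustration of the distinctions described above, not something any usage guide actually spells out, and it glosses over plenty of edge cases.

    # A toy model of the *proposed* system. The feature names are my own
    # labels for the distinctions described above; real usage guides don't
    # state the rules this way, and plenty of edge cases are ignored.
    def proposed_relative_pronoun(human, restrictive, role,
                                  after_preposition=False, stranded=False):
        """role is 'subject', 'object', or 'possessive'."""
        if role == "possessive":
            # Some proscribe inanimate "whose" and recommend "of which" instead.
            return "whose" if human else "of which"
        if human:
            # who/whom carries the subject/object distinction.
            return "who" if role == "subject" else "whom"
        if after_preposition and not stranded:
            return "which"            # "the house in which we live"
        return "that" if restrictive else "which"

    print(proposed_relative_pronoun(human=False, restrictive=True, role="subject"))   # that
    print(proposed_relative_pronoun(human=False, restrictive=False, role="object"))   # which
    print(proposed_relative_pronoun(human=True, restrictive=True, role="object"))     # whom

Note how many different features the function has to consult before it can pick a pronoun, and how unevenly those features matter from branch to branch.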

This is, in a nutshell, a very lopsided and unusual system. In a comment on my who/that post, Elaine Chaika says, “No natural grammar rule would work that way. Ever.” I’m not entirely convinced of that, because languages can be surprising in the unusual distinctions they make, but I agree that it is at the least typologically unusual.

“But we have to have rules!” you say. “If we don’t, we’ll have confusion!” But we do have rules—just not the ones that are proposed and promoted. The system we really have, in the absence of the prescriptions, is basically a distinction between animate who and inanimate which, with that overlaying the two. The pronoun which doesn’t make distinctions by case, but who(m) does, though this distinction is moribund and has probably only been kept alive by the efforts of schoolteachers and editors.

Whom is still pretty much required when it immediately follows a preposition, but not when the preposition is stranded. Since preposition stranding is extremely common in speech and increasingly common in writing, we’re seeing less and less of whom in this position. Whose is still a little iffy with inanimate referents, as in The house whose roof blew off, but many people say this is alright. Others prefer of which, though this can be awkward: The house the roof of which blew off.

That is either animate or inanimate—only who/which make that distinction—and can be either subject or object, but it cannot follow a preposition, function as a possessive adjective, or be used nonrestrictively. If the preposition is stranded, as in The man that I gave the apple to, then it’s still allowed. But there’s no possessive form of that, so you have to use whose or of which. Again, it’s clearer in table form:

[Table: The natural system of relative pronouns]
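For comparison, here’s the same kind of toy sketch for the natural system, again with my own hypothetical labels. This time the function returns the set of forms the system allows, since that overlaps with both who and which.

    # A toy model of the natural system: animate who(m), inanimate which,
    # with that overlaying both. Case matters only for who(m), and whose
    # covers possession for both kinds of referents (if a little iffily
    # for inanimate ones).
    def natural_relative_pronoun(animate, role,
                                 after_preposition=False, stranded=False):
        """role is 'subject', 'object', or 'possessive'. Returns allowed forms."""
        if role == "possessive":
            return {"whose"}
        if after_preposition and not stranded:
            return {"whom"} if animate else {"which"}
        if animate:
            return {"who", "that"} if role == "subject" else {"who", "whom", "that"}
        return {"which", "that"}

    print(natural_relative_pronoun(animate=True, role="subject"))    # {'who', 'that'} (order may vary)
    print(natural_relative_pronoun(animate=False, role="object"))    # {'which', 'that'}

The near-symmetry, along with the couple of remaining wrinkles like inanimate whose and the moribund whom, is much easier to see here than in the proposed system above.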

The linguist Jonathan Hope wrote that several distinguishing features of Standard English give it “a typologically unusual structure, while non-standard English dialects follow the path of linguistic naturalness.” He then muses on the reason for this:

One explanation for this might be that as speakers make the choices that will result in standardisation, they unconsciously tend towards more complex structures, because of their sense of the prestige and difference of formal written language. Standard English would then become a ‘deliberately’ difficult language, constructed, albeit unconsciously, from elements that go against linguistic naturalness, and which would not survive in a ‘natural’ linguistic environment.¹

It’s always tricky territory when you speculate on people’s unconscious motivations, but I think he’s on to something. Note that while the prescriptions make for a very asymmetrical system, the system that people naturally use is moving towards a very tidy and symmetrical distribution, though there are still a couple of wrinkles that are being worked out.

But the important point is that people already follow rules—just not the ones that some prescriptivists think they should.

Notes

1. Jonathan Hope, “Rats, Bats, Sparrows and Dogs: Biology, Linguistics and the Nature of Standard English,” in The Development of Standard English, 1300–1800, ed. Laura Wright (Cambridge: Cambridge University Press, 2000), 53.

Continua, Planes, and False Dichotomies

On Twitter, Erin Brenner asked, “How about a post on prescriptivism/descriptivism as a continuum rather than two sides? Why does it have to be either/or?” It’s a great question, and I firmly believe that it’s not an either-or choice. However, I don’t actually agree that prescriptivism and descriptivism occupy different points on a continuum, so I hope Erin doesn’t mind if I take this in a somewhat different direction from what she probably expected.

The problem with calling the two part of a continuum is that I don’t believe they’re on the same line. Putting them on a continuum, in my mind, implies that they share a common trait that is expressed to greater or lesser degrees, but the only real trait they share is that they are both approaches to language. But even this is a little deceptive, because one is an approach to studying language, while the other is an approach to using it.

I think the reason why we so often treat it as a continuum is that the more moderate prescriptivists tend to rely more on evidence and less on flat assertions. This makes us think of prescriptivists who rely less on facts and evidence as occupying a point farther along the spectrum. But I think this point of view does a disservice to prescriptivism by treating it as the opposite of fact-based descriptivism. This leads us to think that at one end, we have the unbiased facts of the language, and somewhere in the middle we have opinions based on facts, and at the other end, where undiluted prescriptivism lies, we have opinions that contradict facts. I don’t think this model makes sense or is really an accurate representation of prescriptivism, but unfortunately it’s fairly pervasive.

In its most extreme form, we find quotes like this one from Robert Hall, who, in defending the controversial and mostly prescription-free Webster’s Third, wrote: “The functions of grammars and dictionaries is to tell the truth about language. Not what somebody thinks ought to be the truth, nor what somebody wants to ram down somebody else’s throat, not what somebody wants to sell somebody else as being the ‘best’ language, but what people actually do when they talk and write. Anything else is not the truth, but an untruth.”¹

But I think this is a duplicitous argument, especially for a linguist. If prescriptivism is “what somebody thinks ought to be the truth”, then it doesn’t have a truth value, because it doesn’t express a proposition. And although what is is truth, what somebody thinks should be is not its opposite, untruth.

So if descriptivism and prescriptivism aren’t at different points on a continuum, where are they in relation to each other? Well, first of all, I don’t think pure prescriptivism should be identified with evidence-free assertionism, as Eugene Volokh calls it. Obviously there’s a continuum of practice within prescriptivism, which means it must exist on a separate continuum or axis from descriptivism.

I envision the two occupying a space something like this:

[Figure: graph of descriptivism and prescriptivism]

Descriptivism is concerned with discovering what language is without assigning value judgements. Linguists feel that whether it’s standard or nonstandard, correct or incorrect by traditional standards, language is interesting and should be studied. That is, they try to stay on the right side of the graph, mapping out human language in all its complexity. Some linguists like Hall get caught up in trying to tear down prescriptivism, viewing it as a rival camp that must be destroyed. I think this is unfortunate, because like it or not, prescriptivism is a metalinguistic phenomenon that at the very least is worthy of more serious study.

Prescriptivism, on the other hand, is concerned with good, effective, or proper language. Prescriptivists try to judge what best practice is and formulate rules to map out what’s good or acceptable. In the chapter “Grammar and Usage” in The Chicago Manual of Style, Bryan Garner says his aim is to guide “writers and editors toward the unimpeachable uses of language” (16th ed., 5.219; 15th ed., 5.201).

Reasonable or moderate prescriptivists try to incorporate facts and evidence from actual usage in their prescriptions, meaning that they try to stay in the upper right of the graph. Some prescriptivists stray into untruth territory on the left and become unreasonable prescriptivists, or assertionists. No amount of evidence will sway them; in their minds, certain usages are just wrong. They make arguments from etymology or from overly literal or logical interpretations of meaning. And quite often, they say something’s wrong just because it’s a rule.

So it’s clearly not an either-or choice between descriptivism and prescriptivism. The only thing that’s not really clear, in my mind, is how much of prescriptivism is reliable. That is, do the prescriptions actually map out something we could call “good English”? Quite a lot of the rules serve little purpose beyond acting “as a sign that the writer is unaware of the canons of usage”, to quote the usage entry on hopefully in the American Heritage Dictionary (5th ed.). Linguists have been so preoccupied with trying to debunk or discredit prescriptivism that they’ve never really stopped to investigate whether there’s any value to prescriptivists’ claims. True, there have been a few studies along those lines, but I think they’re just scratching the surface of what could be an interesting avenue of study. But that’s a topic for another time.

Notes

1. In Harold B. Allen et al., “Webster’s Third New International Dictionary: A Symposium,” Quarterly Journal of Speech 48 (December 1962): 434.

It’s Not Wrong, but You Still Shouldn’t Do It

A couple of weeks ago, in my post “The Value of Prescriptivism,” I mentioned some strange reasoning that I wanted to talk about later—the idea that there are many usages that are not technically wrong, but you should still avoid them because other people think they’re wrong. I used the example of a Grammar Girl post on hopefully wherein she lays out the arguments in favor of disjunct hopefully and debunks some of the arguments against it—and then advises, “I still have to say, don’t do it.” She then adds, however, “I am hopeful that starting a sentence with hopefully will become more acceptable in the future.”

On the face of it, this seems like a pretty reasonable approach. Sometimes the considerations of the reader have to take precedence over the facts of usage. If the majority of your readers will object to your word choice, then it may be wise to pick a different word. But there’s a different way to look at this, which is that the misinformed opinions of a very small but very vocal subset of readers take precedence over the facts and the opinions of others. Arnold Zwicky wrote about this phenomenon a few years ago in a Language Log post titled “Crazies win”.

Addressing split infinitives and the equivocal advice to avoid them unless it’s better not to, Zwicky says that “in practice, [split infinitive as last resort] is scarcely an improvement over [no split infinitives] and in fact works to preserve the belief that split infinitives are tainted in some way.” He then adds that the “only intellectually justifiable advice” is to “say flatly that there’s nothing wrong with split infinitives and you should use them whenever they suit you”. I agree wholeheartedly, and I’ll explain why.

The problem with the it’s-not-wrong-but-don’t-do-it philosophy is that, while it feels like a moderate, open-minded, and more descriptivist approach in theory, it is virtually indistinguishable from the it’s-wrong-so-don’t-do-it philosophy in practice. You can cite all the linguistic evidence you want, but it’s still trumped by the fact that you’d rather avoid annoying that small subset of readers. It pays lip service to the idea of descriptivism informing your prescriptions, but the prescription is effectively the same. All you’ve changed is the justification for avoiding the usage.

Even a more neutral and descriptive piece like this New York Times “On Language” article on singular they ends with a wistful, “It’s a shame that grammarians ever took umbrage at the singular they,” adding, “Like it or not, the universal they isn’t universally accepted — yet. Its fate is now in the hands of the jury, the people who speak the language.” Even though the authors seem to be avoiding giving out advice, it’s still implicit in the conclusion. It’s great to inform readers about the history of usage debates, but what readers will most likely come away with is the conclusion that it’s wrong—or at least tainted—so they shouldn’t use it.

The worst thing about this waffly kind of advice, I think, is that it lets usage commentators duck responsibility for influencing usage. They tell you all the reasons why it should be alright to use hopefully or split infinitives or singular they, but then they sigh and put them away in the linguistic hope chest, telling you that you can’t use them yet, but maybe someday. Well, when? If all the usage commentators are saying, “It’s not acceptable yet,” at what point are they going to decide that it suddenly is acceptable? If you always defer to the peevers and crazies, it will never be acceptable (unless they all happen to die off without transmitting their ideas to the next generation).

And furthermore, I’m not sure it’s a worthwhile endeavor to try to avoid offending or annoying anyone in your writing. It reminds me of Aesop’s fable of the man, the boy, and the donkey: people will always find something to criticize, so it’s impossible to behave (or write) in such a way as to always avoid criticism. As the old man at the end says, “Please all, and you will please none.” You can’t please everyone, so you have to make a choice: will you please the small but vocal peevers, or the more numerous reasonable people? If you believe there’s nothing technically wrong with hopefully or singular they, maybe you should stand by those beliefs instead of caving to the critics. And perhaps through your reasonable but firm advice and your own exemplary writing, you’ll help a few of those crazies come around.

Does Prescriptivism Have Moral Worth?

I probably shouldn’t be getting into this again, but I think David Bentley Hart’s latest post on language (a follow-up to the one I last wrote about) deserves a response. You see, even though he’s no longer cloaking his peeving with the it’s-just-a-joke-but-no-seriously defense, I think he’s still cloaking his arguments in something else: spurious claims about the nature of descriptivism and the rational and moral superiority of prescriptivism. John McIntyre has already taken a crack at these claims, and I think he’s right on: Hart’s description of descriptivists doesn’t match any descriptivists I know, and his claims about prescriptivism’s rational and moral worth are highly suspect.

Hart gets off to a bad start when he says that “most of [his convictions] require no defense” and then says that “if you can find a dictionary that, say, allows ‘reluctant’ as a definition of ‘reticent,’ you will also find it was printed in Singapore under the auspices of ‘The Happy Luck Goodly Englishing Council.’” Even when he provides a defense, he’s wrong: the Oxford English Dictionary contains precisely that definition, sense 2: “Reluctant to perform a particular action; hesitant, disinclined. Chiefly with about, or to do something.” The first illustrative quotation is from 1875, only 50 years after the first quote for the traditionally correct definition: “The State registrar was just as reticent to give us information.” So much for the Happy Luck Goodly Englishing Council. (Oh, wait, let me guess—this is just another self-undermining flippancy.)

I’m glad that Hart avoids artificial rules such as the proscription against restrictive which and recognizes that “everyone who cares about such matters engages in both prescription and description, often confusing the two”—a point which many on both sides fail to grasp. But I’m disappointed when he says, “The real question, at the end of the day, is whether any distinction can be recognized, or should be maintained, between creative and destructive mutations,” and then utterly fails to address the question. Instead he merely defends his peeves and denigrates as hypocrites those who argue against his peeves without embracing the disputed senses themselves. But I don’t want to get embroiled in discussions about whether reticent to mean “reluctant” is right or wrong or has a long, noble heritage or is an ignorant vulgarism—that’s all beside the point and doesn’t get to the claims Hart employs to justify his peeves.

But near the end, he does say that his “aesthetic prejudice” is also a “coherent principle” because “persons can mean only what they have the words to say, and so the finer our distinctions and more precise our definitions, the more we are able to mean.” On the surface this may seem like a nice sentiment, but I don’t think it’s nearly as coherent as Hart would like to think. First of all, it smacks of the Whorfian hypothesis, the idea that words give you the power to mean things that you couldn’t otherwise mean. I’m fairly confident I could mean “disinclined to speak” even if the word reticent were nonexistent. (Note that even if the “reluctant” meaning completely overtakes the traditional one, we’ll still have words like reserved and taciturn.) Furthermore, it’s possible that certain words lose their original meanings because they weren’t very useful meanings to begin with. Talking about the word decimate, for example, Jan Freeman says, “We don’t especially need a term that means ‘kill one in 10.’” So even if we accept the idea that preserving distinctions is a good thing, we need to ask whether this distinction is a boon to the language and its speakers.

And if defending fine distinctions and precise definitions is such a noble cause, why don’t prescriptivists scour the lexicon for distinctions that can be made finer and definitions that can be made more precise? Why don’t we busy ourselves with coining new words to convey new meanings that would be useful to English speakers? Hart asks whether there can be creative mutations, but he never gives an example of one or even speculates on what one might look like. Perhaps to him all mutations are destructive. Or perhaps there’s some unexplained reason why defending existing meanings is noble but creating new ones is not. Hart never says.

At the end of the day, my question is whether there really is any worth to prescriptivism. Have the activities of prescriptivists actually improved our language—or at least kept it from degenerating—or is it just an excuse to rail against people for their lexical ignorance? Sometimes, when I read articles like Hart’s, I’m inclined to think it’s the latter. I don’t see how his litany of peeves contributes much to the “clarity, precision, subtlety, nuance, and poetic richness” of language, and I think his warning against the “leveling drabness of mass culture” reveals his true intent—he wants to maintain an aristocratic language for himself and other like-minded individuals.

But I don’t think this is what prescriptivism really is, or at least not what it should be. So does prescriptivism have value? I think so, but I’m not entirely sure what it is. To be honest, I’m still sorting out my feelings about prescriptivism. I know I frequently rail against bad prescriptivism, but I certainly don’t think all prescriptivism is bad. I get paid to be a prescriber at work, where it’s my job to clean up others’ prose, but I try not to let my own pet peeves determine my approach to language. I know this looks like I’m doing exactly what I criticized Hart for doing—raising a question and then dodging it—but I’m still trying to find the answer myself. Perhaps I’ll get some good, thoughtful comments on the issue. Perhaps I just need more time to mull it over and sort out my feelings. At any rate, this post is already too long, so I’ll have to leave it for another time.

Gray, Grey, and Circular Prescriptions

A few days ago John McIntyre took a whack at the Associated Press Stylebook’s penchant for flat assertions, this time regarding the spelling of gray/grey. McIntyre noted that gray certainly is more common in American English but that grey is not a misspelling.

In the comments I mused that perhaps gray is only more common because of prescriptions like this one. John Cowan noted that gray is the main head word in Webster’s 1828 dictionary, with grey cross-referenced to it, saying, “So I think we can take it that ‘gray’ has been the standard AmE spelling long before the AP stylebook, or indeed the AP, were in existence.”

But I don’t think Webster’s dictionary really proves that at all. When confronted with multiple spellings of a word, lexicographers must choose which one to include as the main entry in the dictionary. Webster’s choice of gray over grey may have been entirely arbitrary. Furthermore, considering that he was a crusader for spelling reform, I don’t think we can necessarily take the spellings in his dictionary as evidence of what was more common or standard in American English.

So I headed over to Mark Davies’ Corpus of Historical American English to do a little research. I searched for both gray and grey as adjectives and came up with this. The grey line represents the total number of tokens per million words for both forms.

[Figure: gray and grey in tokens per million words]

Up until about the 1840s, gray and grey were about neck and neck. After that, gray really takes off while grey languishes. Now, I realize that this is a rather cursory survey of their historical distribution, and the earliest data in this corpus predates Webster’s dictionary by only a couple of decades. I also don’t know how to explain the overall growth of gray/grey in the 1800s. But in spite of these problems, it appears that there are some very clear-cut trend lines—gray became overwhelmingly more common, while grey severely diminished but never quite disappeared from American English.
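For anyone curious about the normalization behind that chart, here’s a minimal sketch of how tokens per million words are calculated. The decade counts below are made-up placeholders for illustration, not the actual COHA figures.

    # Hypothetical raw counts per decade (NOT real COHA data), used only to
    # show how frequencies are normalized to tokens per million words.
    decades = {
        "1830s": {"gray": 210, "grey": 200, "words_in_corpus": 13_000_000},
        "1880s": {"gray": 1500, "grey": 420, "words_in_corpus": 20_000_000},
    }

    for decade, counts in decades.items():
        for form in ("gray", "grey"):
            per_million = counts[form] / counts["words_in_corpus"] * 1_000_000
            print(f"{decade}: {form} = {per_million:.1f} per million words")

Normalizing this way is what makes decades with very different corpus sizes comparable on the same chart.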

This ties in nicely with a point I’ve made before: descriptivism and prescriptivism are not entirely separable, and there is considerable interplay between the two. It may be that Webster really was describing the linguistic scene as he saw it, choosing gray because he felt that it was more common, or it may be that his choice of gray was arbitrary or influenced by his personal preferences.

Either way, his decision to describe the word in a particular way apparently led to a prescriptive feedback loop: people chose to use the spelling gray because it was in the dictionary, reinforcing its position as the main entry in the dictionary and leading to its ascendancy over grey and eventually to the AP Stylebook‘s tweet about its preferred status. What may have started as a value-neutral decision by Webster about an utterly inconsequential issue of spelling variability has become an imperative to editors . . . about what is still an utterly inconsequential issue of spelling variability.

Personally, I’ve always had a soft spot for grey.

Scriptivists Revisited

Before I begin: I know—it’s been a terribly, horribly, unforgivably long time since my last post. Part of it is that I’m often busy with grad school and work and family, and part of it is that I’ve been thinking an awful lot lately about prescriptivism and descriptivism and linguists and editors and don’t really know where to begin.

I know that I’ve said some harsh things about prescriptivists before, but I don’t actually hate prescriptivism in general. As I’ve said before, prescriptivism and descriptivism are not really diametrically opposed, as some people believe they are. Stan Carey explores some of the common ground between the two in a recent post, and I think there’s a lot more to be said about the issue.

I think it’s possible to be a descriptivist and prescriptivist simultaneously. In fact, I think it’s difficult if not impossible to fully disentangle the two approaches. The fact is that many or most prescriptive rules are based on observed facts about the language, even though those facts may be incomplete or misunderstood in some way. Very seldom does anyone make up a rule out of whole cloth that bears no resemblance to reality. Rules often arise because someone has observed a change or variation in the language and is seeking to slow or reverse that change (as in insisting that “comprised of” is always an error) or to regularize the variation (as in insisting that “which” be used for nonrestrictive relative clauses and “that” for restrictive ones).

One of my favorite language blogs, Motivated Grammar, declares “Prescriptivism must die!” but to be honest, I’ve never quite been comfortable with that slogan. Now, I love a good debunking of language myths as much as the next guy—and Gabe Doyle does a commendable job of it—but not all prescriptivism is a bad thing. The impulse to identify and fix potential problems with the language is a natural one, and it can be used for both good and ill. Just take a look at the blogs of John E. McIntyre, Bill Walsh, and Jan Freeman for examples of well-informed, sensible language advice. Unfortunately, as linguists and many others know, senseless language advice is all too common.

Linguists often complain about and debunk such bad language advice—and rightly so, in my opinion—but I think in doing so they often make the mistake of dismissing prescriptivism altogether. Too often linguists view prescriptivism as an annoyance to be ignored or as a rival approach that must be quashed, but either way they miss the fact that prescriptivism is a metalinguistic phenomenon worth exploring and understanding. And why is it worth exploring? Because it’s an essential part of how ordinary speakers—and even linguists—use language in their daily lives, whether they realize it or not.

Contrary to what a lot of linguists say, language isn’t really a natural phenomenon—it’s a learned behavior. And as with any other human behavior, we generally strive to make our language match observed standards. Or as Emily Morgan so excellently says in a guest post on Motivated Grammar, “Language is something that we as a community of speakers collectively create and reinvent each time we speak.” She says that this means that language is “inextricably rooted in a descriptive generalization about what that community does,” but it also means that it is rooted in prescriptive notions of language. Because when speakers create and reinvent language, they do so by shaping their language to fit listeners’ expectations.

That is, for the most part, there’s no difference in speakers’ minds between what they should do with language and what they do do with language. They use language the way they do because they feel as though they should, and this in turn reinforces the model that influences everyone else’s behavior. I’ve often reflected on the fact that style guides like The Chicago Manual of Style will refer to dictionaries for spelling issues—thus prescribing how to spell—but these dictionaries simply describe the language found in edited writing. Description and prescription feed each other in an endless loop. This may not be mathematical logic, but it is a sort of logic nonetheless. Philosophers love to say that you can’t derive an ought from an is, and yet people do it all the time. If you want to fit in with a certain group, then you should behave in such a way as to be accepted by that group, and that group’s behavior is simply an aggregate of the behaviors of everyone else trying to fit in.

And at this point, linguists are probably thinking, “And people should be left alone to behave the way they wish to behave.” But leaving people alone means letting them decide which behaviors to favor and which to disfavor—that is, which rules to create and enforce. Linguists often criticize those who create and propagate rules, as if such rules are bad simply as a result of their artificiality, but, once again, the truth is that all language is artificial; it doesn’t exist until we make it exist. And if we create it, why should we always be coolly dispassionate about it? Objectivity might be great in the scientific study of language, but why should language users approach language the same way? Why should we favor “natural” or “spontaneous” changes and yet disfavor more conscious changes?

This is something that Deborah Cameron addresses in her book Verbal Hygiene (which I highly, highly recommend)—the notion that “spontaneous” or “natural” changes are okay, while deliberate ones are meddlesome and should be resisted. As Cameron counters, “If you are going to make value judgements at all, then surely there are more important values than spontaneity. How about truth, beauty, logic, utility?” (1995, 20). Of course, linguists generally argue that an awful lot of prescriptions do nothing to create more truth, beauty, logic, or utility, and this is indeed a problem, in my opinion.

But when linguists debunk such spurious prescriptions, they miss something important: people want language advice from experts, and they’re certainly not getting it from linguists. The industry of bad language advice exists partly because the people who arguably know the most about how language really works—the linguists—aren’t at all interested in giving advice on language. Often they take the hands-off attitude exemplified in Robert Hall’s book Leave Your Language Alone, crying, “Linguistics is descriptive, not prescriptive!” But in doing so, linguists are nonetheless injecting themselves into the debate rather than simply observing how people use language. If an objective, hands-off approach is so valuable, then why don’t linguists really take their hands off and leave prescriptivists alone?

I think the answer is that there’s a lot of social value in following language rules, whether or not they are actually sensible. And linguists, being the experts in the field, don’t like ceding any social or intellectual authority to a bunch of people that they view as crackpots and petty tyrants. They chafe at the idea that such ill-informed, superstitious advice—what Language Log calls “prescriptivist poppycock”—can or should have any value at all. It puts informed language users in the position of having to decide whether to follow a stupid rule so as to avoid drawing the ire of some people or to break the rule and thereby look stupid to those people. Arnold Zwicky explores this conundrum in a post titled “Crazies Win.”

Note something interesting at the end of that post: Zwicky concludes by giving his own advice—his own prescription—regarding the issue of split infinitives. Is this a bad thing? No, not at all, because prescriptivism is not the enemy. As John Algeo said in an article in College English, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense” (“Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 [December 1969]: 276). As I’ve said before, the nonsense is abundant. Just look at this awful Reader’s Digest column or this article on a Monster.com site for teachers for a couple recent examples.

Which brings me back to a point I’ve made before: linguists need to be more involved in not just educating the public about language, but in giving people the sensible advice they want. Trying to kill prescriptivism is not the answer to the language wars, and truly leaving language alone is probably a good way to end up with a dead language. Exploring it and trying to figure out how best to use it—this is what keeps language alive and thriving and interesting. And that’s good for prescriptivists and descriptivists alike.

Linguists and Straw Men

Sorry I haven’t posted in so long (I know I say that a lot)—I’ve been busy with school and things. Anyway, a couple months back I got a comment on an old post of mine, and I wanted to address it. I know it’s a bit lame to respond to two-month-old comments, but it was on a two-year-old post, so I figure it’s okay.

The comment is here, under a post of mine entitled “Scriptivists”. I believe the comment is supposed to be a rebuttal of that post, but I’m a little confused by the attempt. The commenter apparently accuses me of burning straw men, but ironically, he sets up a massive straw man of his own.

His first point seems to make fun of linguists for using technical terminology, but I’m not sure what that really proves. After all, technical terminology allows you to be very specific about abstract or complicated issues, so how is that really a criticism? I suppose it keeps a lot of laypeople from understanding what you’re saying, but if that’s the worst criticism you’ve got, then I guess I’ve got to shrug my shoulders and say, “Guilty as charged.”

The second point just makes me scratch my head. Using usage evidence from the greatest writers is a bad thing now? Honestly, how do you determine what usage features are good and worthy of emulation if not by looking to the most respected writers in the language?

The last point is just stupid. How often do you see Geoffrey Pullum or Languagehat or any of the other linguistics bloggers whipping out the fact that they have graduate degrees?

And I must disagree with Mr. Kevin S. that the “Mrs. Grundys” of the world don’t actually exist. I’ve heard too many stupid usage superstitions being perpetuated today and seen too much Strunk & White worship to believe that that sort of prescriptivist is extinct. Take, for example, Sonia Sotomayor, who says that split infinitives make her “blister”. Or take one of my sister-in-law’s professors, who insisted that her students could not use the following features in their writing:

  • The first person
  • The passive voice
  • Phrases like “this paper will show . . .” or “the data suggest . . .” because, according to her, papers are not capable of showing and data is not capable of suggesting.

How, exactly, are you supposed to write an academic paper without resorting to one of those devices—none of which, by the way, are actually wrong—at one time or another? These proscriptions were absolutely nonsensical, supported by neither logic nor usage nor common sense.

There’s still an awful lot of absolute bloody nonsense coming from the prescriptivists of the world. (Of course, this is not to say that all or even most prescriptivists are like this; take, for example, the inimitable John McIntyre, who is one of the most sensible and well-informed prescriptivists I’ve ever encountered.) And sorry to say, I don’t see the same sort of stubborn and ill-informed arguments coming from the descriptivists’ camp. And I’m pretty sure I’ve never seen a descriptivist who resembled the straw man that Kevin S. constructed.

Rules Are Rules

Recently I was involved in an online discussion about the pronunciation of the word the before vowels. Someone wanted to know if it was pronounced /ði/ (“thee”) before vowels only in singing, or if it was a general rule of speech as well. His dad had said it was a rule, but he had never heard it before and wondered if maybe it was more of a convention than a rule. Throughout the conversation, several more people expressed similar opinions—they’d never heard this rule before and they doubted whether it was really a rule at all.

There are a few problems here. First of all, not everybody means exactly the same thing when they talk about rules. It’s like when laymen dismiss evolution because it’s “just a theory.” They forget that gravity is also just a theory. And when laymen talk about linguistic rules, they usually mean prescriptive rules. Prescriptive rules usually state that a particular thing should be done, which typically implies that it often isn’t done.

But when linguists talk about rules, they mean descriptive ones. Think of it this way: if you were going to teach a computer how to speak English fluently, what would it need to know? Well, one tiny little detail that it would need to know is that the word the is pronounced with a schwa (/ðə/) except when it is stressed or followed by a vowel. Nobody needs to be taught this rule, except for non-native speakers, because we all learn it by hearing it when we’re children. And because it doesn’t need to be taught, it’s never taught in English class, so it throws some people for a bit of a loop when they hear it called a rule.
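If you really were going to teach that detail to a computer, the core of the rule is small enough to write down in a few lines. The sketch below is a deliberately crude approximation: it checks the next word’s first letter rather than its first sound, so words like hour or union would be misclassified.

    # A crude sketch of the descriptive rule: "the" is /ðə/ (schwa) unless it
    # is stressed or the following word begins with a vowel sound. Checking
    # the first letter is only an approximation of checking the first sound.
    VOWEL_LETTERS = set("aeiou")

    def pronounce_the(next_word, stressed=False):
        if stressed or next_word[0].lower() in VOWEL_LETTERS:
            return "/ði/"   # "thee"
        return "/ðə/"       # schwa

    print(pronounce_the("apple"))                 # /ði/
    print(pronounce_the("book"))                  # /ðə/
    print(pronounce_the("book", stressed=True))   # /ði/

Native speakers apply something like this automatically every time they talk, which is exactly why it never comes up in English class.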

But even on the prescriptivist side of things, not all rules are created equal. There are a lot of rules that are generally covered in English classes, and they’re usually taught as simple black-and-white declarations: x is right and y is wrong. When people ask me questions about language, they usually seem to expect answers along these lines. But many issues of grammar and usage are complicated and have no clear right or wrong answer. Same with style—open up two different style guides, and you’ll often find two (or more) ways to punctuate, hyphenate, and capitalize. A lot of times these things boil down to issues of formality, context, and personal taste.

Unfortunately, most of us hear language rules expressed as inviolable laws all the way through public school and probably into college. It’s hard to overcome a dozen years or more of education on a subject and start to learn that maybe things aren’t as simple as you’ve been told, that maybe those trusted authorities and gatekeepers of the language, the English teachers, were not always well-informed. But as writing becomes more and more important in modern life, it likewise becomes more important to teach people meaningful, well-founded rules that aren’t two centuries old. It’s time for English class to get educated.

How I Became a Descriptivist

Believe it or not, I wasn’t always the grammar free-love hippie that I am now. I actually used to be known as quite a grammar nazi. This was back in my early days as an editor (during my first year or two of college) when I was learning lots of rules about grammar and usage and style, but before I had gotten into my major classes in English language, which introduced me to a much more descriptivist approach.

It was a gradual progression, starting with my class in modern American usage. Our textbook was Merriam-Webster’s Dictionary of English Usage, which is a fantastic resource for anyone interested in editing or the English language in general. The class opened my eyes to the complexities of usage issues and made me realize that few issues are as black-and-white as most prescriptivists would have you believe. And this was in a class in the editing minor of all places.

My classes in the English language major did even more to change my opinions about prescriptivism and descriptivism. Classes in Old English and the history of the English language showed me that although the language has changed dramatically over the centuries, it has never fallen into a state of chaos and decay. There has been clear, beautiful, compelling writing in every stage of the language (well, as long as there have been literate Anglo-Saxons, anyway).

But I think the final straw was annoyance with a lot of my fellow editors. Almost none of them seemed interested in doing anything other than following the strictures laid out in style guides and usage manuals (somehow Merriam-Webster’s Dictionary of English Usage never seemed to count as one of those references). And far too often, the changes they made did nothing to improve the clarity, readability, or accuracy of the text. Without any depth of knowledge about the issues, they were left without the ability to make informed judgements about what should be changed.

In fact, I would say that you can’t be a truly great editor unless you learn to approach things from a descriptivist perspective. Of course, in the end you’re still deciding how the text should be instead of simply talking about how it is, so you haven’t fully left prescriptivism behind. But it will be an informed prescriptivism, based on facts about current and historical usage, with a healthy dose of skepticism towards the rhetoric coming from the more fundamentalist prescriptivists.

And best of all, you’ll find that the sky won’t fall and the language won’t rapidly devolve into caveman grunts just because you stopped correcting all the instances of figurative over to more than. Everybody wins.

Scriptivists

The dispute between prescriptivism and descriptivism has sometimes been described as “a war that never ends.” Indeed, it often seems that the two sides are locked in an eternal struggle at polar opposites of the debate, neither willing to yield an inch. The prescriptivists are striving to uphold time-honored standards and defend the language from decay; the descriptivists are trying to overthrow the system and allow linguistic chaos to reign in its place.

But is that really a true picture of the situation?

I have met one or two descriptivists who felt that any English sentence produced by a native speaker should be considered perfectly correct. I’ve also edited enough writing to firmly disagree with that notion. But by and large, the descriptivists I’ve known have not been the anything-goes types that the prescriptivists often make them out to be. They may oppose the grammar nazis, but they are not grammar anarchists or grammar free-love hippies; they’re more along the lines of grammar democrats, in my opinion.

If the argument over grammatical standards really is a war that never ends, as Mark Halpern says, then perhaps the primary impetus that keeps it going is the fact that it is such a poorly defined conflict. Both sides have misrepresented the other, though from my perspective it seems that it is the descriptivists who are most misunderstood.

And though both sides will often make more moderate, conciliatory statements like “Well, of course there should be some sort of standard” or “Well, of course language changes and the rules need to change with it,” I’ve never seen editors and linguists sit down together and figure out just how much they really agree on. I think there are many instances where a prescriptivist might say, “English should be x,” and a descriptivist would say, “English is x”; that is, they’re agreeing on an aspect of the language, even if they’re approaching it from different angles.

The debate, of course, arises from those areas in which the descriptivist says, “English is x,” and the prescriptivist says, “Yeah, but it should be y.” But I’ve never gotten a satisfactory answer when I’ve asked, “But why should it be y?” And this, I think, is where prescriptivism goes astray.

Mark Halpern says, “Arbitrary laws—conventions—are just the ones that need enforcement, not the natural laws. The law of gravity can take care of itself; the law that you go on green and stop on red needs all the help it can get.” Reading this, I can’t help but wonder what sorts of linguistic car accidents or traffic jams would occur if we abandoned all of our arbitrary prescriptions. Does language really need our help, or can it take care of itself, too?

If language does need help—and I think that in areas like spelling and punctuation, it clearly does—how much does it need? How much does the strict separation between less and fewer contribute to the laudable goal of a standard form of the language? What about the proscription against they as an indefinite singular pronoun?

How often do prescriptivist rules really help anyone, and how often do they simply cultivate an air of disdain for those who don’t follow the rules? Mark Halpern says that nobody cares about split infinitives or ain’t anymore, but this is far from the truth. I’ve known too many editors and language buffs and read too many internet discussions about linguistic pet peeves to believe that.

Far too often, prescriptions serve not to facilitate the smooth and orderly flow of traffic but to impose regulations on a system that got by just fine for centuries without them. And far too often, prescriptivism serves only to create a class of self-appointed grammar police and to make those who can’t remember the arbitrary conventions self-conscious and insecure about their language.

The truth is this: as long as prescriptivism reigns, there will be an awful lot of arrant pedants in the world. And as long as descriptivists are falling down on the job of educating society about language, prescriptivists will never understand that change is not degeneration and that freedom is not anarchy.
