Arrant Pedantry

Who, That, and the Nature of Bad Rules

A couple of weeks ago the venerable John E. McIntyre blogged about a familiar prescriptive bugbear, the question of that versus who(m). It all started on the blog of the Society for the Promotion of Good Grammar, where a college English professor named Jacoby wrote in to share his justification for the rule, which is that you should avoid using that with human referents because it depersonalizes them. He calls this justification “quite profound,” which is probably a good sign that it’s not. Mr. McIntyre, ever the reasonable fellow, tried to inject some facts into the conversation, but apparently to no avail.

What I find most interesting about the whole discussion, however, is not the argument over whether that can be used with human referents, but what the whole argument says about prescriptivism and the way we talk about language and rules. (Indeed, the subject has already been covered very well by Gabe Doyle at Motivated Grammar, who made some interesting discoveries about relative pronoun usage that may indicate some cognitive motivation.) Typically, the person putting forth the rule assumes a priori that the rule is valid, and thereafter it seems that no amount of evidence or argument can change their mind. The entire discussion at the SPOGG blog proceeds without any real attempts to address Mr. McIntyre’s points, and it ends with the SPOGG correspondent who originally kicked off the discussion sullenly taking his football and going home.

James Milroy, an emeritus professor of sociolinguistics at the University of Michigan, once wrote that all rationalizations for prescriptions are post hoc; that is, the rules are taken to be true, and the justifications come afterward and really only serve to give the rule the illusion of validity:

Indeed all prescriptive arguments about correctness that depend on intra-linguistic factors are post-hoc rationalizations. . . . But an intra-linguistic rationalization is not the reason why some usages are believed to be wrong. The reason is that it is simply common sense: everybody knows it, it is part of the culture to know it, and you are an outsider if you think otherwise: you are not a participant in the common culture, and so your views can be dismissed. To this extent, linguists who state that I seen it is not ungrammatical are placing themselves outside the common culture.[1]

This may sound like a rather harsh description of prescriptivism, but I think there’s a lot of truth to it—especially the part about linguists unwittingly setting themselves outside of the culture. Linguists try to play the part of the boy who pointed out that the emperor has no clothes, but instead of breaking the illusion they are at best treated as suspect for not playing along. But the point linguists are trying to make isn’t that there’s no such thing as right or wrong in language (though there are some on the fringe who would make such claims)—they’re simply trying to point out that, quite frequently, the justifications are phony and attention to facts and evidence is mostly nonexistent. There are no real axioms or first principles from which prescriptive rules follow—at least, there don’t seem to be any that are consistently applied and followed to their logical conclusions. Instead the canon of prescriptions is a hodgepodge of style and usage opinions that have been passed down and are generally assumed to have the force of law. There are all kinds of unexamined assumptions packaged into prescriptions and their justifications, such as the following from Professor Jacoby:

  • Our society has a tendency to depersonalize people.
  • Depersonalizing people is bad.
  • Using that as a relative pronoun with human referents depersonalizes them.

There are probably more, but that covers the bases. Note that even if we agree that our society depersonalizes people and that this is a bad thing, it’s still quite a leap from this to the claim that that depersonalizes people. But, as Milroy argued, it’s not really about the justification. It’s about having a justification. You can go on until you’re blue in the face about the history of English relative pronoun usage (for instance, that demonstrative pronouns like that were the only option in Old English, and that this has changed several times over the last millennium and a half, and that it’s only recently that people have begun to claim that that with people is wrong) or about usage in other, related languages (such as German, which uses demonstrative pronouns as relative pronouns), but it won’t make any difference; at best, the person arguing for the rule will superficially soften their stance and make some bad analogies to fashion or ethics, saying that while it might not be a rule, it’s still a good guideline, especially for novices. After all, novices need rules that are more black and white—they need to use training wheels for a while before they can ride unaided. Too bad we also never stop to ask whether we’re actually providing novices with training wheels or just putting sticks in their spokes.

Meanwhile, prescriptivists frequently dismiss all evidence for one reason or another: It’s well established in the history of usage? Well, that just shows that people have always made mistakes. It’s even used by greats like Chaucer, Shakespeare, and other literary giants? Hey, even the greats make mistakes. Either that or they mastered the rules and thus know when it’s okay to break them. People today overwhelmingly break the rule? Well, that just shows how dire the situation is. You literally can’t win, because, as Geoffrey Pullum puts it, “nothing is relevant.”

So if most prescriptions are based on unexamined assumptions and post hoc rationalizations, where does that leave things? Do we throw it all out because it’s a charade? That seems rather extreme. There will always be rules, because that’s simply the nature of people. The question is, how do we establish which rules are valid, and how do we teach this to students and practice it as writers and editors? Honestly, I don’t know, but I know that it involves real research and a willingness to critically evaluate not only the rules but also the assumptions that underlie them. We have to stop having a knee-jerk reaction against linguistic methods and allow them to inform our understanding. And linguists need to learn that rules are not inherently bad. Indeed, as John Algeo put it, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense.”[2]

  1. James Milroy, “Language Ideologies and the Consequences of Standardization,” Journal of Sociolinguistics 5, no. 4 (November 2001): 536.
  2. John Algeo, “Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 (December 1969): 276.

Scriptivists Revisited

Before I begin: I know—it’s been a terribly, horribly, unforgivably long time since my last post. Part of it is that I’m often busy with grad school and work and family, and part of it is that I’ve been thinking an awful lot lately about prescriptivism and descriptivism and linguists and editors and don’t really know where to begin.

I know that I’ve said some harsh things about prescriptivists before, but I don’t actually hate prescriptivism in general. As I’ve said before, prescriptivism and descriptivism are not really diametrically opposed, as some people believe they are. Stan Carey explores some of the common ground between the two in a recent post, and I think there’s a lot more to be said about the issue.

I think it’s possible to be a descriptivist and prescriptivist simultaneously. In fact, I think it’s difficult if not impossible to fully disentangle the two approaches. The fact is that many or most prescriptive rules are based on observed facts about the language, even though those facts may be incomplete or misunderstood in some way. Very seldom does anyone make up a rule out of whole cloth that bears no resemblance to reality. Rules often arise because someone has observed a change or variation in the language and is seeking to slow or reverse that change (as in insisting that “comprised of” is always an error) or to regularize the variation (as in insisting that “which” be used for nonrestrictive relative clauses and “that” for restrictive ones).

One of my favorite language blogs, Motivated Grammar, declares “Prescriptivism must die!” but to be honest, I’ve never quite been comfortable with that slogan. Now, I love a good debunking of language myths as much as the next guy—and Gabe Doyle does a commendable job of it—but not all prescriptivism is a bad thing. The impulse to identify and fix potential problems with the language is a natural one, and it can be used for both good and ill. Just take a look at the blogs of John E. McIntyre, Bill Walsh, and Jan Freeman for examples of well-informed, sensible language advice. Unfortunately, as linguists and many others know, senseless language advice is all too common.

Linguists often complain about and debunk such bad language advice—and rightly so, in my opinion—but I think in doing so they often make the mistake of dismissing prescriptivism altogether. Too often linguists view prescriptivism as an annoyance to be ignored or as a rival approach that must be quashed, but either way they miss the fact that prescriptivism is a metalinguistic phenomenon worth exploring and understanding. And why is it worth exploring? Because it’s an essential part of how ordinary speakers—and even linguists—use language in their daily lives, whether they realize it or not.

Contrary to what a lot of linguists say, language isn’t really a natural phenomenon—it’s a learned behavior. And as with any other human behavior, we generally strive to make our language match observed standards. Or as Emily Morgan so excellently says in a guest post on Motivated Grammar, “Language is something that we as a community of speakers collectively create and reinvent each time we speak.” She says that this means that language is “inextricably rooted in a descriptive generalization about what that community does,” but it also means that it is rooted in prescriptive notions of language. Because when speakers create and reinvent language, they do so by shaping their language to fit listeners’ expectations.

That is, for the most part, there’s no difference in speakers’ minds between what they should do with language and what they do do with language. They use language the way they do because they feel as though they should, and this in turn reinforces the model that influences everyone else’s behavior. I’ve often reflected on the fact that style guides like The Chicago Manual of Style will refer to dictionaries for spelling issues—thus prescribing how to spell—but these dictionaries simply describe the language found in edited writing. Description and prescription feed each other in an endless loop. This may not be mathematical logic, but it is a sort of logic nonetheless. Philosophers love to say that you can’t derive an ought from an is, and yet people do it all the time. If you want to fit in with a certain group, then you should behave in such a way as to be accepted by that group, and that group’s behavior is simply an aggregate of the behaviors of everyone else trying to fit in.

And at this point, linguists are probably thinking, “And people should be left alone to behave the way they wish to behave.” But leaving people alone means letting them decide which behaviors to favor and which to disfavor—that is, which rules to create and enforce. Linguists often criticize those who create and propagate rules, as if such rules are bad simply as a result of their artificiality, but, once again, the truth is that all language is artificial; it doesn’t exist until we make it exist. And if we create it, why should we always be coolly dispassionate about it? Objectivity might be great in the scientific study of language, but why should language users approach language the same way? Why should we favor “natural” or “spontaneous” changes and yet disfavor more conscious changes?

This is something that Deborah Cameron addresses in her book Verbal Hygiene (which I highly, highly recommend)—the notion that “spontaneous” or “natural” changes are okay, while deliberate ones are meddlesome and should be resisted. As Cameron counters, “If you are going to make value judgements at all, then surely there are more important values than spontaneity. How about truth, beauty, logic, utility?” (1995, 20). Of course, linguists generally argue that an awful lot of prescriptions do nothing to create more truth, beauty, logic, or utility, and this is indeed a problem, in my opinion.

But when linguists debunk such spurious prescriptions, they miss something important: people want language advice from experts, and they’re certainly not getting it from linguists. The industry of bad language advice exists partly because the people who arguably know the most about how language really works—the linguists—aren’t at all interested in giving advice on language. Often they take the hands-off attitude exemplified in Robert Hall’s book Leave Your Language Alone, crying, “Linguistics is descriptive, not prescriptive!” But in doing so, linguists are nonetheless injecting themselves into the debate rather than simply observing how people use language. If an objective, hands-off approach is so valuable, then why don’t linguists really take their hands off and leave prescriptivists alone?

I think the answer is that there’s a lot of social value in following language rules, whether or not they are actually sensible. And linguists, being the experts in the field, don’t like ceding any social or intellectual authority to a bunch of people that they view as crackpots and petty tyrants. They chafe at the idea that such ill-informed, superstitious advice—what Language Log calls “prescriptivist poppycock”—can or should have any value at all. It puts informed language users in the position of having to decide whether to follow a stupid rule so as to avoid drawing the ire of some people or to break the rule and thereby look stupid to those people. Arnold Zwicky explores this conundrum in a post titled “Crazies Win.”

Note something interesting at the end of that post: Zwicky concludes by giving his own advice—his own prescription—regarding the issue of split infinitives. Is this a bad thing? No, not at all, because prescriptivism is not the enemy. As John Algeo said in an article in College English, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense” (“Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 [December 1969]: 276). As I’ve said before, the nonsense is abundant. Just look at this awful Reader’s Digest column or this article on a Monster.com site for teachers for a couple of recent examples.

Which brings me back to a point I’ve made before: linguists need to be more involved in not just educating the public about language, but in giving people the sensible advice they want. Trying to kill prescriptivism is not the answer to the language wars, and truly leaving language alone is probably a good way to end up with a dead language. Exploring it and trying to figure out how best to use it—this is what keeps language alive and thriving and interesting. And that’s good for prescriptivists and descriptivists alike.
