Arrant Pedantry

Does Prescriptivism Have Moral Worth?

I probably shouldn’t be getting into this again, but I think David Bentley Hart’s latest post on language (a follow-up to the one I last wrote about) deserves a response. You see, even though he’s no longer cloaking his peeving with the it’s-just-a-joke-but-no-seriously defense, I think he’s still cloaking his arguments in something else: spurious claims about the nature of descriptivism and the rational and moral superiority of prescriptivism. John McIntyre has already taken a crack at these claims, and I think he’s right on: Hart’s description of descriptivists doesn’t match any descriptivists I know, and his claims about prescriptivism’s rational and moral worth are highly suspect.

Hart gets off to a bad start when he says that “most of [his convictions] require no defense” and then says that “if you can find a dictionary that, say, allows ‘reluctant’ as a definition of ‘reticent,’ you will also find it was printed in Singapore under the auspices of ‘The Happy Luck Goodly Englishing Council.’” Even when he provides a defense, he’s wrong: the Oxford English Dictionary contains precisely that definition, sense 2: “Reluctant to perform a particular action; hesitant, disinclined. Chiefly with about, or to do something.” The first illustrative quotation is from 1875, only 50 years after the first quote for the traditionally correct definition: “The State registrar was just as reticent to give us information.” So much for the Happy Luck Goodly Englishing Council. (Oh, wait, let me guess—this is just another self-undermining flippancy.)

I’m glad that Hart avoids artificial rules such as the proscription against restrictive which and recognizes that “everyone who cares about such matters engages in both prescription and description, often confusing the two”—a point which many on both sides fail to grasp. But I’m disappointed when he says, “The real question, at the end of the day, is whether any distinction can be recognized, or should be maintained, between creative and destructive mutations,” and then utterly fails to address the question. Instead he merely defends his peeves and denigrates as hypocrites those who argue against his peeves without embracing the disputed senses themselves. But I don’t want to get embroiled in discussions about whether using reticent to mean “reluctant” is right or wrong or has a long, noble heritage or is an ignorant vulgarism—that’s all beside the point and doesn’t get at the claims Hart employs to justify his peeves.

But near the end, he does say that his “aesthetic prejudice” is also a “coherent principle” because “persons can mean only what they have the words to say, and so the finer our distinctions and more precise our definitions, the more we are able to mean.” On the surface this may seem like a nice sentiment, but I don’t think it’s nearly as coherent as Hart would like to think. First of all, it smacks of the Whorfian hypothesis, the idea that words give you the power to mean things that you couldn’t otherwise mean. I’m fairly confident I could mean “disinclined to speak” even if the word reticent were nonexistent. (Note that even if the “reluctant” meaning completely overtakes the traditional one, we’ll still have words like reserved and taciturn.) Furthermore, it’s possible that certain words lose their original meanings because they weren’t very useful meanings to begin with. Talking about the word decimate, for example, Jan Freeman says, “We don’t especially need a term that means ‘kill one in 10.’” So even if we accept the idea that preserving distinctions is a good thing, we need to ask whether this distinction is a boon to the language and its speakers.

And if defending fine distinctions and precise definitions is such a noble cause, why don’t prescriptivists scour the lexicon for distinctions that can be made finer and definitions that can be made more precise? Why don’t we busy ourselves with coining new words to convey new meanings that would be useful to English speakers? Hart asks whether there can be creative mutations, but he never gives an example of one or even speculates on what one might look like. Perhaps to him all mutations are destructive. Or perhaps there’s some unexplained reason why defending existing meanings is noble but creating new ones is not. Hart never says.

At the end of the day, my question is whether there really is any worth to prescriptivism. Have the activities of prescriptivists actually improved our language—or at least kept it from degenerating—or is it just an excuse to rail against people for their lexical ignorance? Sometimes, when I read articles like Hart’s, I’m inclined to think it’s the latter. I don’t see how his litany of peeves contributes much to the “clarity, precision, subtlety, nuance, and poetic richness” of language, and I think his warning against the “leveling drabness of mass culture” reveals his true intent—he wants to maintain an aristocratic language for himself and other like-minded individuals.

But I don’t think this is what prescriptivism really is, or at least not what it should be. So does prescriptivism have value? I think so, but I’m not entirely sure what it is. To be honest, I’m still sorting out my feelings about prescriptivism. I know I frequently rail against bad prescriptivism, but I certainly don’t think all prescriptivism is bad. I get paid to be a prescriber at work, where it’s my job to clean up others’ prose, but I try not to let my own pet peeves determine my approach to language. I know this looks like I’m doing exactly what I criticized Hart for doing—raising a question and then dodging it—but I’m still trying to find the answer myself. Perhaps I’ll get some good, thoughtful comments on the issue. Perhaps I just need more time to mull it over and sort out my feelings. At any rate, this post is already too long, so I’ll have to leave it for another time.

It’s just a joke. But no, seriously.

I know I just barely posted about the rhetoric of prescriptivism, but it’s still on my mind, especially after the recent post by David Bentley Hart and the responses by John E. McIntyre (here and here) and Robert Lane Greene. I know things are just settling down, and my intent here is not to throw more fuel on the fire, but to draw attention to what I believe is a problematic trend in the rhetoric of prescriptivism. Hart claims that his piece is just some light-hearted humor, but as McIntyre, Greene, and others have complained, it doesn’t really feel like humor.

That is, while it is clear that Hart doesn’t really believe that the acceptance of solecisms leads to the acceptance of cannibalism, it seems that he really does believe that solecisms are a serious problem. Indeed, Hart says, “Nothing less than the future of civilization itself is at issue—honestly—and I am merely doing my part to stave off the advent of an age of barbarism.” If it’s all a joke, as he says, then this statement is somewhat less than honest. And as at least one person says in the comments, Hart’s style is close to self-parody. (As an intellectual exercise, just try to imagine what a real parody would look like.) Perhaps I’m just being thick, but I can only see two reasons for such a style: first, it’s a genuine parody designed to show just how ridiculous the peevers are, or second, it’s a cover for genuine peeving.

I’ve seen this same phenomenon at work in the writings of Lynne Truss, Martha Brockenbrough, and others. They make some ridiculously over-the-top statements about the degenerate state of language today, they get called on it, and then they or their supporters put up the unassailable defense: It’s just a joke, see? Geez, lighten up! Also, you’re kind of a dimwit for not getting it.

That is, not only is it a perfect defense for real peeving, but it’s a booby trap for anyone who dares to criticize the peever—by refusing to play the game, they put themselves firmly in the out group, while the peeve-fest typically continues unabated. As Arnold Zwicky once noted, the “dead-serious advocacy of what [they take] to be the standard rules of English . . . makes the just-kidding defense of the enterprise ring hollow.” But I think it does more than just that: I think it undermines the credibility of prescriptivism in general. Joking or not, the rhetoric is polarizing and admits of no criticism. It reinforces the notion that “discussion is not part of the agenda of the prescriptive grammarian.”[1] It makes me dislike prescriptivism in general, even though I actually agree with several of Hart’s points of usage.

As I said above, the point of this post was not to reignite a dying debate between Hart and his critics, but to draw attention to what I think is a serious problem surrounding the whole issue. In other words, I may not be worried about the state of the language, but I certainly am worried about the state of the language debate.

  [1] James Milroy, “The Consequences of Standardisation in Descriptive Linguistics,” in Standard English: The Widening Debate, ed. Tony Bex and Richard J. Watts (New York: Routledge, 1999), 21.

Who, That, and the Nature of Bad Rules

A couple of weeks ago the venerable John E. McIntyre blogged about a familiar prescriptive bugbear, the question of that versus who(m). It all started on the blog of the Society for the Promotion of Good Grammar, where a college English professor named Jacoby wrote in to share his justification for the rule, which is that you should avoid using that with human referents because it depersonalizes them. He calls this justification “quite profound,” which is probably a good sign that it’s not. Mr. McIntyre, ever the reasonable fellow, tried to inject some facts into the conversation, but apparently to no avail.

What I find most interesting about the whole discussion, however, is not the argument over whether that can be used with human referents, but what the whole argument says about prescriptivism and the way we talk about language and rules. (Indeed, the subject has already been covered very well by Gabe Doyle at Motivated Grammar, who made some interesting discoveries about relative pronoun usage that may indicate some cognitive motivation.) Typically, the person putting forth the rule assumes a priori that the rule is valid, and thereafter it seems that no amount of evidence or argument can change their mind. The entire discussion at the SPOGG blog proceeds without any real attempts to address Mr. McIntyre’s points, and it ends with the SPOGG correspondent who originally kicked off the discussion sullenly taking his football and going home.

James Milroy, an emeritus professor of sociolinguistics at the University of Michigan, once wrote that all rationalizations for prescriptions are post hoc; that is, the rules are taken to be true, and the justifications come afterward and really only serve to give the rule the illusion of validity:

Indeed all prescriptive arguments about correctness that depend on intra-linguistic factors are post-hoc rationalizations. . . . But an intra-linguistic rationalization is not the reason why some usages are believed to be wrong. The reason is that it is simply common sense: everybody knows it, it is part of the culture to know it, and you are an outsider if you think otherwise: you are not a participant in the common culture, and so your views can be dismissed. To this extent, linguists who state that I seen it is not ungrammatical are placing themselves outside the common culture.[1]

This may sound like a rather harsh description of prescriptivism, but I think there’s a lot of truth to it—especially the part about linguists unwittingly setting themselves outside of the culture. Linguists try to play the part of the boy who pointed out that the emperor has no clothes, but instead of breaking the illusion they are at best treated as suspect for not playing along. But the point linguists are trying to make isn’t that there’s no such thing as right or wrong in language (though there are some on the fringe who would make such claims)—they’re simply trying to point out that, quite frequently, the justifications are phony and attention to facts and evidence is mostly nonexistent. There are no real axioms or first principles from which prescriptive rules follow—at least, there don’t seem to be any that are consistently applied and followed to their logical conclusions. Instead the canon of prescriptions is a hodgepodge of style and usage opinions that have been passed down and are generally assumed to have the force of law. There are all kinds of unexamined assumptions packaged into prescriptions and their justifications, such as the following from Professor Jacoby:

  • Our society has a tendency to depersonalize people.
  • Depersonalizing people is bad.
  • Using that as a relative pronoun with human referents depersonalizes them.

There are probably more, but that covers the bases. Note that even if we agree that our society depersonalizes people and that this is a bad thing, it’s still quite a leap from this to the claim that that depersonalizes people. But, as Milroy argued, it’s not really about the justification. It’s about having a justification. You can go on until you’re blue in the face about the history of English relative pronoun usage (for instance, that demonstrative pronouns like that were the only option in Old English, and that this has changed several times over the last millennium and a half, and that it’s only recently that people have begun to claim that that with people is wrong) or about usage in other, related languages (such as German, which uses demonstrative pronouns as relative pronouns), but it won’t make any difference; at best, the person arguing for the rule will superficially soften their stance and make some bad analogies to fashion or ethics, saying that while it might not be a rule, it’s still a good guideline, especially for novices. After all, novices need rules that are more black and white—they need to use training wheels for a while before they can ride unaided. Too bad we also never stop to ask whether we’re actually providing novices with training wheels or just putting sticks in their spokes.

Meanwhile, prescriptivists frequently dismiss all evidence for one reason or another: It’s well established in the history of usage? Well, that just shows that people have always made mistakes. It’s even used by greats like Chaucer, Shakespeare, and other literary giants? Hey, even the greats make mistakes. Either that or they mastered the rules and thus know when it’s okay to break them. People today overwhelmingly break the rule? Well, that just shows how dire the situation is. You literally can’t win, because, as Geoffrey Pullum puts it, “nothing is relevant.”

So if most prescriptions are based on unexamined assumptions and post hoc rationalizations, where does that leave things? Do we throw it all out because it’s a charade? That seems rather extreme. There will always be rules, because that’s simply the nature of people. The question is, how do we establish which rules are valid, and how do we teach this to students and practice it as writers and editors? Honestly, I don’t know, but I know that it involves real research and a willingness to critically evaluate not only the rules but also the assumptions that underlie them. We have to stop having a knee-jerk reaction against linguistic methods and allow them to inform our understanding. And linguists need to learn that rules are not inherently bad. Indeed, as John Algeo put it, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense.”[2]

  [1] James Milroy, “Language Ideologies and the Consequences of Standardization,” Journal of Sociolinguistics 5, no. 4 (November 2001): 536.
  [2] John Algeo, “Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 (December 1969): 276.