Arrant Pedantry

Winners!

After much deliberation, I have two winners for the Kindle 3G / You Are What You Speak giveaway contest. There were a lot of good suggestions that would have made great posts, though I felt unqualified or underqualified to tackle some of those topics myself. I might try to get to some of the non-winning topics if I have time.

So without further ado, here are the winners: second prize, a copy of Robert Lane Greene’s You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity, goes to Bob Scopatz for his topic suggestion of neuter pronouns. First prize goes to Erin Brenner for her suggestion (via Twitter), “How about a post on prescriptivism/descriptivism as a continuum rather than two sides? Why does it have to be either/or?” StackExchange allowed her to choose one of the new Kindle models, so Erin opted for a not-even-released-yet Kindle Fire. I’ll try to have a post on each topic within the next week or so.

Thanks to all those who submitted an idea, and special thanks again to Stack Exchange English Language and Usage for sponsoring. If you haven’t already, please go check out their site, as well as their new Linguistics site.

Contest Reminders

Just a reminder that my blog is currently competing in Grammar.net’s Best Grammar Blog of 2011 contest. Arrant Pedantry is currently in third. If you like my blog, please go vote.

Also, the deadline for submissions for my own contest sponsored by Stack Exchange English Language and Usage is fast approaching. Submit an idea for a future post here on Arrant Pedantry, and you’ll be entered to win either a new Kindle 3G or a copy of Robert Lane Greene’s You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity. Post a comment on that post or send me a tweet @ArrantPedantry. The last day for entries is September 30th.

It’s Not Wrong, but You Still Shouldn’t Do It

A couple of weeks ago, in my post “The Value of Prescriptivism,” I mentioned some strange reasoning that I wanted to talk about later—the idea that there are many usages that are not technically wrong, but you should still avoid them because other people think they’re wrong. I used the example of a Grammar Girl post on hopefully wherein she lays out the arguments in favor of disjunct hopefully and debunks some of the arguments against it—and then advises, “I still have to say, don’t do it.” She then adds, however, “I am hopeful that starting a sentence with hopefully will become more acceptable in the future.”

On the face of it, this seems like a pretty reasonable approach. Sometimes the considerations of the reader have to take precedence over the facts of usage. If the majority of your readers will object to your word choice, then it may be wise to pick a different word. But there’s a different way to look at this, which is that the misinformed opinions of a very small but very vocal subset of readers take precedence over the facts and the opinions of others. Arnold Zwicky wrote about this phenomenon a few years ago in a Language Log post titled “Crazies win”.

Addressing split infinitives and the equivocal advice to avoid them unless it’s better not to, Zwicky says that “in practice, [split infinitive as last resort] is scarcely an improvement over [no split infinitives] and in fact works to preserve the belief that split infinitives are tainted in some way.” He then adds that the “only intellectually justifiable advice” is to “say flatly that there’s nothing wrong with split infinitives and you should use them whenever they suit you”. I agree wholeheartedly, and I’ll explain why.

The problem with the it’s-not-wrong-but-don’t-do-it philosophy is that, while it feels like a moderate, open-minded, and more descriptivist approach in theory, it is virtually indistinguishable from the it’s-wrong-so-don’t-do-it philosophy in practice. You can cite all the linguistic evidence you want, but it’s still trumped by the fact that you’d rather avoid annoying that small subset of readers. It pays lip service to the idea of descriptivism informing your prescriptions, but the prescription is effectively the same. All you’ve changed is the justification for avoiding the usage.

Even a more neutral and descriptive piece like this New York Times “On Language” article on singular they ends with a wistful “It’s a shame that grammarians ever took umbrage at the singular they,” adding, “Like it or not, the universal they isn’t universally accepted — yet. Its fate is now in the hands of the jury, the people who speak the language.” Even though the authors seem to avoid giving advice, it’s still implicit in the conclusion. It’s great to inform readers about the history of usage debates, but what they’ll most likely come away with is the conclusion that it’s wrong—or at least tainted—so they shouldn’t use it.

The worst thing about this waffly kind of advice, I think, is that it lets usage commentators duck responsibility for influencing usage. They tell you all the reasons why it should be alright to use hopefully or split infinitives or singular they, but then they sigh and put them away in the linguistic hope chest, telling you that you can’t use them yet, but maybe someday. Well, when? If all the usage commentators are saying, “It’s not acceptable yet,” at what point are they going to decide that it suddenly is acceptable? If you always defer to the peevers and crazies, it will never be acceptable (unless they all happen to die off without transmitting their ideas to the next generation).

And furthermore, I’m not sure it’s a worthwhile endeavor to try to avoid offending or annoying anyone in your writing. It reminds me of Aesop’s fable of the man, the boy, and the donkey: people will always find something to criticize, so it’s impossible to behave (or write) in such a way as to always avoid criticism. As the old man at the end says, “Please all, and you will please none.” You can’t please everyone, so you have to make a choice: will you please the small but vocal peevers, or the more numerous reasonable people? If you believe there’s nothing technically wrong with hopefully or singular they, maybe you should stand by those beliefs instead of caving to the critics. And perhaps through your reasonable but firm advice and your own exemplary writing, you’ll help a few of those crazies come around.

Contests!

Topic Contest

I’m very pleased to announce the first-ever contest here at Arrant Pedantry, sponsored by the generous folks at Stack Exchange English Language and Usage. The first-prize winner will receive a new Kindle 3G.

A Word from Our Sponsor

Stack Exchange English Language and Usage is a collaborative, community-driven site focused on questions about grammar, etymology, usage, dialects, and other aspects of the English language. For example, you can ask about the pronunciation of the names of the letters of the alphabet, the appropriate use of the semicolon, or the factual basis for pirate speech (appropriate for yesterday’s Talk Like a Pirate Day).

Stack Exchange English Language and Usage is a great resource for people looking for answers to those often obscure questions about language that we all have from time to time. Stack Exchange features an involved community of language experts, amateurs, and enthusiasts who are willing and able to tackle questions on a variety of topics. Please go check it out, and consider following StackEnglish on Twitter.

The Rules

And now on to business. To enter, submit a request for a future topic you’d like to see covered here on Arrant Pedantry. It can be a question about usage, etymology, how I can call myself an editor when I think a lot of the rules are bogus—whatever you want. (Keep it civil, of course.) Post your request either in the comments below or on Twitter @ArrantPedantry. I’ll pick the two best suggestions and write a post on each of them. One lucky winner will receive the grand prize of a new Kindle 3G; one slightly less lucky winner will receive a copy of Robert Lane Greene’s You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity (which I’ll try to review sometime soon).

The deadline for entries is Friday, September 30th. Only contestants in the continental US, Canada, and Western Europe are eligible. Employees of Stack Exchange and relatives of mine are not eligible. Spread the word!

And while you’re at it, check out the limerick contest at Sentence First, also sponsored by Stack Exchange English Language and Usage.

Addendum: My blog is currently getting bombarded by spammers, so if your comment doesn’t go through for some reason, please let me know through the contact page or by direct message on Twitter.

Update: The contest is now closed to submissions. I’ll go over all of them and announce the winners soon.

Best Grammar Blog of 2011

As you may have noticed, my blog has been preselected as a finalist for Grammar.net’s Best Grammar Blog of 2011 contest. I’m up against some excellent grammar and language blogs, so I’m honored to have been chosen. Voting for this contest starts on September 26th and runs through October 17th. If you enjoy my blog, please go and vote!

What Is a Namesake?

I just came across the sentence “George A. Smith became the namesake for St. George, Utah” while editing. A previous editor had changed it to “In 1861 St. George, Utah, became the namesake of George A. Smith.” Slightly awkward wording aside, I preferred the unedited form. Apparently, though, this is an issue of divided usage, with some saying that a namesake is named after someone else, some saying that a namesake is someone after whom someone else is named, some saying that both are correct, and some saying that namesakes simply share the same name without one being named after the other.

But I’d like to get a better idea of which definitions are most common, so I’m putting up this nice little poll. Let me know your feelings on the matter, and feel free to explain your vote in the comments below.

Smelly Grammar

Earlier today on Twitter, Mark Allen posted a link to this column on the Columbia Journalism Review’s website about a few points of usage. It begins with a familiar anecdote about dictionary maker Samuel Johnson and proceeds to analyze the grammar and usage of the exchange between him and an unidentified woman.

Pretty quickly, though, the grammatical analysis goes astray. The author says that in Johnson’s time, the proper use of smell was as an intransitive verb, hence Johnson’s gentle but clever reproach. But the woman did indeed use smell as an intransitive verb—note that she didn’t say “I smell you”—so that can’t possibly be the reason why Johnson objected to it. And furthermore, the OED gives both transitive and intransitive senses of the verb smell tracing back to the late 1100s and early 1200s.

Johnson’s own dictionary simply defines smell as “to perceive by the nose” but does not say anything about transitivity. But note that it only identifies the perception of smell and not the production of it. Johnson produced a smell; the lady perceived it. Perhaps this is what his repartee was about, not the verb’s transitivity but who its subject was. But even this doesn’t hold up against the evidence: the OED lists both the “perceive an odor” and “emit an odor” senses, dating to 1200 and 1175, respectively. And the more specific sense of “emit an unpleasant odor” dates to 1400. By Johnson’s day, English speakers had been saying “You smell” to mean “You stink” for at least three hundred years. Merriam-Webster’s Dictionary of English Usage says nothing on this point, though it’s possible that other usage guides have addressed it.

But perhaps the biggest problem with the story is that I can’t find an attestation of it earlier than 1950 in Google Books. (If you can find an earlier one, let me know in the comments.) This anecdote seems more like a modern fabrication about a spurious point of usage than a real story that encapsulates an example of language change. But the most disappointing thing about the Columbia Journalism Review piece is its sloppy grammatical analysis. Transitivity is a pretty basic concept in grammar, but the author consistently gets it wrong; she’s really talking about thematic roles. And the historical facts of usage don’t line up with the argument, either.

I’m sure some of you are thinking, “But you’re missing the point! The point is that good usage matters.” But my point is that the facts matter, too, and you can’t talk about good usage without being aware of the facts. You can’t come to a better understanding of the truth by combining apocryphal anecdotes with a little misguided grammatical analysis. The sad truth is that an awful lot of usage commentators really don’t understand the grammatical points on which they comment, and I think that’s unfortunate, because understanding those points gives one better tools with which to analyze real usage.

The Value of Prescriptivism

Last week I asked rather skeptically whether prescriptivism had moral worth. John McIntyre was intrigued by my question and musing in the last paragraph, and he took up the question (quite admirably, as always) and responded with his own thoughts on prescriptivism. What I see in his post is neither a coherent principle nor an innately moral argument, as Hart argued, but rather a set of sometimes-contradictory principles mixed with personal taste—and I think that’s okay.

Even Hart’s coherent principle is far from coherent when you break it down. The “clarity, precision, subtlety, nuance, and poetic richness” that he touts are really a bundle of conflicting goals. Clear wording may come at the expense of precision, subtlety, and nuance. Subtlety may not be very clear or precise. And so on. And even if these are all worthy goals, there may be many more that are missing.

McIntyre notes several more goals for practical prescriptivists like editors, including effectiveness, respect for an author’s voice, consistency with a set house style, and consideration of reader reactions, which is a quagmire in its own right. As McIntyre notes, some readers may have fits when they see sentence-disjunct “hopefully”, while other readers may find workarounds like “it is to be hoped that” to be stilted.

Of course, any appeal to the preferences of the reader (which is, in a way, more of a construct than a real entity) still requires decision making: which readers are you appealing to? Many of those who give usage advice seem to defer to the sticklers and pedants, even when it can be shown that they’re pretty clearly wrong or at least holding to outdated and somewhat silly notions. Grammar Girl, for example, guides readers through the arguments for and against “hopefully”, repeatedly saying that she hopes it becomes acceptable someday (note how carefully she avoids using “hopefully” herself, even though she claims to support it) but ultimately shies away from the usage, saying that you should avoid it for now because it’s not acceptable yet. (I’ll write about the strange reasoning presented here some other time.)

But whether or not you give in to the pedants and cranks who write angry letters to lecture you on split infinitives and stranded prepositions, it’s still clear that there’s value in considering the reader’s wishes while writing and editing. The author wants to communicate something to an audience; the audience presumably wants to receive that communication. It’s in both parties’ best interests if that communication goes off without a hitch, which is where prescriptivism can come in.

As McIntyre already said, this doesn’t give you an instant answer to every question, but it can give you some methods of gauging roughly how acceptable certain words or constructions are. Ben Yagoda provides his own “somewhat arbitrary metric” for deciding when to fight for a traditional meaning and when to let it go. But the key word here is “arbitrary”; there is no absolute truth in usage, no clear, authoritative source to which you can appeal to solve these questions.

Nevertheless, I believe the prescriptive motivation—the desire to make our language as good as it can be—is, at its core, a healthy one. It leads us to strive for clear and effective communication. It leads us to seek out good language to use as a model. And it slows language change and helps to ensure that writing will be more understandable to audiences that are removed spatially and temporally. But when you try to turn this into a coherent principle to instruct writers on individual points of usage, like transpire or aggravate or enormity, well, then you start running into trouble, because that approach favors fiat over reason and evidence. But I think that an interest in clear and effective language, tempered with a healthy dose of facts and an acknowledgement that the real truth is often messy, can be a boon to all involved.

Does Prescriptivism Have Moral Worth?

I probably shouldn’t be getting into this again, but I think David Bentley Hart’s latest post on language (a follow-up to the one I last wrote about) deserves a response. You see, even though he’s no longer cloaking his peeving with the it’s-just-a-joke-but-no-seriously defense, I think he’s still cloaking his arguments in something else: spurious claims about the nature of descriptivism and the rational and moral superiority of prescriptivism. John McIntyre has already taken a crack at these claims, and I think he’s right on: Hart’s description of descriptivists doesn’t match any descriptivists I know, and his claims about prescriptivism’s rational and moral worth are highly suspect.

Hart gets off to a bad start when he says that “most of [his convictions] require no defense” and then says that “if you can find a dictionary that, say, allows ‘reluctant’ as a definition of ‘reticent,’ you will also find it was printed in Singapore under the auspices of ‘The Happy Luck Goodly Englishing Council.’” Even when he provides a defense, he’s wrong: the Oxford English Dictionary contains precisely that definition, sense 2: “Reluctant to perform a particular action; hesitant, disinclined. Chiefly with about, or to do something.” The first illustrative quotation is from 1875, only 50 years after the first quote for the traditionally correct definition: “The State registrar was just as reticent to give us information.” So much for the Happy Luck Goodly Englishing Council. (Oh, wait, let me guess—this is just another self-undermining flippancy.)

I’m glad that Hart avoids artificial rules such as the proscription against restrictive which and recognizes that “everyone who cares about such matters engages in both prescription and description, often confusing the two”—a point which many on both sides fail to grasp. But I’m disappointed when he says, “The real question, at the end of the day, is whether any distinction can be recognized, or should be maintained, between creative and destructive mutations,” and then utterly fails to address the question. Instead he merely defends his peeves and denigrates as hypocrites those who argue against his peeves without embracing the disputed senses themselves. But I don’t want to get embroiled in discussions about whether reticent to mean “reluctant” is right or wrong or has a long, noble heritage or is an ignorant vulgarism—that’s all beside the point and doesn’t get to the claims Hart employs to justify his peeves.

But near the end, he does say that his “aesthetic prejudice” is also a “coherent principle” because “persons can mean only what they have the words to say, and so the finer our distinctions and more precise our definitions, the more we are able to mean.” On the surface this may seem like a nice sentiment, but I don’t think it’s nearly as coherent as Hart would like to think. First of all, it smacks of the Whorfian hypothesis, the idea that words give you the power to mean things that you couldn’t otherwise mean. I’m fairly confident I could mean “disinclined to speak” even if the word reticent were nonexistent. (Note that even if the “reluctant” meaning completely overtakes the traditional one, we’ll still have words like reserved and taciturn.) Furthermore, it’s possible that certain words lose their original meanings because they weren’t very useful meanings to begin with. Talking about the word decimate, for example, Jan Freeman says, “We don’t especially need a term that means ‘kill one in 10.’” So even if we accept the idea that preserving distinctions is a good thing, we need to ask whether this distinction is a boon to the language and its speakers.

And if defending fine distinctions and precise definitions is such a noble cause, why don’t prescriptivists scour the lexicon for distinctions that can be made finer and definitions that can be made more precise? Why don’t we busy ourselves with coining new words to convey new meanings that would be useful to English speakers? Hart asks whether there can be creative mutations, but he never gives an example of one or even speculates on what one might look like. Perhaps to him all mutations are destructive. Or perhaps there’s some unexplained reason why defending existing meanings is noble but creating new ones is not. Hart never says.

At the end of the day, my question is whether there really is any worth to prescriptivism. Have the activities of prescriptivists actually improved our language—or at least kept it from degenerating—or is it just an excuse to rail against people for their lexical ignorance? Sometimes, when I read articles like Hart’s, I’m inclined to think it’s the latter. I don’t see how his litany of peeves contributes much to the “clarity, precision, subtlety, nuance, and poetic richness” of language, and I think his warning against the “leveling drabness of mass culture” reveals his true intent—he wants to maintain an aristocratic language for himself and other like-minded individuals.

But I don’t think this is what prescriptivism really is, or at least not what it should be. So does prescriptivism have value? I think so, but I’m not entirely sure what it is. To be honest, I’m still sorting out my feelings about prescriptivism. I know I frequently rail against bad prescriptivism, but I certainly don’t think all prescriptivism is bad. I get paid to be a prescriber at work, where it’s my job to clean up others’ prose, but I try not to let my own pet peeves determine my approach to language. I know this looks like I’m doing exactly what I criticized Hart for doing—raising a question and then dodging it—but I’m still trying to find the answer myself. Perhaps I’ll get some good, thoughtful comments on the issue. Perhaps I just need more time to mull it over and sort out my feelings. At any rate, this post is already too long, so I’ll have to leave it for another time.

It’s just a joke. But no, seriously.

I know I just barely posted about the rhetoric of prescriptivism, but it’s still on my mind, especially after the recent post by David Bentley Hart and the responses by John E. McIntyre (here and here) and Robert Lane Greene. Things are just settling down, and my intent here is not to throw more fuel on the fire but to draw attention to what I believe is a problematic trend in the rhetoric of prescriptivism. Hart claims that his piece is just some light-hearted humor, but as McIntyre, Greene, and others have complained, it doesn’t really feel like humor.

That is, while it is clear that Hart doesn’t really believe that the acceptance of solecisms leads to the acceptance of cannibalism, it seems that he really does believe that solecisms are a serious problem. Indeed, Hart says, “Nothing less than the future of civilization itself is at issue—honestly—and I am merely doing my part to stave off the advent of an age of barbarism.” If it’s all a joke, as he says, then this statement is somewhat less than honest. And as at least one person says in the comments, Hart’s style is close to self-parody. (As an intellectual exercise, just try to imagine what a real parody would look like.) Perhaps I’m just being thick, but I can only see two reasons for such a style: first, it’s a genuine parody designed to show just how ridiculous the peevers are, or second, it’s a cover for genuine peeving.

I’ve seen this same phenomenon at work in the writings of Lynne Truss, Martha Brockenbrough, and others. They make some ridiculously over-the-top statements about the degenerate state of language today, they get called on it, and then they or their supporters put up the unassailable defense: It’s just a joke, see? Geez, lighten up! Also, you’re kind of a dimwit for not getting it.

That is, not only is it a perfect defense for real peeving, but it’s a booby-trap for anyone who dares to criticize the peever—by refusing to play the game, they put themselves firmly in the out group, while the peeve-fest typically continues unabated. But as Arnold Zwicky once noted, the “dead-serious advocacy of what [they take] to be the standard rules of English . . . makes the just-kidding defense of the enterprise ring hollow.” But I think it does more than just that: I think it undermines the credibility of prescriptivism in general. Joking or not, the rhetoric is polarizing and admits of no criticism. It reinforces the notion that “Discussion is not part of the agenda of the prescriptive grammarian.”[1] It makes me dislike prescriptivism in general, even though I actually agree with several of Hart’s points of usage.

As I said above, the point of this post was not to reignite a dying debate between Hart and his critics, but to draw attention to what I think is a serious problem surrounding the whole issue. In other words, I may not be worried about the state of the language, but I certainly am worried about the state of the language debate.

[1] James Milroy, “The Consequences of Standardisation in Descriptive Linguistics,” in Standard English: The Widening Debate, ed. Tony Bex and Richard J. Watts (New York: Routledge, 1999), 21.

Who, That, and the Nature of Bad Rules

A couple of weeks ago the venerable John E. McIntyre blogged about a familiar prescriptive bugbear, the question of that versus who(m). It all started on the blog of the Society for the Promotion of Good Grammar, where a college English professor named Jacoby wrote in to share his justification for the rule, which is that you should avoid using that with human referents because it depersonalizes them. He calls this justification “quite profound,” which is probably a good sign that it’s not. Mr. McIntyre, ever the reasonable fellow, tried to inject some facts into the conversation, but apparently to no avail.

What I find most interesting about the whole discussion, however, is not the argument over whether that can be used with human referents, but what the whole argument says about prescriptivism and the way we talk about language and rules. (Indeed, the subject has already been covered very well by Gabe Doyle at Motivated Grammar, who made some interesting discoveries about relative pronoun usage that may indicate some cognitive motivation.) Typically, the person putting forth the rule assumes a priori that the rule is valid, and thereafter it seems that no amount of evidence or argument can change their mind. The entire discussion at the SPOGG blog proceeds without any real attempts to address Mr. McIntyre’s points, and it ends with the SPOGG correspondent who originally kicked off the discussion sullenly taking his football and going home.

James Milroy, an emeritus professor of sociolinguistics at the University of Michigan, once wrote that all rationalizations for prescriptions are post hoc; that is, the rules are taken to be true, and the justifications come afterward and really only serve to give the rule the illusion of validity:

Indeed all prescriptive arguments about correctness that depend on intra-linguistic factors are post-hoc rationalizations. . . . But an intra-linguistic rationalization is not the reason why some usages are believed to be wrong. The reason is that it is simply common sense: everybody knows it, it is part of the culture to know it, and you are an outsider if you think otherwise: you are not a participant in the common culture, and so your views can be dismissed. To this extent, linguists who state that I seen it is not ungrammatical are placing themselves outside the common culture.[1]

This may sound like a rather harsh description of prescriptivism, but I think there’s a lot of truth to it—especially the part about linguists unwittingly setting themselves outside of the culture. Linguists try to play the part of the boy who pointed out that the emperor has no clothes, but instead of breaking the illusion they are at best treated as suspect for not playing along. But the point linguists are trying to make isn’t that there’s no such thing as right or wrong in language (though there are some on the fringe who would make such claims)—they’re simply trying to point out that, quite frequently, the justifications are phony and attention to facts and evidence is mostly nonexistent. There are no real axioms or first principles from which prescriptive rules follow—at least, there don’t seem to be any that are consistently applied and followed to their logical conclusions. Instead the canon of prescriptions is a hodgepodge of style and usage opinions that have been passed down and are generally assumed to have the force of law. There are all kinds of unexamined assumptions packaged into prescriptions and their justifications, such as the following from Professor Jacoby:

  • Our society has a tendency to depersonalize people.
  • Depersonalizing people is bad.
  • Using that as a relative pronoun with human referents depersonalizes them.

There are probably more, but that covers the bases. Note that even if we agree that our society depersonalizes people and that this is a bad thing, it’s still quite a leap from this to the claim that that depersonalizes people. But, as Milroy argued, it’s not really about the justification. It’s about having a justification. You can go on until you’re blue in the face about the history of English relative pronoun usage (for instance, that demonstrative pronouns like that were the only option in Old English, and that this has changed several times over the last millennium and a half, and that it’s only recently that people have begun to claim that that with people is wrong) or about usage in other, related languages (such as German, which uses demonstrative pronouns as relative pronouns), but it won’t make any difference; at best, the person arguing for the rule will superficially soften their stance and make some bad analogies to fashion or ethics, saying that while it might not be a rule, it’s still a good guideline, especially for novices. After all, novices need rules that are more black and white—they need to use training wheels for a while before they can ride unaided. Too bad we also never stop to ask whether we’re actually providing novices with training wheels or just putting sticks in their spokes.

Meanwhile, prescriptivists frequently dismiss all evidence for one reason or another: It’s well established in the history of usage? Well, that just shows that people have always made mistakes. It’s even used by greats like Chaucer, Shakespeare, and other literary giants? Hey, even the greats make mistakes. Either that or they mastered the rules and thus know when it’s okay to break them. People today overwhelmingly break the rule? Well, that just shows how dire the situation is. You literally can’t win, because, as Geoffrey Pullum puts it, “nothing is relevant.”

So if most prescriptions are based on unexamined assumptions and post hoc rationalizations, where does that leave things? Do we throw it all out because it’s a charade? That seems rather extreme. There will always be rules, because that’s simply the nature of people. The question is, how do we establish which rules are valid, and how do we teach this to students and practice it as writers and editors? Honestly, I don’t know, but I know that it involves real research and a willingness to critically evaluate not only the rules but also the assumptions that underlie them. We have to stop having a knee-jerk reaction against linguistic methods and allow them to inform our understanding. And linguists need to learn that rules are not inherently bad. Indeed, as John Algeo put it, “The problem is not that some of us have prescribed (we have all done so and continue to do so in one way or another); the trouble is that some of us have prescribed such nonsense.”[2]

[1] James Milroy, “Language Ideologies and the Consequences of Standardization,” Journal of Sociolinguistics 5, no. 4 (November 2001): 536.
[2] John Algeo, “Linguistic Marys, Linguistic Marthas: The Scope of Language Study,” College English 31, no. 3 (December 1969): 276.