Arrant Pedantry


Language, Logic, and Correctness

In “Why Descriptivists Are Usage Liberals”, I said that there are some logical problems with declaring something to be right or wrong based on evidence. A while back I explored this problem in a piece titled “What Makes It Right?” over on Visual Thesaurus.

The terms prescriptive and descriptive were borrowed from philosophy, where they are used to talk about ethics, and the tension between these two approaches is reflected in language debates today. The questions we have today about correct usage are essentially the same questions philosophers have been debating since the days of Socrates and Plato: what is right, and how do we know?

As I said on Visual Thesaurus, all attempts to answer these questions run into a fundamental logical problem: just because something is doesn’t mean it ought to be. Most people are uncomfortable with the idea of moral relativism and believe at some level that there must be some kind of objective truth. Unfortunately, it’s not entirely clear just where we find this truth or how objective it really is, but we at least operate under the convenient assumption that it exists.

But things get even murkier when we try to apply this same assumption to language. While we may feel safe saying that murder is wrong and would still be wrong even if a significant portion of the population committed murder, we can’t safely make similar arguments about language. Consider the word bird. In Old English, the form of English spoken from about 500 AD to about 1100 AD, the word was brid. Bird began as a dialectal variant that spread and eventually supplanted brid as the standard form by about 1600. Have we all been saying this word wrong for the last four hundred years or so? Is saying bird just as wrong as pronouncing nuclear as nucular?

No, of course not. Even if it had been considered an error once upon a time, it’s not an error anymore. Its widespread use in Standard English has made it standard, while brid would now be considered an error (if someone were to actually use it). There is no objectively correct form of the word that exists independent of its use. That is, there is no platonic form of the language, no linguistic Good to which a grammarian-king can look for guidance in guarding the city.

This is why linguistics is at its core an empirical endeavor. Linguists concern themselves with investigating linguistic facts, not with making value judgements about what should be considered correct or incorrect. As I’ve said before, there are no first principles from which we can determine what’s right and wrong. Take, for example, the argument that you should use the nominative form of pronouns after a copula verb. Thus you should say It is I rather than It is me. But this argument assumes as prior the premise that copula verbs work this way and then deduces that anything that doesn’t work this way is wrong. Where would such a putative rule come from, and how do we know it’s valid?

Linguists often try to highlight the problems with such assumptions by pointing out, for example, that French requires an object pronoun after the copula (in French you say c’est moi [it’s me], not c’est je [it’s I]) or that English speakers, including renowned writers, have long used object forms in this position. That is, there is no reason to suppose that this rule has to exist, because there are clear counterexamples. But then, as I said before, some linguists leave the realm of strict logic and argue that if everyone says it’s me, then it must be correct.

Some people then counter by calling this argument fallacious, and strictly speaking, it is. Mededitor has called this the Jane Austen fallacy (if Jane Austen or some other notable past writer has done it, then it must be okay), and one commenter named Kevin S. has made similar arguments in the comments on Kory Stamper’s blog, Harmless Drudgery.

There, Kevin S. attacked Ms. Stamper for noting that using lay in place of lie dates at least to the days of Chaucer, that it is very common, and that it “hasn’t managed to destroy civilization yet.” These are all objective facts, yet Kevin S. must have assumed that Ms. Stamper was arguing that if it’s old and common, it must be correct. In fact, she acknowledged that it is nonstandard and didn’t try to argue that it wasn’t or shouldn’t be. But Kevin S. pointed out a few fallacies in the argument that he assumed that Ms. Stamper was making: an appeal to authority (if Chaucer did it, it must be okay), the “OED fallacy” (if it has been used that way in the past, it must be correct), and the naturalistic fallacy, which is deriving an ought from an is (lay for lie is common; therefore it ought to be acceptable).

And as much as I hate to say it, technically, Kevin S. is right. Even though he was responding to an argument that hadn’t been made, linguists and lexicographers do frequently make such arguments, and they are in fact fallacies. (I’m sure I’ve made such arguments myself.) Technically, any argument that something should be considered correct or incorrect isn’t a logical argument but a persuasive one. Again, this goes back to the basic difference between descriptivism and prescriptivism. We can make statements about the way English appears to work, but making statements about the way English should work or the way we think people should feel about it is another matter.

It’s not really clear what Kevin S.’s point was, though, because he seemed to be most bothered by Ms. Stamper’s supposed support of some sort of flabby linguistic relativism. But his own implied argument collapses in a heap of fallacies itself. Just as we can’t necessarily call something correct just because it occurred in history or because it’s widespread, we can’t necessarily call something incorrect just because someone invented a rule saying so.

I could invent a rule saying that you shouldn’t ever use the word sofa because we already have the perfectly good word couch, but you would probably roll your eyes and say that’s stupid because there’s nothing wrong with the word sofa. Yet we give heed to a whole bunch of similarly arbitrary rules invented two or three hundred years ago. Why? Technically, they’re no more valid or logically sound than my rule.

So if there really is such a thing as correctness in language, and if any argument about what should be considered correct or incorrect is technically a logical fallacy, then how can we arrive at any sort of understanding of, let alone agreement on, what’s correct?

This fundamental inability to argue logically about language is a serious problem, and it’s one that nobody has managed to solve or, in my opinion, ever will completely solve. This is why the war of the scriptivists rages on with no end in sight. We see the logical fallacies in our opponents’ arguments and the flawed assumptions underlying them, but we don’t acknowledge—or sometimes even see—the problems with our own. Even if we did, what could we do about them?

My best attempt at an answer is that both sides simply have to learn from each other. Language is a democracy, true, but, just like the American government, it is not a pure democracy. Some people—including editors, writers, English teachers, and usage commentators—have a disproportionate amount of influence. Their opinions carry more weight because people care what they think.

This may be inherently elitist, but it is not necessarily a bad thing. We naturally trust the opinions of those who know the most about a subject. If your car won’t start, you take it to a mechanic. If your tooth hurts, you go to the dentist. If your writing has problems, you ask an editor.

Granted, using lay for lie is not bad in the same sense that a dead starter motor or an abscessed tooth is bad: it’s a problem only in the sense that some judge it to be wrong. Using lay for lie is perfectly comprehensible, and it doesn’t violate some basic rule of English grammar such as word order. Furthermore, it won’t destroy the language. Just as we have pairs like lay and lie or sit and set, we used to have two words for hang, but nobody claims that we’ve lost a valuable distinction here by having one word for both transitive and intransitive uses.

Prescriptivists want you to know that people will judge you for your words (and—let’s be honest—usually they’re the ones doing the judging), and descriptivists want you to soften those judgements or even negate them by injecting them with a healthy dose of facts. That is, there are two potential fixes for the problem of using words or constructions that will cause people to judge you: stop using that word or construction, or get people to stop judging you and others for that use.

In reality, we all use both approaches, and, more importantly, we need both approaches. Even most dyed-in-the-wool prescriptivists will tell you that the rule banning split infinitives is bogus, and even most liberal descriptivists will acknowledge that if you want to be taken seriously, you need to use Standard English and avoid major errors. Problems occur when you take a completely one-sided approach, insisting either that something is an error even if almost everyone does it or that something isn’t an error even though almost everyone rejects it. In other words, good usage advice has to consider not only the facts of usage but speakers’ opinions about usage.

For instance, you can recognize that irregardless is a word, and you can even argue that there’s nothing technically wrong with it because nobody cares that the verbs bone and debone mean the same thing, but it would be irresponsible not to mention that the word is widely considered an error in educated speech and writing. Remember that words and constructions are not inherently correct or incorrect and that mere use does not necessarily make something correct; correctness is a judgement made by speakers of the language. This means that, paradoxically, something can be in widespread use even among educated speakers and can still be considered an error.

This also means that on some disputed items, there may never be anything approaching consensus. While the facts of usage may be indisputable, opinions may still be divided. Thus it’s not always easy or even possible to label something as simply correct or incorrect. Even if language is a democracy, there is no simple majority rule, no up-or-down vote to determine whether something is correct. Something may be only marginally acceptable or correct only in certain situations or according to certain people.

But as in a democracy, it is important for people to be informed before metaphorically casting their vote. Bryan Garner argues in his Modern American Usage that what people want in language advice is authority, and he’s certainly willing to give it to you. But I think what people really need is information. For example, you can state authoritatively that regardless of past or present usage, singular they is a grammatical error and always will be, but this is really an argument, not a statement of fact. And like all arguments, it should be supported with evidence. An argument based solely or primarily on one author’s opinion—or even on many people’s opinions—will always be a weaker argument than one that considers both facts and opinion.

This doesn’t mean that you have to accept every usage that’s supported by evidence, nor does it mean that all evidence is created equal. We’re all human, we all still have opinions, and sometimes those opinions are in defiance of facts. For example, between you and I may be common even in educated speech, but I will probably never accept it, let alone like it. But I should not pretend that my opinion is fact, that my arguments are logically foolproof, or that I have any special authority to declare it wrong. I think the linguist Thomas Pyles said it best:

Too many of us . . . would seem to believe in an ideal English language, God-given instead of shaped and molded by man, somewhere off in a sort of linguistic stratosphere—a language which nobody actually speaks or writes but toward whose ineffable standards all should aspire. Some of us, however, have in our worst moments suspected that writers of handbooks of so-called “standard English usage” really know no more about what the English language ought to be than those who use it effectively and sometimes beautifully. In truth, I long ago arrived at such a conclusion: frankly, I do not believe that anyone knows what the language ought to be. What most of the authors of handbooks do know is what they want English to be, which does not interest me in the least except as an indication of the love of some professors for absolute and final authority.1

In usage, as in so many other things, you have to learn to live with uncertainty.

Notes

1. “Linguistics and Pedagogy: The Need for Conciliation,” in Selected Essays on English Usage, ed. John Algeo (Gainesville: University Presses of Florida, 1979), 169–70.


Names, Spelling, and Style

A couple of weeks ago, I had a conversation with Mededitor on Twitter about name spelling and style. It started with a tweet from Grammar Girl linking to an old post of hers on whether you need a comma before “Jr.” She notes that most style guides now leave out the commas. Mededitor opined that the owners of the names, not editors, should get to decide whether or not to use commas. In this follow-up post, Grammar Girl seems to come to the same conclusion:

However, Chicago also states that writers should make a reasonable effort to spell a name the way a person spells it himself or herself, and I presume that also applies to punctuation. In other words, you’re free to insist on the comma before “Jr.” in your own name.

I can see the appeal in this argument, but I have to disagree. As I argued on Twitter and in a comment on that second post, catering to authors’ preferences for commas around “Jr.” creates inconsistency in the text. And it wouldn’t just be authors themselves that we’d have to cater to; what about people mentioned or cited in the text? Should editors spend time tracking down every Jr. or III whose name appears in writing to ask whether they prefer to have their suffixes set off with commas?

Doing so could take enormous amounts of time, and in the end there’s no benefit to the reader (and possibly a detriment in the form of distracting inconsistency), only to some authors’ egos. Further, we’d have to create a style anyway and apply it to all those who had no preference or whose preferences could not be identified. Why pick an arbitrary style for some names and not others? Either the preference matters or it doesn’t. And if it doesn’t matter, that’s what a style choice is for: to save us from wasting our time making countless minor decisions.

But I have a further reason for not wishing to defer to authors’ preferences. As I argued in that same comment, punctuation is not the same thing as spelling. There’s one right way to spell my name: Jonathon Owen. If you write my name Jonathan Owens, you’ve spelled it wrong. There’s no principled reason for spelling it one way or another; that’s just the way it is. But punctuation marks aren’t really part of someone’s name; they’re merely stylistic elements placed between or around the parts of people’s names to separate, abbreviate, or join them.

Punctuation around or in names, however, is often principled, though the principles of punctuation are prone to change over time. “Jr.” was traditionally set off by commas not because the commas were officially part of anyone’s name, but because it was considered parenthetic. As punctuation has become more streamlined, the requirement to set off this particular parenthetic with commas has been dropped by most style guides. And to be blunt, I think the desire of some authors to hang on to the commas is driven mostly by a desire to stick with whatever style they grew up with. It’s not much different from some people’s resistance to switching to one space between sentences.

In the course of the conversation with Mededitor, another point came up: periods after middle initials that don’t stand for anything. Some people insist that you shouldn’t use a period in those cases, because the period signals that the letter is an abbreviation, but The Chicago Manual of Style recommends using a period in all cases regardless. Again, it’s difficult for editors and proofreaders to check and enforce proper punctuation after an initial, and the result is a style that looks inconsistent to the readers. And again, individuals’ preferences are not always clear. Even one of the most famous individuals with only a middle initial, Harry S. Truman, wrote his name inconsistently, as the Harry S. Truman Library points out.

Yes, it’s true that editors can add a list of names to their style sheets to save some time, but checking every single name with an initial against a style sheet—and then looking them up if they’re not on the sheet—still takes time. And what’s the result? Names that occasionally look like they’re simply missing a period after the initial, because the reader will generally have no idea that there’s a reason behind the omission. The result is an error in most readers’ eyes, except for those few in the know.

The fundamental problem with making exceptions to general rules is that readers often have no idea that there are principled reasons behind the exceptions. If they see an apparent inconsistency and can’t quickly figure out a reason for it, then they’ve been needlessly distracted. Does the supposed good done by catering to some individuals’ preference for commas or periods around their names outweigh the harm done by presenting readers with the appearance of sloppiness?

I don’t think it does, and this is why I agree with Chicago. I think it’s best—both for editors and for readers—to pick a rule and stick with it.

Update: Mededitor posted a response here, and I want to respond and clarify some points I made here. In that post he says, “I argue for the traditional rule, namely: ‘Make a reasonable attempt to accommodate the conventions by which people spell their own names.’” I want to make it clear that I’m also arguing for the traditional rule. I’m not saying that editors should not worry about the spelling of names. I simply disagree that commas and periods should be considered spelling.

With the exception of apostrophes and hyphens, punctuation is a matter of style, not spelling. The comma in Salt Lake City, Utah is not part of the spelling of the place name; it simply separates the two elements of the name, just as the now-deprecated comma before “Jr.” separates it from the given and family names. Note that the commas disappear if you use one element by itself, and other commas can appear in other contexts, such as when a name is inverted: “Jonathon Owen” becomes “Owen, Jonathon” in an index. This comma is also not part of the spelling of my name; it’s just a piece of punctuation. It’s a style choice.

And those style choices vary and change over time. In the UK, it’s standard practice to omit periods from abbreviations. Thus I’d be Jonathon R Owen in British style. The period in American style is not an element of my middle name that appears when it’s shortened—it’s a style choice that communicates something about my name. But the important thing is that it’s a choice. You can’t choose how to spell my name (though plenty of people have told me that I spell it wrong). But you can choose how to punctuate it to fit a given style.
