Instead of European American, people use Caucasian, which seems to invest European descent with an objective scientific standing. That’s specious, of course—if African American is a racial category masquerading as a cultural one, Caucasian is a cultural category in racial drag. But it can be a useful word to have around when you want to make a racial contrast without risking a charge of vulgar bias. When it was argued that the ballots used in some California counties discriminated against minority voters, Robert Novak asked on CNN, “Does that mean that the minority groups are not as able to use these ballots as the Caucasians?” The remark would have sounded a lot more confrontational if he had put it in terms of “whites.”
The scientific-sounding cachet of Caucasian made the word the natural choice for the name of a high school club. To white adolescents in the California suburbs, the old ethnic identifications are remote and attenuated; that California high school student who wanted to start a Caucasian club described her own ancestry as American Indian, Hispanic, Dutch, German, Italian, and Irish—by that point, you’re only talking about vague family lore. And mere whiteness is apt to strike the adolescents as boring—as the student put it on CNN, “Well, you ask the kids, what are you? They’ll say white, but white, that’s not a race.”
To be sure, that explanation gets it backwards—if any of these is a racial category, it’s white, not Caucasian. But confusion is endemic in the American language of race. We’re always struggling to find racial labels that answer the question “what are you” with even-handed essences, but the labels keep catching their sleeves on disparities in the way we think about race itself. Racial classifications are like irregular verbs—they may be inconsistent, but they run too deep to be eliminated by decree.
Near Myths
Fresh Air Commentary, July 31, 2002
I was struck by the difference in the words that the Bushes père and fils used in their tributes to Ted Williams. The elder Bush called Williams “a great hero,” whereas his son used the phrase “a baseball legend.” Of course it’s understandable that the two would think of Williams differently. Williams was a personal hero to Bush senior, himself a talented ballplayer in New England and a wartime Navy pilot. But you wouldn’t expect Williams’s name to have had the same resonance for Bush II, who was a Texas schoolboy when Williams was finishing his career, and whose relationship to both baseball and combat has been exclusively managerial.
But there’s a generational difference between those words, too. In the press tributes to Williams, legend outnumbered hero by better than five to one—and when the press did call Williams a hero, the stories generally added something about his service as a Marine pilot, as if his baseball achievements alone didn’t entirely justify the label. Of course, some of that reflects a post-September 11 self-consciousness about using the word hero. But legend was nudging hero aside well before then. If you look at the way the press described players like Babe Ruth and Lou Gehrig between 1980 and 2000, you find that the use of hero declined by 50 percent, while the use of legend doubled.
That isn’t to say that we’ve entirely left off expecting sports stars to be heroic—at least we seem to hold Barry Bonds accountable for imperfections of character that we’re willing to overlook in Sean Penn or Mick Jagger. But modern fans are much too hip and too knowing to put up with the hero-worshipping panegyric of pre-World War II sportswriters like Grantland Rice. People are more comfortable with the flip, self-referential banter of the talk shows on ESPN and Fox Sports Network, where the operant slogan seems to be “We are not impressed.”
There’s a sign of that shift in the disappearance of those heroic titles that the press used to bestow on players. I’m not thinking of simple nicknames like Dizzy, Babe, or Yogi—there are still plenty of those around. But the modern media don’t go in much for Homeric epithets like the Sultan of Swat, the Splendid Splinter, or the Yankee Clipper—and when they do, the titles usually have a postmodern edge to them. It’s hard to imagine Grantland Rice immortalizing any of the 1927 Yankees with a label like the Big Unit.
For that matter, Grantland Rice would never have described any player as a legend, either, if only because back when he was writing, the word could only refer to a story from popular folklore, not the person who inspired it. The new meaning of the word originated with the phrase “a legend in one’s own time,” which was first used by Lytton Strachey to describe Florence Nightingale. But it wasn’t until the 1970s or so that people began to use legend all by itself to refer to someone whose celebrity was especially long-lived.
That shift from hero to legend is the media’s backhand way of celebrating their own power—the measure of someone’s greatness now is not so much what he did as how long people kept talking about him. There can be unsung heroes, after all, but there are no unsung legends. And in fact the modern use of legend stands the traditional meaning of the word on its head. We never use the word to refer to someone whose fame is rooted in a genuine oral tradition—we don’t talk about “aeronautical legend Icarus” or “transportation legend Casey Jones.” On the contrary, the people we describe as legends now are the furthest thing from legendary in the literal sense of the word—they’re people who have been the focus of media attention throughout their careers, the way Williams was. It’s the media’s way of investing their own creations with folkloric status.
It’s true that there is a genuinely legendary aspect to Ted Williams’s fame. At least it’s certain that people would still be talking about him even if there had been no newspapers, radio, or TV around to document his accomplishments—if he’d played in the early era of the game, or if he’d been born with the wrong skin color to play major-league ball, like the literally legendary greats of the Negro Leagues. But legend has a leveling effect—it makes no distinction between people whose deeds are inherently memorable and celebrities who are pure media creations. Nose around in the press and you’ll run into references to television legend Ed McMahon, entertainment legend Charo, modeling legend Twiggy, and pop legend Leo Sayer. It seems unfair to use the same label for Ted Williams and Leo Sayer—after all, the one had 2,654 career hits, and the other only had about two.
That semantic deflation is inevitable when we make celebrity the measure of achievement, particularly when celebrity is a commodity that’s so easy to coin. In an age when everybody is famous for fifteen minutes, a legend is someone who has been in the limelight for half an hour. When the Yankees finished a spring training facility in Tampa, Florida, a couple of years ago, the team christened it Legends Field. If the Yankees were renaming it now I expect they’d call it Icon Field or Avatar Alley. In fact they have my permission to rename it Heroes Field, just as soon as someone on the roster hits for a .400 season.
Lamenting Some Enforced Chastity
Fresh Air Commentary, May 2, 2002
These are hard times for chastity. As Pope John Paul II pointed out in his remarks recently: “The life of chastity . . . confutes the conventional wisdom of the world.” And Eugene Clark, the rector of Saint Patrick’s Cathedral in New York City, pointed to the difficulties that priests had in maintaining their vows in a “sex-saturated” society, where Americans are bombarded by images of “liberated sex all day long.”
But chastity was problematic for both the Church and society at large well before Hugh Hefner and Larry Flynt filed their first business plans. You can see that in the declining use of the words chaste and chastity themselves. In modern times those words tend to be used chiefly in a metaphorical way—you see a lot more references to chaste architecture or a chaste prose style than to chaste men and women.
The fact is that we moderns are uncomfortable about using words that associate sexual continence with spiritual purity. We’ve lost sight of the connection that used to be implicit in words like chasten and chastise, which originally had the sense of “make chaste,” or “purify.” (For that matter, the word castrate comes from the same Latin root—it’s just a more draconian way of getting at the same end.)
Not surprisingly, the eclipse of chastity has blurred the original meaning of the word. It’s true that chastity has always involved abstaining from illicit sex. But chastity wasn’t the same thing as virginity: You could become chaste even if you had already had sexual experience. As Saint Augustine put it in a famous prayer, “Lord, give me chastity and continence, but not now.” For that matter, chastity didn’t necessarily rule out sex within marriage, so long as it was free of prurience or concupiscence. “Moor, she was chaste”—that’s how Aemilia tells Othello that his wife, Desdemona, was innocent of the infidelities that he had imagined.
By the eighteenth century, though, chastity was regarded as a minor virtue, and one associated chiefly with women, as the secular double standard came into its own. Samuel Johnson wrote of one vain, insipid country wife that she had no virtue but chastity. And over the last hundred years, people have pretty much bailed out on using the word chaste to describe sexual continence—instead they’ve appropriated the word celibate, which originally meant only unmarried.
But some people have been trying to revive the word chastity. On the Web, it comes up a lot in the sites for organizations promoting sexual abstinence, a movement that’s on a roll right now. Recently, for example, a House committee authorized an additional $50 million for “abstinence only” sex education, this in addition to the half-billion dollars in state and federal funds that the programs have already received. The programs encourage teens to swear off sex until marriage and provide no information about birth control, abortion, or gay and lesbian sex, on the grounds that such information might put ideas into adolescents’ heads. The sites of the abstinence-only groups warn adolescents about the dangers of condoms and offer them suggestions as to how to restrain their sexual urges—one provides helpful links to the Amazon.com pages where they can order Scrabble and Trivial Pursuit.
But trying to resuscitate an unfashionable word is like trying to revive an old folk dance or costume—people invariably get the details wrong. The abstinence-only movement tends to talk about chastity as if it were merely the equivalent of virginity. The movement asks adolescents to sign vows pledging to remain “chaste until marriage”—there’s no sense that chastity might be a state that you could maintain even after you’ve entered a committed sexual relationship, the way Desdemona did. And they often talk about the loss of chastity as an irrevocable step. As one group puts it: “Chastity is a lifestyle. One date may be too late.” Saint Augustine would have cut teens more slack than that.
As it happens, though, the abstinence-only organizations aren’t the only ones who are contributing to the comeback of the word chastity. There’s an odd mirror of their preoccupations in the Web sites put up by people who are into chastity as a source of sexual stimulation, by means of devices that prevent any kind of sexual activity until the wearer is released by the keyholder. For women, the sites offer new variations on the chastity belt (which by the way was actually invented during the Italian Renaissance, not the Crusades, and which was probably very rarely used until its rediscovery by modern fetishists). For men there are a variety of cuffs, sheaths, and cages that achieve an analogous effect by what appear to be calculatedly uncomfortable means. Curiously, enthusiasts use the word chastisement to describe the process of putting your associate into one of these contrivances. It isn’t the normal use of the word, but it does have etymology on its side.
Given their druthers, I expect most people would prefer to curb their sexual urges with a brisk game of Scrabble. Still, the chastisement sites do capture something of the old sense of chastity as an austere spiritual practice. And unlike the abstinence-only movement, they share something else of the Roman Catholic Church’s view of chastity: They don’t pretend everyone has the vocation for it.
Stolen Words
Fresh Air Commentary, March 3, 2002
The striking thing about plagiarism is how rarely anybody has anything original to say about it. Including that, let me hasten to add. Or at least that’s pretty much how it seems as you look back over the history of literary scandals—the indignant accusations, the protestations of innocent error, and above all the puzzling gratuitousness of the crime. I’m not talking about a student who goes on the Internet to buy a term paper for a course he hasn’t attended all semester. That may be reprehensible, but it isn’t mysterious. But why do competent and successful writers stoop to copying unattributed passages from other published works—often works that are well enough known so that detection is pretty likely? And why are the passages they steal so often banal and unnecessary?
The answer’s different for different writers. Why did Samuel Taylor Coleridge appropriate numerous passages of German philosophy in his Biographia Literaria, most of them digressions that add very little to the work? Biographers have suggested that Coleridge did it because he was blocked, or depressed, or had a self-destructive impulse. But you could hardly claim that Stephen Ambrose suffered from writer’s block, and there’s no evidence that he’s self-destructive. In part his plagiarism seemed to be simply a sign of arrogance, a sense that his vast popular audience would neither know nor care if he stole the words of some lesser-known academic historian.
But in Ambrose’s case, the deficiency was as much aesthetic as moral. Take this passage from Ambrose’s best-seller The Wild Blue, one of many that came almost verbatim from a book by the historian Thomas Childers: “Up, up, up he went, until he got above the clouds . . . B-24’s, glittering like mica, were popping up out of the clouds . . . .” You wonder why an author would want to pass that bit off as his own—not just purple writing, but somebody else’s shade of purple. It’s the sign of a writer who’s deaf to his own voice—and in fact, of a writer who doesn’t really care whether he has a voice at all.
In Doris Kearns Goodwin’s case, though, the plagiarisms are more puzzling, not just because she’s a better writer than Ambrose, but because the passages she stole are so pedestrian. Here’s one of a number of sentences from Goodwin’s 1987 book The Fitzgeralds and the Kennedys that were lifted more-or-less verbatim from Lynne McTaggart’s 1983 biography of Kathleen Kennedy: “Hardly a day passed without a newspaper photograph of little Teddy taking a snapshot with his camera held upside down, or the five Kennedy children lined up on a train or bus.”
Why would anybody bother to steal such an ordinary sentence? Goodwin’s explanation was that the borrowing wasn’t intentional—she said she had taken notes in longhand when she was preparing the book, then lost track of which bits she had written herself. That appeal to muddled note-taking is a familiar motif in these affairs. It’s the same explanation Alex Haley gave in 1976 when it turned out that his book Roots included numerous passages from Harold Courlander’s novel The African. And Coleridge’s nephew Henry offered the same defense for his uncle’s literary derelictions—he explained that Coleridge was a very sloppy note-taker, who mixed his own thoughts with “the thoughts of others, which he later failed to recognize as such.”
But this sort of explanation is hard to credit, particularly in Goodwin’s case. Suppose you were taking notes in longhand from a biography and you ran across a sentence like “Hardly a day passed without a newspaper photograph of little Teddy taking a snapshot with his camera held upside down, or the five Kennedy children lined up on a train or bus.” You wouldn’t write that down word-for-word; you’d put down something like “p. 25 Frequent pix of Ted w/ upside-down Brownie.” And it strains credulity to imagine that Goodwin could have written down forty or fifty such sentences verbatim, then forgotten that they weren’t her own.
But then, why did Goodwin do it? Actually, my own suspicion is that it may very well have been inadvertent, at least on her part—I’ll bet that one of her research assistants copied some of the sentences from McTaggart’s book in the course of summarizing it, and that Goodwin just dumped the assistant’s summaries into her book without checking them against the source. And if she didn’t ’fess up when the plagiarism was discovered, it might be because she didn’t want to admit to not having read the sources herself, or what’s worse, to having cribbed her prose from a research assistant.
Whatever the truth is, though, we can’t let Goodwin off the hook. It may be that authors like Goodwin and Ambrose have become less like writers and more like managers coordinating the activity of their staffs. But as other recent events have reminded us, CEOs are still responsible for everything that happens on their watch. The American Historical Association dropped the clause in its statement on plagiarism that said that there had to be an intent to deceive—as an official of the association explained, “It’s plagiarism whether you intended to do so or not.” And we haven’t yet gotten to the point where we’ll allow our historians to claim credit for the words of paid researchers or ghost-writers or speechwriters, the way we allow Jack Welch or George W. Bush to do. In the end, after all, that’s all an author is, somebody whose words we pay to read.
Beating Their Brows
Highbrow and lowbrow were coined at the end of the nineteenth century as nods to the popular belief that physiognomy was a sure guide to intellectual capacity. The words themselves were lowbrow inventions, which novelists tended to put into the mouths of roughnecks or uneducated characters—Sinclair Lewis’s Babbitt describes a dinner as “a real sure-enough highbrow affair.”
But middlebrow had a more genteel parentage when it was coined a generation later, as the elite’s way of disparaging the tastes of middle-class consumers of culture, in their earnest efforts at self-betterment. As a 1925 article in Punch put it, middlebrows were “people who are hoping that someday they will get used to the stuff they ought to like.”