Fresh Air commentary, November 17, 2001
Middlebrow had its cultural moment just after World War II. In 1949, Harper’s editor Russell Lynes wrote an influential article called “Highbrow-Middlebrow-Lowbrow,” and a few years later, Dwight Macdonald wrote a famous polemic called “Masscult and Midcult,” a vituperative attack on middlebrow culture. That was all it took to set off a kind of national parlor game, as critics set about putting everything into its appropriate pigeonhole. Highbrow was Ezra Pound, the Berlin Philharmonic, and John Dewey; lowbrow was Mickey Spillane, Guy Lombardo, and Walter Winchell; middlebrow was Edna Ferber, Andre Kostelanetz, and Walter Lippmann.
The distinctions were crude and simplistic, the way they always are in these listing exercises that capture the public fancy every so often—U and non-U, camp and kitsch, modern and postmodern, wired and tired. But there were serious issues at stake. For Macdonald, middlebrow was the safe, smug enemy of great art—in his words, it was “the tide line where the decisive struggles for survival take place between higher and lower organisms.” That view appealed to many intellectuals on both the right and the left. They made common cause against the tide of middlebrow art, in America’s last great eructation of cultural snobbery.
Those were the echoes that Jonathan Franzen evoked when he demurred from appearing on Oprah for fear he might compromise his status as a writer in what he called “the high-art literary tradition.” Franzen later apologized, but the word “high” still stuck in some people’s craw. “High,” “middle,” “low”—those old hierarchies sound out of date these days. We may still be interested in distinctions of taste and quality, but we aren’t comfortable about arranging them vertically anymore. The British art critic Robert Hewison once said that when he was growing up, culture was organized like a pyramid, but that somewhere along the way it got tipped over on its side.
One reason for this is that patterns of cultural consumption aren’t as closely linked to class as they used to be. Back in 1953, the critic Clement Greenberg could write, “Middlebrow . . . is born . . . out of the desire of newly ascendant social classes to rise culturally.” That might have been true in an age when people were proudly lining their shelves with the latest selections of the Britannica great books or the Literary Guild. But the people who tune in to Oprah’s book club seem to be more interested in personal growth than social advancement. Nowadays, after all, literary discernment doesn’t give you much of a leg up socially. There’s a passage in Martin Amis’s novel The Information where one of the characters observes how literary taste degrades as you walk up the aisle of an airplane. In economy people are reading Middlemarch, in business they’re reading John Grisham, and in first they’re just sleeping and eating caviar.
Then too, nowadays we all do our cultural shopping from the same outlets. The New Yorker writer John Seabrook argued in a recent book that the old distinctions of high, middle, and low have been superseded by a new amalgam that he calls Nobrow. Nobrow is the creation of the high-powered cultural marketing that gives us blockbuster museum shows, music megastores, and crossover bestsellers. You can hear that in the disappearance of that condescending phrase “mass culture” that mid-twentieth-century critics used to rail about—now it’s all “popular culture,” which suggests an event with festival seating.
The new cultural scene doesn’t lend itself very well to those old categories of brow. You could still appeal to the highbrow-middlebrow distinction if you’re talking about the difference between Elliott Carter and the Three Tenors. But how do you sort out the highbrows and the middlebrows in the world of pop, which is where American musical culture is really enacted now? Beck versus Billy Joel? Björk versus Sarah Brightman? What’s left of the notion of highbrow art, when there’s a Norman Rockwell show on exhibit right now at the Guggenheim? And however Franzen may think of himself, he isn’t really in a highbrow literary tradition, any more than novelists like Dave Eggers or Michael Chabon. If you want undiluted highbrow nowadays, you have to send abroad for it. W. G. Sebald, Milan Kundera, Umberto Eco—those are the writers who can still make you feel you should have paid more attention in eleventh grade.
You still hear the word middlebrow from time to time, but it sounds increasingly irrelevant and desperate. There was an article in the Wall Street Journal not long ago dismissing Christiane Amanpour as a middlebrow, which begged the question of what a highbrow war correspondent would sound like—would allusions to Thucydides help?
In fact those mid-century attacks on the middlebrow sound embarrassing now. The fulminations about the tradition of high art, the horror of middle-class vulgarity, the fixation on distinguishing between the great and the merely near-great—in retrospect, it all smacks of the same humorless piety and cultural insecurity that critics were assailing in the self-improving middle classes. When you come down to it, middlebrow was always a pretty middlebrow idea.
Prurient Interests
Fresh Air commentary, May 28, 2002
A couple of years ago I had a call from a lawyer working with a local public defender’s office. His client had been arrested when he was stopped on the street late at night with a length of chain attached to a padlock in his pocket. The arrest was pursuant to an old section of the penal code that makes it a felony to carry any of a long list of weapons including a “slungshot,” an archaic term that one 1951 dictionary defined as “a weapon used chiefly by criminals consisting of a weight attached to a flexible handle or strap.”
The attorney wanted my help in filing a motion arguing that the statute clashes with a basic principle of interpretation: The law ought to be written in words that give what lawyers call “fair and reasonable notice of the conduct prohibited.” That seemed fair enough to me; when you’re telling people what they can and can’t do, you ought to use language they can be expected to understand. Of course you could say that the wording of the penal code hardly matters in a case like this: People don’t ordinarily consult a statute book before they go out at night with a chain and lock in their pocket. But by that line of argument we may as well go back to writing statutes in Latin. (I know several people in the classics department who would welcome the work.)
Obscure legal language can sometimes have a much broader effect than in that slungshot case. Take the word prurient. In a famous 1973 decision, the Supreme Court held that the standard for judging obscenity is “whether, to the average person, applying contemporary community standards, the dominant theme of the material . . . appeals to prurient interest.” With minor variations, that formula has been widely used ever since then. It’s an odd way to put things—asking the average person to judge whether something “appeals to prurient interest” when the average person probably doesn’t know the word prurient in the first place. I have an image of Larry Flynt stopping passersby to ask their opinion of the latest number of Hustler: “What do you think? Not too prurient, is it?”
It’s true that prurient is far from being an obsolete word like slungshot. But even people who know the word often seem to have no clear idea of its meaning. Prurient is originally from the Latin root for “itch,” and modern dictionaries define it in terms of an “unusual” or “unhealthy” interest in sex. So people have prurient minds when they have an unhealthy interest in sex, and things are prurient when they arouse that sort of interest. But you find a lot of people using prurient just as a vague synonym for “lewd” or “erotic.” At the 2 Live Crew obscenity trial in St. Petersburg, Florida, in 1990, the prosecutor charged that the rap group had “incited the crowd to prurient behavior.” And some years ago, Massachusetts Governor Edward King claimed that a new pornography law would protect children from “perverted persons who would coerce them into committing prurient acts.”
Those people are plainly Unclear on the Concept. Acts and behavior can’t be prurient in and of themselves, not if you use the word correctly. But then it’s unlikely that any of these people ever looked the word up—they just guessed at its meaning on the basis of having seen it in one context, that Supreme Court definition of obscenity. In fact, that single clause of the Court’s definition accounts for more than half the occurrences of the word in the press. If not for that decision, prurient would probably be as rare a word as concupiscent or nugatory.
The problem with building law around obscure words like prurient isn’t just that it fails to give fair and reasonable notice. The fact is that it’s hard for anybody to say exactly what a word like prurient means today, not excluding lexicographers. Lawyers tend to think that learned words like prurient are somehow more precise than everyday items like lustful or dirty. Actually it’s the opposite. That fuzziness about the meaning of prurient is typical of words that live in the margins of the language—people don’t encounter them often enough to get a clear idea of what they mean. The more closely an expression is associated with a unique situation, like the Supreme Court’s obscenity definition, the harder it is to pry out its general meaning. Take caisson, madding, and petard. Everybody’s heard them in a single famous setting, but how many people can tell you with confidence what all of them mean?
“I know it when I see it.” That’s how Justice Potter Stewart responded in 1964 when he was asked to define obscenity. And in fact that’s pretty much what the Court wound up saying when it slipped that obscure word prurient into its decision—it left prosecutors free to define obscenity however they liked. In the end, the Court’s definition would have been more precise and consistent with the standards of real communities if it had defined obscenity by saying that it was a question of whether the work in question appealed to people with dirty minds. But then they wouldn’t have sounded like judges.
War Drums
When Words Fail
Los Angeles Times, September 16, 2001
Twice during the days following the terrorist attacks I listened on TV as witnesses to the World Trade Center calamity broke down, unable to continue their accounts. On both occasions the interviewers waited during a moment of awkward silence, then finished the sentences for the witnesses.
We feel conflicting urges at a moment like this. On the one hand, we hold that there are times when words ought to fail us, that there are things so horrible that silence is the only language for them. “Indescribable,” “unutterable,” “unspeakable”—those were the words that kept coming to mind as we struggled to comprehend what had happened.
But we share those reporters’ discomfort with dead air, too. It may be that language can’t do justice to the horror of experience, but it’s the only game in town. So we all sat rapt as the networks kept running the same awful video clips under a babble of wan descriptions. “Shocking,” “horrific,” “terrible,” “like a battlefield”—as if the repetition would eventually render the reality as familiar and banal as the language itself.
Language seemed to fail us, too, as a vehicle for expressing our sense of outrage. The popular press had it relatively easy—the San Francisco Examiner’s front page the day after the attack showed a color picture of the World Trade Center explosion under the one-word screamer “Bastards!,” which was something we all needed to get off our chests. But that approach wasn’t an option for those to whom the public was looking for a more considered judgment. The official condemnations sounded oddly stilted. Both Gray Davis and Charles Schumer called the attacks “dastardly,” a word that tends to bring to mind a mustachioed Gilbert and Sullivan villain, not a crazed zealot. It occurred to me that they might have seized on the word because of its sound associations, but in that case I preferred the Examiner’s version.
But other officials took the same anachronistic tone. President Bush called the attacks “despicable,” which has a primly Victorian sound to it. A TV commentator described the acts as “nefarious,” another Gilbert and Sullivan word. And numerous people used “infamy,” which already sounded old-fashioned when President Franklin D. Roosevelt used it in describing the Pearl Harbor attack back in 1941.
You could hear that Victorian note, as well, in the condemnations of the hijackers as “craven” and as “faceless cowards,” as if the most damning thing you could say about them is that they behaved dishonorably. Surprise attacks on unarmed civilians are repugnant by any moral standard. But “cowardly” doesn’t explain that suicidal fanaticism—indeed you wish that some of the hijackers had chosen to chicken out when the time came to throw their lives away.
It was all strikingly different from the language we use to condemn other sorts of murderous outrages. The Unabomber was demented and the Columbine killings were senseless; nobody would have thought of describing either as infamous or dastardly.
True, everyday words might seem insufficient to describe an experience of this magnitude, at least for people and publications who are speaking for the historical record. Even “tragedy” felt too slight, vitiated by years of tabloid overexposure. But the contemporary language hasn’t wholly lost its moral bearings. We still have resources that are up to rendering the enormity of the attacks, as well as words can ever hope to do: ghastly, monstrous, or enormity itself.
In the wake of the attacks, though, official America needed something else: language that would reassert control of a world that had gotten terrifyingly out of hand. Victorian indignation is ideal for that purpose—it evokes the moral certainties of a simpler age, when the line between civilization and barbarism was clearly drawn, and powerful nations brooked neither insult nor injury from lesser breeds without the law. This may be the first war of the twenty-first century, as President Bush has said. But its rhetoric has its roots in the nineteenth.
A Name Too Far
San Jose Mercury News, September 30, 2001
Recently, the White House was forced to apologize for the President’s description of the campaign against terrorism as a “crusade,” when it was pointed out that the word still evokes unpleasant historical memories among Muslim nations. Then the Administration blundered again when it dubbed the campaign Operation Infinite Justice, a name that seemed to some Muslims to promise what only Allah could deliver. The Pentagon quickly redesignated the buildup Operation Enduring Freedom, a name that manages to be both grandiose and dangerously ambiguous—you can be sure that some parties will see an interest in translating it so that freedom comes out as something that has to be endured.
That wasn’t a problem that anyone had to worry about when the American military first started to give names to operations during World War II. Operations back then bore nondescript names like Avalanche, Market Garden, Mulberry, and of course Overlord, the name personally selected by Winston Churchill for the Normandy invasion. That name may have conveyed “a sense of majesty and patriarchal vengeance,” as the historian David Kahn put it, but it was singularly uninformative about the mission. In fact Churchill himself urged that names be carefully chosen so as not to suggest the character of the operation, particularly after British intelligence intercepted references to a German operation called Sealion and guessed that it was a plan to invade Britain.
The Allied operation names were kept strictly secret, to the point where even an inadvertent mention could trigger an alarm. A few weeks before D-Day, the names Utah, Omaha, and Overlord showed up as answers in the London Daily Telegraph’s crossword puzzle. Officers from MI5 rushed to Surrey to interview the schoolteacher who had composed the puzzle, but the whole business turned out to be a bizarre coincidence.
It wasn’t until after the war that names like Overlord and Avalanche became household words, not to mention the model for the names of hundreds of movies, from Operation Pacific to Operation Petticoat. At that point the War Department realized there could be an advantage in creating a new category of unclassified operation nicknames for public-relations purposes. Even so, most of the postwar names were no more descriptive than the secret code names of World War II. President Eisenhower sent the Marines to Lebanon in 1958 under the name Operation Blue Bat, and the military operations in Vietnam tended to have names like End-Sweep, Pocket Money, and Abilene.
True, generals occasionally picked operation names that had more martial connotations, but that could backfire. When General Ridgway named one Korea operation Killer, the State Department complained that he had soured the ongoing negotiations with the Chinese. Fifteen years later in Vietnam, General Westmoreland was forced to rename Operation Masher when President Johnson objected that the name didn’t reflect the administration’s “pacification emphasis.” And the press came down on the Reagan administration when it dubbed the invasion of Grenada Operation Urgent Fury, which seemed an excessively bellicose title for a mission to rescue some medical students on a Caribbean island with a police force smaller than the San Jose Police Department.
The unhappy experience with the name Urgent Fury brought home just how important an operation name could be in determining the public perception of a military action. By the late 1980s, the administration was choosing its operation names with the media in mind. When the U.S. sent troops to Panama in 1989, the Bush administration named the operation Just Cause. The name irked some critics who had reservations about the legitimacy of the invasion—The New York Times ran an editorial on the name entitled “Operation High Hokum.” But a number of news anchors picked up on the phrase “just cause” to describe the invasion, which encouraged the Bush and Clinton administrations to make a policy of using tendentious names for their military actions.
Operation Just Cause was followed by Operations Desert Shield and Desert Storm, the first time the word “operation” was swollen to apply to a full-blown war. Those were followed in quick succession by Restore Hope in Somalia, Uphold Democracy in Haiti, and operations in the Balkans that went by names like Shining Hope, Determined Force, and Provide Promise. (“Provide” is a favorite element in these names—since 1989, we have had operations called Provide Promise, Provide Refuge, Provide Hope, Provide Transition, Provide Comfort, and Provide Relief.)