Future Babble - Part 5

They are, in this book's terms, hedgehogs. Their kind dominates the op-ed pages of newspapers, pundit panels, lecture circuits, and best-seller lists.

Now, if this is true, and if it's also true that the predictions of hedgehogs are even less accurate than those of the average expert (who does as well as a flipped coin, remember), then a disturbing conclusion should follow. The experts who dominate the media won't be the most accurate. In fact, they will be the least accurate. And that is precisely what Philip Tetlock discovered. Using Google hits as a simple way to measure the fame of each of his 284 experts, Tetlock found that the more famous the expert, the worse he did.

That result may seem more than a little bizarre. Predictions are a big part of what media experts do, after all. Surely experts who consistently deliver lousy results will be weeded out, while those who do better than average will be rewarded with a spot on the talking-head shows and all that goes with it. The cream should rise, and yet, it doesn't. In the world of expert predictions, the cream sinks. What rises is the stuff that should be, but isn't, skimmed off and thrown away.

How is this possible? Very simply, it's what people want. Or to put it in economic terms, it's supply and demand: We demand it; they supply.

INTRODUCING THE RENOWNED PROFESSOR DR. MYRON L. FOX.

For as long as students have evaluated professors, professors have complained about student evaluations. They aren't based on substance, professors say. A charming, funny, and confident teacher will be rated highly whether the students actually learn or not, while a serious and challenging professor who isn't so charming, funny, and confident will suffer. This complaint isn't unique to professors, of course. It can be heard anywhere people are rated by others, but professors have a unique way of fighting back.

In the early 1970s, three psychologists from the University of Southern California devised a simple but brilliant experiment. At the center of it would be a distinguished professor named Dr. Myron L. Fox.

Dr. Fox didn't exist. The psychologists invented him, complete with a suitably impressive resume. To play the role of Dr. Fox, they hired an actor who fit the popular image of a distinguished professor. The researchers then crafted an hour-long lecture on "mathematical game theory as applied to physician education" that was nonsense, and they coached the actor "to present his topic and conduct his question-and-answer period with an excessive use of double talk, neologisms, non sequiturs, and contradictory statements. All this was to be interspersed with parenthetical humor and meaningless references to unrelated topics." If substance were the basis of how people judge experts, Dr. Fox would be humiliated.

But Dr. Fox didn't just look the part of a distinguished professor. He talked it. He spoke with clarity, confidence, and authority. That was all that mattered. When Dr. Fox delivered his lecture at a teacher-training conference-before an audience consisting of psychiatrists, psychologists, and social-worker educators-the evaluations were positive across the board. Dr. Fox even got a perfect 100 percent "yes" on the question "Did he stimulate your thinking?" The researchers then showed a videotape of the lecture to a similar group and got more enthusiastic responses. A third showing of the videotape-this time to a group of educators and administrators enrolled in a graduate-level university class-garnered still more praise. "Considering the educational sophistication of the subjects," the psychologists concluded, "it is striking that none of them detected the lecture for what it was."

Clearly, people were impressed by Dr. Fox. Just as clearly, it wasn't his substance that impressed them. So what was it exactly? Was it the aura of "expertness" emanating from his title and his authoritative speaking style? Or was it simply the enthusiasm and confidence he projected? It's impossible to tell from the experiment itself, but the answer is likely both.

As social animals, we are exquisitely sensitive to status. An expert, in the appropriate circumstances, has considerable status. We respect that, even defer to it, whether consciously or not, sometimes with bizarre results. In a Texas study, researchers had a man at a street corner cross against a traffic light and watched to see how many other people waiting at the corner would follow the man's lead and cross the street. The critical variable? Sometimes the man wore ordinary street clothes. Other times, he wore a sharp business suit and tie. As the saying goes, the clothes make the man: Three and a half times more people followed the man across the street when he wore the business suit. An Australian experiment produced even odder results, thanks to the well-documented connection between status and perceived size. In a series of university classrooms, a man was introduced to the students as a visitor from Cambridge University. In some classes, the man was said to be a student at Cambridge. In others, he was a lecturer, or a senior lecturer, or a professor. Afterward, students were asked to estimate the visitor's height: With each step up the ladder of status, the man grew by half an inch.

The power of authority was most famously demonstrated in Stanley Milgram's classic experiment involving electric shocks administered to a supposed test subject by a volunteer. The shocks weren't real; the "test subject" was actually an actor. The real subject was the volunteer who flipped switches under the supervision of a white-coated scientist. As the shocks got to supposedly dangerous and even deadly levels, volunteers got anxious and upset. They sweated and moaned. They begged the scientist to stop. But very few refused to do as directed when the scientist told them to throw the switch. A less famous experiment that is perhaps even more disturbing started with a phone call to twenty-two nurses' stations in various hospitals and wards. The man on the phone identified himself as a physician and told the nurses to administer a large dose of a certain drug to a patient. The nurses had plenty of reason to refuse. They didn't know this supposed doctor; it was against hospital policy for doctors to direct treatments over the phone; the drug prescribed by the "doctor" hadn't been cleared for use; and the label of the drug clearly stated that the maximum daily dose was half what the "doctor" had ordered them to inject in the patient. Despite all this, 95 percent of the nurses got the drug and were on their way to the patient's room when the researchers put a stop to the experiment. In effect, the nurses stopped thinking independently the moment they heard the title "Dr."

"Con artists," notes psychologist Robert Cialdini, "drape themselves with the titles, clothes, and trappings of authority. They love nothing more than to emerge elegantly dressed from a fine automobile and to introduce themselves to their prospective 'mark' as Doctor or Judge or Professor or Commissioner Someone. They understand that when they are so equipped, their chances for compliance are greatly increased." Experts, TV producers, newspaper editors, and book publishers act in better faith than con men, one hopes. But they, too, intuitively understand that the trappings of authority make people much more likely to find the expert persuasive. It's why pundits on business shows wear expensive business suits. It's why "Ph.D." so often appears alongside the author's name on the covers of books, and why an economist from Harvard or Oxford or some other prestigious university is far more likely to have her university affiliation mentioned in an introduction than an economist from a lesser institution. It's why the CNBC host in the video clip I described at the start of this chapter introduced Arthur Laffer as "chief investment officer of Laffer Investments and former economic adviser to President Reagan." There may be no rational reason to introduce Laffer by citing a job he held decades earlier, but that's not the point. Much like the phrase Harvard economist and a con man's gold watch, it establishes authority at a level deeper than rationality. The audience may not consciously think, "He was an adviser to the president of the United States! He must be right!" But nudging us toward that conclusion is certainly the effect.

THE CONFIDENCE GAME.

In much the same way, a strong, enthusiastic, confident speaking style has a power that transcends mere rationality-a power the psychologist Stephen Ceci demonstrated in an ingenious little experiment. For more than twenty years, Ceci had taught the same class in developmental psychology at Cornell University. Sometimes he taught it twice in one year, once in the fall semester and again in the spring semester. Using the same course structure, the same lecture outlines, the same material, it was all perfectly routine. But one year, in the break between fall and spring semesters, Ceci and some other professors attended a workshop taught by a professional media consultant. Each professor was videotaped giving a lecture. The consultant critiqued the performance and suggested changes that would make the delivery more expressive and enthusiastic. "Underscore points with hand gestures," the consultant might say, or "Vary the pitch of your voice." The substance of the lectures wasn't discussed. This was strictly about style.

Ceci sensed an opportunity. He would follow the media consultant's advice about speaking and gesturing in the next semester but otherwise teach his class exactly as he always had. Then he would compare the student evaluations from the fall and spring semesters. If his ratings were higher in the spring, he would know that the stylistic changes alone had made a difference. And they did. Ceci's ratings improved across the board. Students judged him to be more organized, more accessible, and more tolerant. They even considered him to be more knowledgeable, with his average score on a five-point scale rising from 3.5 to 4.

Ceci's experiment clarified things by taking expert status out of the equation, but it still leaves some ambiguity. What exactly was it in his new speaking style that people responded to? Was he more likable? Or was it simply the greater enthusiasm and confidence he projected? That's not clear. But other research suggests confidence is a critical factor. In one study that examined how one person persuaded another when they disagreed, researchers concluded that "persuasion is a function not of intelligence, prediscussion conviction, position with respect to the issue, manifest ability, or volubility, but of the expression of confidence during the discussion itself." Very simply: Confidence convinces.

Another group of researchers asked people to tackle various problems-math questions, analogy puzzles, forecasts-and state how confident they were that the answer they came up with was correct. Then they were put in groups and asked to decide collectively what the answer was and how confident they were that their answer was correct. The researchers found the group responses tended to match those of the most confident person in the group, whether that person was actually right or not. In a third study, people were asked to watch the videotaped evidence of an eyewitness to a crime. The researchers varied ten different variables, including the circumstances of the crime, how a police lineup was conducted, and the witness's confidence in her own judgment-in one version, the witness says she is 80 percent sure she correctly identified the suspect; in another, she says she is 100 percent sure. The only factor that made a big difference to every measured outcome was confidence. Researchers have also shown that financial advisers who express considerable confidence in their stock forecasts are more trusted than those who are less confident, even when their objective records are the same.

This research, and much more like it, suggests there is a "confidence heuristic": If someone's confidence is high, we believe they are probably right; if they are less certain, we feel they are less reliable. Obviously, this means we deem those who are dead certain the best forecasters, while those who make "probabilistic" calls-"It is probable this will happen but not certain"-must be less accurate, and anyone who dares to say the odds of something are fifty-fifty will be met with scorn. People "took such judgments as indications that the forecasters were either generally incompetent, ignorant of the facts in a given case, or lazy, unwilling to expend the effort required to gather information that would justify greater confidence," one researcher found.

This "confidence heuristic," like the "availability" and other heuristics, isn't necessarily a conscious decision path. We may not actually say to ourselves, "She's so sure of herself she must be right!" It's something that happens at the edge of consciousness, or even without any awareness at all. It is automatic and instantaneous. We simply hear the person speak and suddenly we have the sense that, yes, she is probably right. If someone suggested her confidence played a key role in our conclusion, we may deny it. After all, we didn't think about her confidence; we may not have even noticed her confidence. At least not consciously.

Using confidence as a proxy for accuracy isn't all that unreasonable. In general, people's accuracy really does rise as their confidence increases. The person who mutters, "Gee whiz, I think so, but I'm not sure," probably is less likely to be right than the one who shouts, "I'm right! I'm right! I'd stake my life on it!" Which is why a "confidence heuristic" would work. But as is true with all heuristics, the confidence heuristic is far from perfect.

One problem is overconfidence. As we saw, most people are far too sure of themselves, and this gives us trouble if we use confidence to gauge accuracy. Robert Shiller is an interesting illustration. Aside from his very impressive title-Yale economist-Shiller is the very antithesis of the loud, quick-talking, dead-sure-of-himself pundit. He speaks quietly and is often hesitant, even a little inarticulate. He qualifies his statements and mentions reasons why he might be wrong. He is seldom simple, clear, and certain. But he has a track record that includes correctly calling both the high-tech bubble of the late 1990s and the real estate bubble that followed, and that record got him airtime on business shows normally dominated by cocksure pundits. A July 2009 interview on CNBC was typical. The American real estate market was "still in an abysmal situation," Shiller said, but there was a great deal of diversity within the national market and "people have gotten very speculative in their attitudes toward housing." This made it possible that in certain regional markets "there could be another bubble." But "this is not my more probable scenario," he added. It was more likely that prices "will languish for many years." By the standards of TV punditry, it was a nuanced and thoughtful overview of a complex situation plagued with uncertainties. And people hated it. Posted on a Web site, the interview drew 170 comments. Most ignored Shiller altogether and instead offered dead-certain predictions of the sort that are usually heard on TV. Some were contemptuous. "This guy is really hedging his bet. He doesn't want to be underexposed or overexposed. I sure wouldn't take advice from him." Bring on the blowhards. As British politician Norman Lamont once said, admiringly, of one of his favorite newspaper columnists, "He is often wrong but he's never in doubt."

Another problem with the confidence heuristic is that people may look and sound more confident than they really are. Con men do this deliberately. We all do, to some degree. Of course most of us don't do it as brazenly as con men-one hopes-but we all sense intuitively that confidence is convincing. And so, when we are face-to-face with people we want to convince, we downplay our doubts, or bury them entirely, and put on a brave face. And that's before competition enters the equation. A financial adviser doesn't just want to convince clients that he can forecast the stock market. He wants to convince clients that he can do it better than other financial advisers. So he beats his chest a little louder than the other guys. But the other financial advisers want to land the same clients, so they answer this chest-beating with even more vigorous displays of bravado. Psychologists Joseph R. Radzevick and Don A. Moore tested this dynamic with an experiment in which people were assigned the role of either "guesser" or "adviser." The job of guessers was to estimate the weight of people in photos. The more accurate they were, the more money they made. The advisers were to provide estimates, including indications of confidence in their accuracy. Guessers were free to choose any adviser's estimate, so advisers made money based on the number of guessers who took their advice. Not surprisingly, advisers were overconfident in the first rounds of the experiment. But they weren't punished for being inaccurate. In fact, guessers preferred the more confident advisers, and advisers responded by getting steadily more confident as the experiment progressed-even though their accuracy never improved. Competition "magnifies" overconfidence, the researchers concluded.

Most people are overconfident to begin with. When they try to convince others, they become even more sure of themselves. Reward them for convincing others, and have them compete for those rewards, and it's just a matter of time before they are insisting they are 100 percent certain what the future holds.

TELL ME A STORY.

People love stories, both the listening and the telling. It's a central part of human existence, found in every culture, in every place, in every time. That universality suggests its origins are biological, and therefore evolutionary.

There are many potential advantages storytelling gave our ancestors. It allowed experience to be distilled into knowledge and knowledge to be transmitted. It strengthened social bonds. It provided an opportunity to rehearse possible outcomes. But at an even more fundamental level, storytelling is a work-sharing agreement: If I use my brain's "Interpreter" neural network to produce an explanation for one set of facts and you use yours to explain another, we can share our explanations by swapping stories. If all fifty members of the tribe do the same, we'll all get fifty explanations in exchange for doing the heavy lifting on one. That's an efficient way to make sense of the world.

For explanation-sharing to work, however, a story cannot conclude with "I don't know" or "The answer isn't clear." The Interpreter insists on knowing. An explanatory story must deliver. When the movie No Country for Old Men ends as the killer walks off with nothing resolved, it disturbs us because the narrative isn't complete. What happens next? How does it end? This nonending worked for a movie (and novel) that was intended to be unsettling, but a story normally has to wrap everything up and come to a clear conclusion. Only that will satisfy the Interpreter's hunger for order and reason.

Other elements of a good story are as universal as storytelling itself. It has to be about people, not statistics or other abstractions. It should elicit emotion. Surprise is valuable, thanks to our evolved tendency to zero in on novelty. It also helps if the story involves a threat of some kind, thanks to the "negativity bias" mentioned in the last chapter.

"Confirmation bias" also plays a critical role for the very simple reason that none of us is a blank slate. Every human brain is a vast warehouse of beliefs and assumptions about the world and how it works. Psychologists call these "schemas." We love stories that fit our schemas; they're the cognitive equivalent of beautiful music. But a story that doesn't fit-a story that contradicts basic beliefs-is dissonant. (And nobody but a few oddballs enjoys dissonant music.) This is why there is no such creature as a universally acclaimed pundit. The expert who makes a prediction based on an explanatory story that fits neatly with the basic beliefs of an American free-market enthusiast, for example, is likely to get the attention and applause of American free-market enthusiasts. But that expert is just as likely to get a cold shoulder from European social democrats. Same story, same evidence, same logic, but completely different reactions. This sort of disparity appears routinely. Will man-made climate change savage civilization if we don't act now? Many scientists, activists, and politicians make that case. Some people find their evidence and arguments very compelling. Others snort. Whether a person falls in one camp or the other isn't up to a coin toss. Their prior beliefs-their schemas-make all the difference. If I were to describe an American who thinks gun control doesn't work, Ronald Reagan was a great leader, and international terrorism is a major threat, which side of the climate change debate is he likely to come down on? What about an American who supports strict gun control, thinks Reagan was dishonest and dangerous, and believes the threat of terrorism is overblown? We all know the answer-the first person is much more likely to snort-even though with regard to evidence and logic, gun control, Ronald Reagan, and terrorism have absolutely nothing to do with climate change. But they do reveal a person's schemas.

The importance of explanatory stories in convincing others cannot be exaggerated. "Imagine that you are the vice-president of a fairly large corporation," researchers asked a group of forty-four advanced university students with some training in basic decision making. As vice-president, you have been given the job of selecting a law firm to put on retainer. Among other things, the firm must be good at predicting the outcome of litigation. So you ask them to review a hundred pending cases in detail, predict whether the plaintiff or the defendant will win, and say how confident they are on a scale from 50 percent to 100 percent. Now, the researchers directed, "outline the strategy you would use in evaluating the accuracy of each firm that took part in the exercise." The big winner? One might think it would be a statistical analysis of the firm's accuracy, but people weren't so interested in that. What they wanted to hear was a good explanation: How did firms make predictions? If the method sounds good, the predictions must be as well.

Statistics be damned. Tell me a story.

As it happens, I've been told that many times, almost word for word, by people who should know. As a money manager based in Houston, Texas, Mike Robertson is one. Handling two billion dollars of other people's cash, Robertson is the fifth-largest independent wealth adviser in the United States. His clients are all multimillionaires; two are billionaires. He knows what it takes to convince very rich people to take his advice. And what it takes is not statistical evidence of sound judgment.

"Do I trust you? Do I think you care about me? Do I think you know what the hell you're doing?" That's what matters, Robertson told me. Getting people to say yes requires the ability to connect on a personal level, to make the potential client feel you're caring and trustworthy. Robertson is a big, friendly, likable guy. He's got the human stuff covered. As for competence, he establishes that with a good story-a confident, clear, concise explanation of how he makes decisions that leaves the potential client nodding his head and saying, "Yes, that makes sense." Robertson's story is drawn from the demographic analysis of business guru Harry Dent and it's not complicated: "Instead of coming in with reams of research papers and all this kind of stuff, you sit down and talk about potato chips and motorcycles." Who eats potato chips like crazy? Fourteen-year-olds. "I've had two fourteen-year-olds. I can attest to that." So if demographic projections show growing numbers of fourteen-year-olds in the coming years, you buy potato chip stocks. Same with motorcycles. They may be a young man's dream but Harley-Davidsons are expensive, so Harleys are mainly bought by people in their late forties. "Everybody says, 'Yeah, absolutely. The last guy I saw riding that thing had gray hair.'" So the future number of people in their late forties tells you whether you should get into Harley-Davidson.

Robertson doesn't deny that his story is very simplistic, or that there's far more to his thinking. But he doesn't go further because this story is what people want. Anything more is needless complication. "You don't have to sit there and bring in all these charts and stuff," he says. "If you don't trust me, it won't make a difference."

Fundamentally, says Robertson, convincing others that he has a handle on the future is not a rational process. "Statistics are rational. People are not."

NOW PUT IT ALL TOGETHER AND GO ON THE TONIGHT SHOW.

"This is a little different for us," says Johnny Carson, the legendary host of The Tonight Show. After an hour kibitzing with the comedian Buddy Hackett about such weighty matters as Hackett's hair-"You can comb it till your nose drops off and it stays like that," Hackett observes-Carson will now moderate a debate about the fate of humanity. On one side is the journalist Ben Wattenberg. On the other is a familiar face. "Dr. Paul Ehrlich has been with us a few times before," Carson says. "He's a population biologist at Stanford University and his book The Population Bomb has sold nearly a million copies."

As one might guess from the brown suits, sideburns, and ads for cigarettes-"New Kent Menthol has got it all together!"-this unusual moment in television history took place in August 1970. As always, Carson sat at his desk. Ehrlich took the chair normally occupied by grinning movie stars, while Wattenberg sat on the couch reserved for sidekicks and second-tier guests.

Carson asked Ehrlich to get things started by summarizing the basic argument of his book. "The main premise is there are 3.6 billion people in the world today. We're adding about seventy million a year and that's too many," Ehrlich said. "It's too many because we are getting desperately short of food. Matter of fact, recent indications are that the so-called Green Revolution is going to be less of a success than we thought it was going to be." Ehrlich's deep voice is calm and steady and his words flow smoothly. His left elbow is propped casually on the edge of the chair. He's young, but with his suit and tie, his relaxed confidence, and "DR. PAUL EHRLICH" flashed on-screen, he is every inch an authority. He knows what he's talking about. And he knows what's coming. "The very delicate life-support systems of the planet, the things that supply us with all our food, ultimately with all our oxygen, with all our waste disposal, are now severely threatened. I would say that trained ecologists are divided into two schools. There's the optimistic school, of which I'm a member, that thinks that if we should stop what we're doing now very rapidly, that there's some chance that we'll prevent a breakdown of these systems. There are others who feel that the changes in the weather, that the permanent poisons that we've already added to the planet, have already set in train the sequence of events that will lead to disaster. They feel it's already too late. I think the only practical thing to do is pretend that it's not too late. So we're in deep trouble and I'm worried about it."

Wattenberg gives a decent reply, but it's obvious from the beginning he's no match. His delivery is hesitant, and his message is diffuse, unfocused. He accepts some of what Ehrlich is saying but suggests it's "overstated" and should be more balanced; his alternative vision is as fuzzy as Ehrlich's is sharp and vivid.

Wattenberg tries gamely to parry Ehrlich's attacks but Ehrlich is far too quick-witted. When Wattenberg claims, "The new cars have sixty percent less pollution," Ehrlich shoots back, "That's nonsense."

Wattenberg looks a little stunned. "Well, that's my data," he says.

"That's not your data. That's the automobile manufacturers' data for cars that have never run anywhere," Ehrlich responds.

Wattenberg leans back in his chair. "I can't debate the scientific data with you."

Ehrlich smiles gently. "True," he says. The audience laughs. It's like watching Muhammad Ali float like a butterfly and sting like a bee. You can't help but feel sorry for the poor chump in the ring.

Ehrlich delivers all the elements that make a powerful presentation-evident expertise, confidence, clarity, enthusiasm, and charm. He tells a story that is simple and clear and fits the audience's beliefs and current concerns. ("I consider the Vietnam War and racism to be part of the same mess," Ehrlich says. "So it's really one big crisis.") And Ehrlich is able to move from one element to the other as smoothly as Johnny Carson working his way through the opening monologue.

He is also unflappable. When Wattenberg delivers his best line-"I didn't believe it when Chicken Little said the sky was falling and I don't believe it when Dr. Ehrlich says it"-Ehrlich merely smiles. "I am a doomsayer because I do believe doom is coming," he responds. "And I would say Mr. Wattenberg is essentially in the position of the person saying, 'Well, you're trying to sell me life insurance but I've never needed any before.'" The audience roars. Johnny Carson laughs harder than he did at anything Buddy Hackett said. Even Wattenberg chuckles wryly.

Before the laughter fades, Ehrlich grins and says to the audience, "Can I turn professorial for a moment?" He holds up a blank notepad. "I want to produce a very simple equation." With a marker, he writes "D = N × I." The D stands for damage to the earth's life-support systems, he says. The N is the number of people. The I is the negative impact of each person on the environment. "In other words, you've got two factors: How many people you've got and what the people do. Now, it's quite true that if we stop population growth now and leave our environmental impact high, you go down the tubes. If you cut down the environmental impact of each person, do things differently, and let the population continue to grow, the product remains the same and you go down the tubes. What you've got to do is operate on both of these things at the same time. You've got to both reduce the size of the population and you've got to reduce the impact of the individual."
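The arithmetic behind Ehrlich's blackboard equation is nothing more than a product of two factors, which is exactly why his point lands: cutting one factor while the other grows can leave the total untouched. A minimal sketch (the numbers are invented purely for illustration, not Ehrlich's):

```python
def damage(population, impact_per_person):
    """Ehrlich's back-of-the-envelope equation: D = N * I."""
    return population * impact_per_person

# Baseline: a hypothetical population of 100 with per-person impact 2.0.
baseline = damage(100, 2.0)

# Halve per-person impact while the population doubles:
# the product, and so the damage, is unchanged.
offset = damage(200, 1.0)
assert offset == baseline

# Only reducing both factors at once actually lowers D,
# which is Ehrlich's "operate on both of these things" point.
both_reduced = damage(80, 1.5)
assert both_reduced < baseline

print(baseline, offset, both_reduced)  # 200.0 200.0 120.0
```

The sketch simply restates his two-factor argument in code; it makes no claim about what realistic values of N or I would be.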

It's a tour de force. Ehrlich has swatted away his opponent's best line, cracked up the audience, and then, after pivoting smoothly to the role of esteemed academic expert, delivered a lecture that is simple, clear, and compelling. Wattenberg lights a cigar and smokes in silence. He knows it's over. Ehrlich won.

When Buddy Hackett surprises everyone by wandering onto the set as the debate is wrapping up, Ehrlich finishes with an encore: Jumping up, he asks, "Can I have a kiss, Buddy? My mother wouldn't believe it." Buddy obliges. The audience applauds madly.

As much as the events and culture of the era, Paul Ehrlich's style explains the enormous audience he attracted. Today, The Population Bomb is thought of as a pioneering work that ushered in fears of overpopulation and famine. It was not. As a young undergraduate, Ehrlich was himself inspired by two books-Our Plundered Planet and Road to Survival-both published in 1948. By the time Ehrlich wrote his blockbuster in 1968, the "population explosion" had long been a routine topic of discussion in governments, think tanks, and the mass media-it made the cover of Time on January 11, 1960-and Ehrlich's basic arguments had been made in a very long list of books. What was different about The Population Bomb was its author.

Paul Ehrlich is a gregarious and delightful man, a natural performer. "Ask me questions and I'll try to give you honest answers or clever lies," he kids when I call for an interview. He may be seventy-seven years old, but he's as sharp as ever. "I was teaching a course in evolution at Stanford," he recalls when I ask how he became one of the most famous public intellectuals of his time. "I spent the first nine weeks of the ten-week course on where we came from and the last week on where we were going. The last week got to be popular on campus and people told their parents and I started to be asked to speak to alumni groups and I was finally asked to speak to the Commonwealth Club." That lecture was broadcast on radio, and the executive director of the Sierra Club heard it. He contacted a publisher he knew, and Ehrlich agreed to write a popular book. Ehrlich wanted to call it Population, Resources, and Environment-a rare case in which his sense of how to appeal to a mass audience failed him. But the publisher caught the mistake. Let's call it The Population Bomb, he said.

At the same time, Ehrlich recalls, "I got involved with some colleagues at Yale and started Zero Population Growth [ZPG] as an NGO. It didn't go anywhere until Arthur Godfrey sent a copy of The Population Bomb to Johnny Carson." A coveted invitation to The Tonight Show arrived and under the bright studio lights Ehrlich and Carson hit it off. "We had a wonderful time. We went for a long time." Carson let Ehrlich plug ZPG, to great effect. "When I went on The Tonight Show, ZPG had six chapters and six hundred members. Johnny had me back about three times in a few months and let me give the address each time. And it went to six hundred chapters and something like sixty thousand members." Ehrlich was suddenly a rock star. "Articulate, witty, and with a flair for the dramatic, Ehrlich has scored well with personal appearances," noted a reporter in 1970. "Campus appearances invariably draw mobs. Recently, more than 2,000 were turned away from a lecture at Berkeley after the university administration had first assigned him to a seminar room for 30, then moved his talk to an auditorium that seated 500." In total, the confident, chatty, charming professor with the dire predictions was invited to talk with Johnny Carson and his millions of viewers more than twenty times.

For experts who want the public's attention, Paul Ehrlich is the gold standard. Be articulate, enthusiastic, and authoritative. Be likable. See things through a single analytical lens and craft an explanatory story that is simple, clear, conclusive, and compelling. Do not doubt yourself. Do not acknowledge mistakes. And never, ever say, "I don't know."

People unsure of the future want to hear from confident experts who tell a good story, and Paul Ehrlich was among the very best. The fact that his predictions were mostly wrong didn't change that in the slightest.

THE EHRLICH LESSON.

Whether they know about Paul Ehrlich or not, every successful communicator knows the lesson of his example.

Politicians know the Ehrlich lesson: "Confident, clear, and simple" is standard operating procedure in political communications. When the Obama administration wanted a stimulus package approved, it issued a chart with two lines-the first was the American unemployment rate over the coming months and years with the stimulus package, while the second line, much higher than the first, showed unemployment without the stimulus package. There was no ambiguity, no uncertainty. Just two lines stretching into the future. Which line would you like? Time passed. And reality scoffed: The unemployment rate shot past both lines. But politicians know predictions are about convincing people today, not being right tomorrow. When Danish prime minister Anders Fogh Rasmussen addressed scientists in the lead-up to the Copenhagen climate conference of 2009, he implored them to give politicians the sort of conclusions they could use to sell the public on a course of action. "I need fixed targets and certain figures," he said, "and not too many considerations on uncertainty and risk and things like that." Accuracy isn't imperative in politics; certainty is.

Other policy makers know this lesson too. That's why they engage in "overarguing," as Greg Treverton, a former vice chair of the U.S. National Intelligence Council, calls it. "Senior decision makers communicate with one another, their organization, and the public by using narratives-stories that combine statements of goals, assumptions about the world, and plans of action. When they craft these narratives, policy-makers strive to appear more certain than they actually are, knowing full well that a storyline acknowledging their underlying uncertainty would undermine their authority in policy debates."

Many scientists have also realized that speaking as science itself speaks-complex, ambiguous, uncertain-is a sure way to be ignored. If they want funding and public support, they have to follow the lead of their colleague Dr. Ehrlich and deliver bold, confident predictions. The same is true of activists and nongovernmental organizations. And "futurists," forecasters, and other gurus who work the corporate lecture circuit. It's especially true among those who put together business forecasts; managers and executives know that someone who wishes to keep her job will not announce to a roomful of superiors that the future is complex, unpredictable, and out of her control-no matter how true that may be.

But more than all the rest, the news media know the lesson of Paul Ehrlich.

The worst offenders are the opinion columnists, talk-show hosts, and bloggers who issue predictions about as often as most people breathe. "It's the least intellectually taxing question that somebody can ask or the least intellectually taxing answer somebody can give, and one of the least challenging discussions for an audience," noted veteran journalist Jeff Greenfield. "You're not actually talking about history, culture, facts. You're talking about what one of my law school professors used to call breezy speculation, or the initials thereof. And in an era when there's more and more talk on cable, the cheapest thing you can do on television is to bring people into the studio and have them talk, and the easiest kind of talk is to say what's going to happen." Imagine something that would be interesting if it happened; let your Interpreter cobble together an explanation for why it could happen; spin it into an entertaining story. In far too many cases, that's all there is to pundits' predictions. "We are very good at making up these grand theories about what's about to happen," writes Paul Wells, a Canadian political journalist. "After they don't happen, we are at least as good at forgetting our earlier mistaken certitude."

Pundits aside, the news is filled with simple, clear, and confident predictions. It's the entrenched way of doing things-so entrenched, in fact, that even a disastrous failure of forecasting like the crash of 2008 cannot convince the media to do things differently. The cover of the July 2009 edition of Money magazine-to take one of literally hundreds of examples-promised to reveal "How to Profit in the New Economy: The Five Big Changes Ahead, and What They Mean for Your Investing, Spending, and Career." Alongside it on the newsstand, BusinessWeek offered a look at "Housing Market 2012." Neither article mentioned the catastrophic failure of articles like these to see 2008 coming. Instead, they told readers that the American economy would grow almost 6 percent in 2012 and median home values would rise 7 percent. There would be strong job growth in Colorado. Housing supply would be tight in Oregon. And in 2019 the percentage of the American workforce that is freelance, temporary, or self-employed would be precisely 40 percent.

This sort of certainty is so routine we seldom stop to marvel at how ridiculous it is. And it's not just the business press-it's even found in science news. The May 2, 2009, cover of the popular British science magazine New Scientist blared, "Swine Flu: Where It Came from, Where It Will End, How to Protect Yourself." At the time, the first wave of the flu had just started spreading outward from Mexico, and no scientist would have dared suggest the future course of the virus was so neatly predictable that they could say "where it would end." This critical fact can be found deep in the text of the article-"All this means the virus could go pandemic. Or it might not"-but the cover is not marred by a trace of uncertainty. Neither is the headline: "The Predictable Pandemic."

"Journalists, whose base currency is 'fact,' have little patience with contingent possibilities. Most are disposed to seeing their job as turning the unknown into the known," observed John Huxford, director of journalism studies at Villanova University. In 2008, when the National Academy of Sciences (NAS) brought together scientists in Washington, DC, to discuss the risks posed by an electromagnetic "solar storm," the group agreed the threat was real. A particularly severe storm could damage electrical grids, the report noted. However, "it is difficult to understand, much less predict, the consequences." But when the New Scientist reported on the group's conclusions, that uncertainty was swept aside. Instead, an extreme scenario prepared by private consultants for the NAS committee became the dramatic foundation of the entire article. "It is midnight on 22 September 2012 and the skies above Manhattan are filled with a flickering curtain of colorful light," it begins. "All the lights in the state go out. Within 90 seconds, the entire eastern half of the U.S. is without power. A year later and millions of Americans are dead and the nation's infrastructure lies in tatters. The World Bank declares America a developing nation. . . . It sounds ridiculous. Surely the sun couldn't create so profound a disaster on Earth. Yet an extraordinary report funded by NASA and issued by the U.S. National Academy of Sciences in January this year claims it could do just that." Well, it could.

Far too often, the track record of the experts offering predictions simply isn't a concern for the producers and editors who shape the news. In the dismal winter of 2009, like a ghost from the past, Howard Ruff appeared on CNBC and other business shows. The author of the 1978 best seller How to Prosper During the Coming Bad Years had a huge following during the inflation-racked years of the late 1970s and early 1980s, but when inflation collapsed, the economy turned around, stock markets got bullish, and Ruff looked out of touch. When he kept calling for inflation, recession, and misery all through the 1980s and 1990s, he faded from public view. But in 2009, after more than three decades sounding the alarm about hard times ahead, Ruff was back on TV at the age of seventy-eight. In an interview later that year, he told me it wasn't because TV producers were taking his ideas seriously again now that hard times had actually come. "I spent some money on a PR firm," he said. Ruff had a good story to tell, and he was available. That was enough.

Experts who have neither a thrilling story nor money for a PR firm find the media simply isn't interested in hearing from them. In 1999, Ross Anderson, a professor of computer science at Cambridge University, was that sort of expert. His not-so-thrilling story involved Y2K, the programming glitch-years had been recorded with two digits, not four-which could cause computers to mistake the year 2000 for 1900 and do all sorts of odd things as a result. Systems would fail and the failures would cascade, putting corporations and governments in danger, according to a large and growing number of consultants who offered to fix the problem for a fee as big as the threat. Many insisted it was even worse than that. The economy would suffer, vital services would break down, and there could be social unrest. It could even cause the end of the world as we know it-or "TEOTWAWKI," as believers liked to call it. Political leaders didn't go that far, but most Western governments took the threat very seriously. "I do not want to seem irrational or a prophet of doom, but there is a possibility of riots," the head of the UK's task force on Y2K said in 1997. No one really knows how much was spent on the problem, but most estimates put it in the hundreds of billions of dollars.
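The glitch at the heart of Y2K fits in a few lines of code. A minimal sketch (the function names are mine, and the real affected systems were mostly written in languages like COBOL, but the logic is the same):

```python
# The Y2K bug in miniature: legacy records stored years as two digits,
# and software prefixed "19" unconditionally. Illustrative code only.

def parse_year_naive(two_digit_year):
    return 1900 + two_digit_year

issued = parse_year_naive(98)  # 1998: fine
due = parse_year_naive(0)      # "00" meant 2000 but parses as 1900
assert due - issued == -98     # the due date now precedes the issue date

# One common fix, "windowing," interprets low two-digit years as 20xx
# instead of rewriting every stored record.
def parse_year_windowed(two_digit_year, pivot=50):
    if two_digit_year < pivot:
        return 2000 + two_digit_year
    return 1900 + two_digit_year

assert parse_year_windowed(0) == 2000
assert parse_year_windowed(98) == 1998
```

Audited system by system, patches of this sort often proved cheap to apply, which is consistent with what Anderson went on to find at Cambridge.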

All big British institutions were concerned about Y2K, but Cambridge had more reason to be worried than most. With seven thousand employees and a vast array of computer systems-operating in everything from state-of-the-art scientific labs to advanced medical treatments to an aerial photography unit-the opportunities for failure were legion. A readiness committee was struck, with Ross Anderson as the technical adviser, and Anderson got an idea. Given the vast diversity of Cambridge's computer systems, a thorough analysis of each system's vulnerability to the Millennium Bug, and the cost to fix them, would provide a reasonably good sense of the level of vulnerability in a great many other settings. That may seem elementary, but in fact, little of this sort of hard analysis had been done. Almost all the noise around Y2K was based on little more than speculation, some of it informed, much of it not. So Anderson got to work, and he soon had good news. The threat wasn't nearly as bad as it was portrayed in the media and fixing it had cost Cambridge less than a hundred thousand pounds.

This was important information, Anderson thought. He brought it to the attention of the British civil service, but no one wanted to touch it. "Officials seemed worried in case the precautions they'd taken were criticized as unnecessary," Anderson later wrote. So he contacted "a number of prominent journalists." Still, no one was interested. Why would they be? Anderson wasn't saying Y2K was a hurricane about to make landfall, or that the whole thing was a fraud. He was saying there was a real problem but it probably wasn't severe and could be fixed at reasonable cost. That's a lousy story. It's a bit complicated, and it's boring. But Anderson didn't give up. Two weeks before the moment of truth, the university's press office sent a media release to hundreds of news organizations. "I can't predict the future but now we've gone out and looked at a lot of systems, I'm much less worried than I used to be," Anderson said in the release. "Lots of things may break, but few of them will actually matter." That release generated a grand total of four radio interviews, three of them brief-no more than a few raindrops in the raging monsoon of Y2K coverage.

Ross Anderson was too modest. He did predict the future: His study was a remarkably accurate forecast of what actually happened when 1999 became 2000. But that didn't matter, any more than it mattered that Paul Ehrlich's predictions failed. It's the Paul Ehrlichs the media and the public want to hear from, not the Ross Andersons.

The incentives are as obvious as they are huge-TV spots, newspaper interviews, best-selling books, lucrative lectures, corporate consulting contracts, the attention of the public and people who matter. Anyone who wants these things, whether for selfish or selfless reasons, must follow the lead of Paul Ehrlich. "Breathless hype," observed journalist Michael Lind, is what "wins readers for journals and newspapers and makes the careers of pundits who aspire to bloviate at Davos before an audience of the trendy rich."

Be simple, clear, and confident. Be extreme. Be a good storyteller.

Think and talk like a hedgehog.

A SWING AND A MISS! AND NOBODY CARES. . . .

Still, there's a problem. People may want to hear from absurdly confident experts, and the media may make such experts stars. But stardom means lots of people are listening. If the expert is lousy at forecasting the future, lots of people will see his predictions crash and burn. He will be humiliated and they will stop listening. So for all the incentives pushing experts to pump up their predictions, there must be a countervailing incentive to tone it down. Or so we might think. In reality, there is little accountability for predictions, and while big calls that go bad should damage the reputations of those who make them, they seldom do.

One of the most extreme voices in the Y2K fiasco belonged to social critic James Howard Kunstler. "If nothing else, I expect Y2K to destabilize world petroleum markets," Kunstler wrote, and the effects of that will be as bad as, or worse than, those of the 1973 oil embargo. Industrial agriculture will collapse. "Spectacular dysfunction" will plague car-dependent cities. Supply chains will crumble. "I doubt that the Wal-Marts and Kmarts of the land will survive Y2K." That was the minimum-damage outcome. He actually expected things to get much worse. "The aggregate economic effect of these [computer system] failures will be a worldwide deflationary depression. I will not be surprised if it is as bad in terms of unemployment and hardship as the 1930s." Expect "international political and military mischief." And look out for the United States to seek salvation in some "charismatic political maniac."

In the long and rich history of failed predictions, few forecasts have been more wrong. But Kunstler shrugged it off. So what if Y2K didn't destroy civilization? Something would. And soon. In 2005, Kunstler published The Long Emergency, a frantic, sweaty tour of all the things that could go horribly awry, and surely will, plunging us all into "an abyss of economic and political disorder on a scale that no one has ever seen before." One might think that after Kunstler's Y2K pratfall, people wouldn't pay for him to be their tour guide to the future, but The Long Emergency was a best seller and Kunstler-a wildly entertaining speaker-became a fixture on the lecture circuit, where he is paid significant amounts of money to tell audiences they are doomed.

The 2003 invasion of Iraq left failed predictions lying about the landscape like burnt-out tanks. The first army of pundits defeated by reality were the pessimists who thought the invasion itself would turn into a long, massive battle between two huge armies, or that the battle for Baghdad would become "another Stalingrad." Then it was the optimists who predicted that Saddam Hussein's vast arsenal of weapons of mass destruction would be unearthed, that the American soldiers would be embraced as liberators, that a flourishing democracy would get up and running quickly, that a reborn Iraq would cause the spirit of reform to sweep across the Middle East, that the terrorist "swamp" would be drained. And a square in Baghdad would be named after George W. Bush by the grateful Iraqi people. They were all wrong, but the careers of the pundits who made these predictions were not among the war's casualties. Consider David Brooks and William Kristol. Writing in The Weekly Standard prior to the invasion, Brooks and Kristol relentlessly urged the United States to war. A failure of nerve would lead to disaster, they insisted, while an invasion would work wonders in the Middle East and at home. "If the effort to oust Saddam fails, we will be back in the 1970s," wrote Brooks. "We will live in a nation crippled by self-doubt. If we succeed, we will be a nation infused with confidence. We will have done a great thing for the world, and other great things will await." In March 2003, on the eve of the invasion, both men put it all on the line. "Events will soon reveal who was right, Bush or Chirac," Brooks wrote. "History and reality are about to weigh in," Kristol added, "and we are inclined to simply let them render their verdicts."

History and reality did weigh in: Far from the quick and glorious adventure Brooks and Kristol expected, the invasion of Iraq became a nightmarish occupation that drained American blood, treasure, and confidence. And yet, this had no apparent effect on the career of either man. In September 2003, David Brooks was hired to write a column in The New York Times, the world's leading newspaper, instantly becoming one of the most influential voices in journalism. In December 2007, Kristol received the same punishment.

The case of Brooks and Kristol is extreme but far from unique. Dick Morris, the former Bill Clinton pollster who is now a conservative commentator, routinely makes predictions that are as confidently expressed as they are wrong-a classic being his 2006 book Condi vs. Hillary, which foresaw a 2008 presidential election contest between Democratic nominee Hillary Clinton and Republican candidate Condoleezza Rice. Morris's mistakes apparently make no difference to his demand as a televised talking head. The same is true of the Anglo-Canadian-American pundit Mark Steyn. As the British writer Geoffrey Wheatcroft helpfully summarized in 2006: "Apart from predicting that George Bush would win the 2000 presidential election in a landslide, Steyn said at regular intervals that Osama bin Laden 'will remain dead.' Weeks after the invasion of Iraq, he assured his readers that there would be 'no widespread resentment or resistance of the Western military presence'; in December 2003, he wrote that 'another six weeks of insurgency sounds about right, after which it will peter out'; and the following March he insisted that 'I don't think it's possible for anyone who looks at Iraq honestly to see it as anything other than a success story.'" Steyn's most endearing quality, Wheatcroft noted with dry British wit, is his "enviable self-confidence." And apparently, that is enough. In 2006, Steyn published America Alone: The End of the World as We Know It, which predicted that Europe would soon be swamped by fecund Muslims, leaving the United States alone in the struggle to save Western civilization from the Islamic hordes. Despite its author's demonstrated inability to predict matters somewhat less complicated than the fate of continents and civilizations, America Alone became a New York Times best seller and a hugely influential tract among American conservatives.

And finally, there's Paul Ehrlich himself. It was clear by the 1990s that the dire forecasts Ehrlich had made in the 1970s had come to nothing, but that didn't slow the shower of awards Ehrlich enjoyed that decade. There was the Gold Medal Award of the World Wildlife Fund International; the John Muir Award of the Sierra Club; the Volvo Environmental Prize; the Blue Planet Prize of the Asahi Glass Foundation; the Tyler Prize from the University of Southern California; the Heinz Award, created by Teresa Heinz, billionaire wife of U.S. senator John Kerry; the Sasakawa Prize from the United Nations; and the MacArthur Fellowship, nicknamed the "Genius Award." Ehrlich also won the Crafoord Prize of the Royal Swedish Academy of Sciences, which is widely considered the Nobel of environmentalism. Many of these honors stemmed, at least in part, from Ehrlich's research as a biologist, which is respected in his field, but they often were given for popular work like The Population Bomb and The End of Affluence. The Crafoord Prize citation specifically noted Ehrlich's "numerous books on global environmental problems, such as overpopulation, resource depletion, nuclear winter, and greenhouse effects. It has been said that with Rachel Carson he is the one person with the greatest importance for present-day awareness of the imminent global catastrophe." The citation does not say precisely what that imminent global catastrophe may be; two decades later it's still not clear. No matter. In 2009, the Programme for Sustainability Leadership at the University of Cambridge surveyed its "alumni network of over 2,000 senior leaders from around the world" and asked them to choose the best books ever written about sustainability. The result was a list of fifty books the university billed as "the wisdom of our age." The Population Bomb was number four.

Ehrlich's colleague John Holdren-who cowrote many pessimistic and prediction-filled articles with Ehrlich in the 1970s and 1980s-got a slightly tougher time of it when he faced a Senate confirmation hearing in 2009. In a 1971 article cowritten with Paul Ehrlich, Holdren had declared that "some form of ecocatastrophe, if not thermonuclear war, seems almost certain to overtake us before the end of the century." Senator David Vitter asked Holdren if he thought that "was a responsible" thing to write. "First of all, I guess I would say that one of the things I've learned in the intervening nearly four decades is that predictions about the future are difficult," Holdren joked. "That was a statement which, at least, at the age of twenty-six, I had the good sense to hedge by saying 'almost certain.'" And that was that. The Senate approved Holdren's appointment, making him science adviser to the president of the United States.

It seems we take predictions very seriously, until they don't pan out. Then they're trivia.

IT'S A HIT! AND THE CROWD GOES WILD!

People may ignore misses, but a prediction that succeeds is another matter entirely. We not only pay attention to hits when they happen in front of our eyes, we go looking for them. We even fabricate them.

Whenever a major event happens, a hunt begins. Who called it? Someone must have-because a major event that was not predicted would lead to the psychologically disturbing conclusion that the world is in some degree unpredictable, unknowable, and uncontrollable. After the terrorist attacks of September 11, 2001, the need to know someone had predicted the awful event was palpable, and within days, an e-mail flashed from person to person, around the world, with an answer: Nostradamus had called it. As usual with chain e-mails, it mutated as it was passed along, but the core of the e-mail was this statement, in typically cryptic language: "In the City of God, there will be great thunder. Two Brothers torn apart by Chaos. While the fortress endures, the great leader will succumb." People were stunned. "Two brothers torn apart by Chaos"? That has to be the Twin Towers! The news spread almost as fast as news of the event itself. But if people had been even slightly skeptical and made a quick Google search, they would have discovered that Nostradamus didn't write those words. A high school student in Canada did. His name is Neil Marshall. In 1997, as an assignment for a co-op work placement, he created a Web site. At the time, he was annoyed that so many people believe that the notoriously vague pronouncements of Nostradamus predict anything, so he wrote "a little piece on why I think Nostradamus is a crock," including some ominous pronouncements he concocted in the style of the sixteenth-century Frenchman. Almost immediately after the 9/11 attack, someone either deliberately misrepresented Marshall's mockery, or they misunderstood what he had written. Either way, people got what they wanted-an accurate prediction of the disaster-and they passed it along to millions of others hungry to believe, thus making Marshall's point in spectacular fashion.

The hunt for the successful prediction seldom comes up with such a patently absurd result, but it always comes up with something. After the crash of 2008, the lucky Nostradamus was Peter Schiff, who went from being just another talking head on business shows to a guru with a best-selling book and the sort of fame that makes it impossible to take a stroll in Manhattan without someone wanting to shake your hand. What's intriguing about these successes is that they are universal. No matter what happens, someone turns up to claim the prize. What this phenomenon demonstrates is not that the universe is predictable but that vast numbers of people are making predictions and, like lotteries, when such large numbers are involved, the chances of someone winning the grand prize are high even though the chances of any particular person winning are tiny. Imagine, for example, a parallel universe in which 2008 was essentially the opposite of the year we experienced. Economies roared, real estate values took off, and stock markets went through the roof. In that universe Peter Schiff would have continued to enjoy undisturbed strolls in Manhattan. But there would still be a man anointed as the guru who called it: His name is Robert Zuccaro, who had taken the daring gamble some years before of publishing a book called Dow 30,000 by 2008. Of course, it's always possible that someone hailed for predicting a major change really did call the change in a meaningful sense, but the more likely explanation is that he, like a lottery winner, got lucky. And he's likely to get lucky a second time because, thanks to the "illusion of prediction" and our poor intuitive sense of randomness, people seldom attribute a predictive hit to anything but skill.
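The lottery logic here is ordinary probability. A minimal sketch with made-up numbers (nothing below is from the book):

```python
# With many forecasters making independent calls, the chance that at least
# one of them "calls it" is high even when each individual's chance is tiny:
# P(someone is right) = 1 - (1 - p)^n. Figures are purely illustrative.

def chance_someone_is_right(p_individual, n_forecasters):
    return 1 - (1 - p_individual) ** n_forecasters

# One pundit with a 1-in-500 shot at calling a surprise crash:
one = chance_someone_is_right(0.002, 1)       # roughly 0.002

# Two thousand pundits, same tiny individual odds: a "prophet" is
# close to guaranteed, and the winner looks like a genius.
crowd = chance_someone_is_right(0.002, 2000)

assert crowd > 0.95
```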

What makes this mass delusion possible is the different emphasis we put on predictions that hit and those that miss. We ignore misses, even when they lie scattered by the dozen at our feet; we celebrate hits, even when we have to hunt for them and pretend there was more to them than luck. This discrepancy is so extreme and pervasive, it even has a name. It's the "Jeane Dixon Effect," coined by the mathematician John Allen Paulos in honor of the American psychic. Dixon was renowned for having made several accurate predictions, including the assassination of President John F. Kennedy, or so the media often said. Close examination of these alleged hits suggests there is much less to them than meets the eye. But more importantly, the very long list of Dixon's misses-the USSR would put the first man on the moon, China would start a world war in 1958, Richard Nixon would win the election of 1960, et cetera-was simply ignored.

This is a little odd, when you think about it. We don't just want predictions, after all. We want accurate predictions. But it's impossible to judge the accuracy of predictions if we zero in on the hits and ignore the misses. So why is so much attention given to hits while misses usually are treated as trivia or ignored altogether?

CUI BONO?.

The most obvious factor is self-interest. When there's a hit, there is also a guy who wants you to know he nailed it; when there's a miss, the responsible party would very much like the prediction to be quietly forgotten. You can be sure Robert Zuccaro would prefer that his name not appear in this book.

The selective editing of predictions is particularly easy for newspaper columnists, who need only to stay on a topic when events confirm their predictions and switch when they don't-as veteran New York Times columnist Anthony Lewis did to great effect. Lewis started writing about population growth, food shortages, and the coming age of scarcity in the late 1960s. In 1969, he described Paul Ehrlich's predictions as "frighteningly convincing," and his praise for The Limits to Growth report-"One of the most important documents of our age!"-was emblazoned prominently on the cover of the paperback. When the oil embargo of 1973 caused gas stations to run dry, store shelves to empty, and prices to soar, Lewis declared it to be proof the Cassandras were right. "The authors [of The Limits to Growth] understandably find some grim vindication in events," he noted, before going on to quote Dennis Meadows saying things would only get worse. And again, in 1981, as the American economy sank into the most severe recession since the Great Depression and oil prices soared to previously unimaginable highs, Lewis gave his readers a stern I-told-you-so. "When the question of resources was introduced to public consciousness in the book The Limits to Growth, critics mocked its warning. They said economic mechanisms and human effort could always overcome scarcity. But all around us now, less than a decade later, we see the effects of resource limits. Oil is the obvious example. The pressure of demand, to which the producers reacted as the textbooks said they should, has made oil an expensive commodity. We know it is going to become more expensive." But it didn't become more expensive. In fact, four years later, the price of oil collapsed, and it stayed in the basement for two decades. But as far as I can tell, Lewis didn't write about that. Some of his other omissions are even more startling.

In 1974, with the food crisis mounting, Lewis wrote repeatedly about the dire situation in South Asia. In November 1974, for example, he wrote that "to avoid starvation deaths in the tens of millions, South Asia will depend increasingly on outside food aid. By early in the next century, on the population projections, the aid needed would equal total United States agricultural production." In 1975-the very next year-India did well enough to decline all international food aid. In the years that