SuperFreakonomics

Part 9

This isn't to say there's no difference between the best and worst doctors in the ER. (And no, we're not going to name them.) In a given year, an excellent ER doctor's patients will have a twelve-month death rate that is nearly 10 percent lower than the average. This may not sound like much, but in a busy ER with tens of thousands of patients, an excellent doctor might save six or seven lives a year relative to the worst doctor.
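To see how that arithmetic might work, here is a back-of-the-envelope sketch; the patient volume and baseline death rate below are illustrative assumptions, not figures from the study. If the best doctor's patients die at a rate roughly 10 percent below the average and the worst doctor's roughly 10 percent above it, the gap between best and worst is about 20 percent of the baseline rate.

```latex
% Illustrative assumptions only (not from the study):
% N = patients per doctor per year, p = average twelve-month death rate
\[
\text{lives saved} \approx N \times p \times 0.20
  = 6{,}500 \times 0.005 \times 0.20 = 6.5 \text{ per year}
\]
```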

Interestingly, health outcomes are largely uncorrelated with spending. This means the best doctors don't spend any more money (on tests, hospital admissions, and so on) than the lesser doctors. This is worth pondering in an era when higher health-care spending is widely thought to produce better health-care outcomes. In the United States, the health-care sector accounts for more than 16 percent of GDP, up from 5 percent in 1960, and is projected to reach 20 percent by 2015.

So what are the characteristics of the best doctors?

For the most part, our findings aren't very surprising. An excellent doctor is disproportionately likely to have attended a top-ranked medical school and served a residency at a prestigious hospital. More experience is also valuable: an extra ten years on the job yields the same benefit as having served a residency at a top hospital.

And oh yes: you also want your ER doctor to be a woman. It may have been bad for America's schoolchildren when so many smart women passed up teaching jobs to go to medical school, but it's good to know that, in our analysis at least, such women are slightly better than their male counterparts at keeping people alive.

One factor that doesn't seem to matter is whether a doctor is highly rated by his or her colleagues. We asked Feied and the other head physicians at WHC to name the best docs in the ER. The ones they chose turned out to be no better than average at lowering death rates. They were, however, good at spending less money per patient.

So the particular doctor you draw in the ER does matter-but, in the broader scheme of things, not nearly as much as other factors: your ailment, your gender (women are much less likely than men to die within a year of visiting the ER), or your income level (poor patients are much more likely to die than rich ones).

The best news is that most people who are rushed to the ER and think they are going to die are in little danger of dying at all, at least not any time soon.

In fact, they might have been better off if they had simply stayed home. Consider the evidence from a series of widespread doctor strikes in Los Angeles, Israel, and Colombia. It turns out that the death rate dropped significantly in those places, anywhere from 18 percent to 50 percent, when the doctors stopped working!

This effect might be partially explained by patients' putting off elective surgery during the strike. That's what Craig Feied first thought when he read the literature. But he had a chance to observe a similar phenomenon firsthand when a lot of Washington doctors left town at the same time for a medical convention. The result: an across-the-board drop in mortality.

"When there are too many physician-patient interactions, the amplitude gets turned up on everything," he says. "More people with nonfatal problems are taking more medications and having more procedures, many of which are not really helpful and a few of which are harmful, while the people with really fatal illnesses are rarely cured and ultimately die anyway."

So it may be that going to the hospital slightly increases your odds of surviving if you've got a serious problem but increases your odds of dying if you don't. Such are the vagaries of life.

Meanwhile, there are some ways to extend your life span that have nothing to do with going to the hospital. You could, for instance, win a Nobel Prize. An analysis covering fifty years of the Nobels in chemistry and physics found that the winners lived longer than those who were merely nominated. (So much for the Hollywood wisdom of "It's an honor just to be nominated.") Nor was the winners' longevity a function of the Nobel Prize money. "Status seems to work a kind of health-giving magic," says Andrew Oswald, one of the study's authors. "Walking across that platform in Stockholm apparently adds about two years to a scientist's life span."

You could also get elected to the Baseball Hall of Fame. A similar analysis shows that men who are voted into the Hall outlive those who are narrowly omitted.

But what about those of us who aren't exceptional at science or sport? Well, you could purchase an annuity, a contract that pays off a set amount of income each year but only as long as you stay alive. People who buy annuities, it turns out, live longer than people who don't, and not because the people who buy annuities are healthier to start with. The evidence suggests that an annuity's steady payout provides a little extra incentive to keep chugging along.

Religion also seems to help. A study of more than 2,800 elderly Christians and Jews found that they were more likely to die in the thirty days after their respective major holidays than in the thirty days before. (One piece of evidence pointing to a causal link: Jews had no aversion to dying in the thirty days before a Christian holiday, nor did Christians disproportionately outlast the Jewish holidays.) In a similar vein, longtime friends and rivals Thomas Jefferson and John Adams each valiantly struggled to forestall death until they'd reached an important landmark. They expired within fifteen hours of each other on July 4, 1826, the fiftieth anniversary of the adoption of the Declaration of Independence.

Holding off death by even a single day can sometimes be worth millions of dollars. Consider the estate tax, which is imposed on the taxable estate of a person upon his or her death. In the United States, the rate in recent years was 45 percent, with an exemption for the first $2 million. In 2009, however, the exemption jumped to $3.5 million-which meant that the heirs of a rich, dying parent had about 1.5 million reasons to console themselves if said parent died on the first day of 2009 rather than the last day of 2008. With this incentive, it's not hard to imagine such heirs giving their parent the best medical care money could buy, at least through the end of the year. Indeed, two Australian scholars found that when their nation abolished its inheritance tax in 1979, a disproportionately high number of people died in the week after the abolition as compared with the week before.
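In plain arithmetic, using the figures above: the "1.5 million reasons" are the extra exemption itself, and the heirs' actual cash saving is the 45 percent tax no longer owed on it.

```latex
\[
\$3.5\,\text{million} - \$2\,\text{million} = \$1.5\,\text{million extra exemption}
\]
\[
0.45 \times \$1.5\,\text{million} = \$675{,}000 \text{ in tax avoided by dying a day later}
\]
```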

For a time, it looked as if the U.S. estate tax would be temporarily abolished for one year, in 2010. (This was the product of a bipartisan hissy fit in Washington, which, as of this writing, appears to have been resolved.) If the tax had been suspended, a parent worth $100 million who died in 2010 could have passed along all $100 million to his or her heirs. But, with a scheduled resumption of the tax in 2011, such heirs would have surrendered more than $40 million if their parent had the temerity to die even one day too late. Perhaps the bickering politicians decided to smooth out the tax law when they realized how many assisted suicides they might have been responsible for during the waning weeks of 2010.
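Where the "more than $40 million" comes from, assuming for simplicity that the 2011 tax would have carried the 2009 parameters (a 45 percent rate and a $3.5 million exemption); the actual scheduled reversion used different numbers, but any plausible parameters clear $40 million:

```latex
\[
0.45 \times (\$100\,\text{million} - \$3.5\,\text{million}) \approx \$43.4\,\text{million owed}
\]
```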

Most people want to fend off death no matter the cost. More than $40 billion is spent worldwide each year on cancer drugs. In the United States, they constitute the second-largest category of pharmaceutical sales, after heart drugs, and are growing twice as fast as the rest of the market. The bulk of this spending goes to chemotherapy, which is used in a variety of ways and has proven effective on some cancers, including leukemia, lymphoma, Hodgkin's disease, and testicular cancer, especially if these cancers are detected early.

But in most other cases, chemotherapy is remarkably ineffective. An exhaustive analysis of cancer treatment in the United States and Australia showed that the five-year survival rate for all patients was about 63 percent but that chemotherapy contributed barely 2 percent to this result. There is a long list of cancers for which chemotherapy had zero discernible effect, including multiple myeloma, soft-tissue sarcoma, melanoma of the skin, and cancers of the pancreas, uterus, prostate, bladder, and kidney.

Consider lung cancer, by far the most prevalent fatal cancer, killing more than 150,000 people a year in the United States. A typical chemotherapy regimen for non-small-cell lung cancer costs more than $40,000 but helps extend a patient's life by an average of just two months. Thomas J. Smith, a highly regarded oncology researcher and clinician at Virginia Commonwealth University, examined a promising new chemotherapy treatment for metastasized breast cancer and found that each additional year of healthy life gained from it would cost $360,000, if such a gain could actually be had. Unfortunately, it couldn't: the new treatment typically extended a patient's life by less than two months.
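As a rough check on those figures (not a calculation from the studies themselves), treat "two months" as one-sixth of a year:

```latex
% Lung cancer: $40,000 buys about two months of added life
\[
\$40{,}000 \div \tfrac{1}{6}\,\text{year} = \$240{,}000 \text{ per life-year}
\]
% Breast cancer: at $360,000 per healthy year, a two-month gain costs
\[
\$360{,}000 \times \tfrac{1}{6} = \$60{,}000 \text{ per patient}
\]
```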

Costs like these put a tremendous strain on the entire health-care system. Smith points out that cancer patients make up 20 percent of Medicare cases but consume 40 percent of the Medicare drug budget.

Some oncologists argue that the benefits of chemotherapy aren't necessarily captured in the mortality data, and that while chemotherapy may not help nine out of ten patients, it may do wonders for the tenth. Still, considering its expense, its frequent lack of efficacy, and its toxicity-nearly 30 percent of the lung-cancer patients on one protocol stopped treatment rather than live with its brutal side effects-why is chemotherapy so widely administered?

The profit motive is certainly a factor. Doctors are, after all, human beings who respond to incentives. Oncologists are among the highest-paid doctors, their salaries increasing faster than any other specialists', and they typically derive more than half of their income from selling and administering chemotherapy drugs. Chemotherapy can also help oncologists inflate their survival-rate data. It may not seem all that valuable to give a late-stage victim of lung cancer an extra two months to live, but perhaps the patient was only expected to live four months anyway. On paper, this will look like an impressive feat: the doctor extended the patient's remaining life by 50 percent.

Tom Smith doesn't discount either of these reasons, but he provides two more.

It is tempting, he says, for oncologists to overstate, or perhaps over-believe in, the efficacy of chemotherapy. "If your slogan is 'We're winning the war on cancer,' that gets you press and charitable donations and money from Congress," he says. "If your slogan is 'We're still getting our butts kicked by cancer but not as bad as we used to,' that's a different sell. The reality is that for most people with solid tumors (brain, breast, prostate, lung), we aren't getting our butts kicked as badly, but we haven't made much progress."

There's also the fact that oncologists are, once again, human beings who have to tell other human beings they are dying and that, sadly, there isn't much to be done about it. "Doctors like me find it incredibly hard to tell people the very bad news," Smith says, "and how ineffective our medicines sometimes are."

If this task is so hard for doctors, surely it must also be hard for the politicians and insurance executives who subsidize the widespread use of chemotherapy. Despite the mountain of negative evidence, chemotherapy seems to afford cancer patients their last, best hope to nurse what Smith calls "the deep and abiding desire not to be dead." Still, it is easy to envision a point in the future, perhaps fifty years from now, when we collectively look back at the early twenty-first century's cutting-edge cancer treatments and say: We were giving our patients what?

The age-adjusted mortality rate for cancer is essentially unchanged over the past half-century, at about 200 deaths per 100,000 people. This is despite President Nixon's declaration of a "war on cancer" more than thirty years ago, which led to a dramatic increase in funding and public awareness.

Believe it or not, this flat mortality rate actually hides some good news. Over the same period, age-adjusted mortality from cardiovascular disease has plummeted, from nearly 600 people per 100,000 to well beneath 300. What does this mean?

Many people who in previous generations would have died from heart disease are now living long enough to die from cancer instead. Indeed, nearly 90 percent of newly diagnosed lung-cancer victims are fifty-five or older; the median age is seventy-one.

The flat cancer death rate obscures another hopeful trend. For people twenty and younger, mortality has fallen by more than 50 percent, while people aged twenty to forty have seen a decline of 20 percent. These gains are real and heartening-all the more so because the incidence of cancer among those age groups has been increasing. (The reasons for this increase aren't yet clear, but among the suspects are diet, behaviors, and environmental factors.)

With cancer killing fewer people under forty, fighting two wars must surely be driving the death toll higher for young people, no?

From 2002 to 2008, the United States was fighting bloody wars in Afghanistan and Iraq; among active military personnel, there was an average of 1,643 fatalities per year. But over the same stretch of time in the early 1980s, with the United States fighting no major wars, there were more than 2,100 military deaths per year. How can this possibly be?

For one, the military used to be much larger: 2.1 million on active duty in 1988 versus 1.4 million in 2008. But even the rate of death in 2008 was lower than in certain peacetime years. Some of this improvement is likely due to better medical care. But a surprising fact is that the accidental death rate for soldiers in the early 1980s was higher than the death rate by hostile fire for every year the United States has been fighting in Afghanistan and Iraq. It seems that practicing to fight a war can be just about as dangerous as really fighting one.

And, to further put things in perspective, think about this: since 1982, some 42,000 active U.S. military personnel have been killed-roughly the same number of Americans who die in traffic accidents in a single year.

If someone smokes two packs of cigarettes a day for thirty years and dies of emphysema, at least you can say he brought it on himself and got to enjoy a lifetime of smoking.

There is no such consolation for the victim of a terrorist attack. Not only is your demise sudden and violent but you did nothing to earn it. You are collateral damage; the people who killed you neither knew nor cared a whit about your life, your accomplishments, your loved ones. Your death was a prop.

Terrorism is all the more frustrating because it is so hard to prevent, since terrorists have a virtually unlimited menu of methods and targets. Bombs on a train. An airplane crashed into a skyscraper. Anthrax sent through the mail. After an attack like 9/11 in the United States or 7/7 in London, a massive amount of resources is inevitably deployed to shield the most precious targets, but there is a Sisyphean element to such a task. Rather than walling off every target a terrorist might attack, what you'd really like to do is figure out who the terrorists are before they strike and throw them in jail.

The good news is there aren't many terrorists. This is a natural conclusion if you consider the relative ease of carrying out a terrorist attack and the relative scarcity of such attacks. There has been a near absence of terrorism on U.S. soil since September 11; in the United Kingdom, terrorists are probably more prevalent but still exceedingly rare.

The bad news is the scarcity of terrorists makes them hard to find before they do damage. Anti-terror efforts are traditionally built around three activities: gathering human intelligence, which is difficult and dangerous; monitoring electronic "chatter," which can be like trying to sip from a fire hose; and following the international money trail-which, considering the trillions of dollars sloshing around the world's banks every day, is like trying to sift the entire beach for a few particular grains of sand. The nineteen men behind the September 11 attacks funded their entire operation with $303,671.62, or less than $16,000 per person.

Might there be a fourth tactic that could help find terrorists?

Ian Horsley* believes there may. He doesn't work in law enforcement, or in government or the military, nor does anything in his background or manner suggest he might be the least bit heroic. He grew up in the heart of England, the son of an electrical engineer, and is now well into middle age. He still lives happily far from the maddening thrum of London. While perfectly affable, he isn't outgoing or jolly by any measure; Horsley is, in his own words, "completely average and utterly forgettable."

Growing up, he thought he might like to be an accountant. But he left school when his girlfriend's father helped him get a job as a bank cashier. He took on new positions at the bank as they arose, none of them particularly interesting or profitable. One job, in computer programming, turned out to be a bit more intriguing because it gave him "a fundamental understanding of the underlying database that the bank operates on," he says.

Horsley proved to be diligent, a keen observer of human behavior, and a man who plainly knew right from wrong. Eventually he was asked to sniff out fraud among bank employees, and in time he graduated to consumer fraud, which was a far wider threat to the bank. U.K. banks lose about $1.5 billion annually to such fraud. In recent years, it had been facilitated by two forces: the rise of online banking and the fierce competition among banks to snag new business.

For a time, money was so cheap and credit so easy that anyone with a pulse, regardless of employment or citizenship or creditworthiness, could walk into a British bank and walk out with a debit card. (In truth, even a pulse wasn't necessary: fraudsters were happy to use the identities of dead and fictional people as well.) Horsley learned the customs of various subgroups. West African immigrants were master check washers, while Eastern Europeans were the best identity thieves. Such fraudsters were relentless and creative: they would track down a bank's call center and linger outside until an employee exited, offering a bribe for customers' information.

Horsley built a team of data analysts and profilers who wrote computer programs that could crawl through the bank's database and detect fraudulent activity. The programmers were good. The fraudsters were also good, and nimble too, devising new scams as soon as old ones were compromised. These rapid mutations sharpened Horsley's ability to think like a fraudster. Even in his sleep, his mind cruised through billions upon billions of bank data points, seeking out patterns that might betray wrongdoing. His algorithms got tighter and tighter.
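The book doesn't disclose Horsley's actual algorithms, but the general approach it describes, programs that crawl transaction records looking for patterns that betray fraud, might be sketched like this. Everything below (the rules, thresholds, and field names) is a hypothetical illustration, not the bank's real system.

```python
# Toy sketch of rule-based fraud flagging over bank transactions.
# All rules, thresholds, and field names are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Transaction:
    account_id: str
    amount: float          # in pounds
    timestamp: datetime
    country: str           # where the transaction originated
    new_payee: bool        # first payment to this recipient?

def fraud_score(history: list[Transaction], tx: Transaction) -> float:
    """Score a transaction against the account's own history;
    higher scores mean more suspicious."""
    score = 0.0
    amounts = [t.amount for t in history] or [0.0]
    typical = sum(amounts) / len(amounts)
    # Rule 1: amount far above this account's norm.
    if tx.amount > 10 * max(typical, 1.0):
        score += 2.0
    # Rule 2: a first-ever payee receiving a large sum.
    if tx.new_payee and tx.amount > 1000:
        score += 1.5
    # Rule 3: activity from a country never seen on this account
    # (an empty history makes every country look new, by design).
    if tx.country not in {t.country for t in history}:
        score += 1.0
    # Rule 4: odd-hours activity (03:00-05:00).
    if 3 <= tx.timestamp.hour < 5:
        score += 0.5
    return score

def flag(history: list[Transaction], tx: Transaction,
         threshold: float = 2.5) -> bool:
    """Flag the transaction for human review above a tuned threshold."""
    return fraud_score(history, tx) >= threshold
```

In practice, rules like these would be tuned against confirmed fraud cases and rewritten whenever the fraudsters adapted, the same cat-and-mouse cycle the text describes.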

We had the good fortune to meet Ian Horsley at about this time and, jointly, we began to wonder: if his algorithms could sift through an endless stream of retail banking data and successfully detect fraudsters, might the same data be coaxed to identify other bad guys, like would-be terrorists?