
  There is nothing wrong with making mistakes in forecasting, of course. The world is complex and there are many uncertainties, particularly in the economic arena. Indeed, there was something intellectually courageous about the group choosing to make their predictions so public in the first place. Certainly, the violation of their expectations handed them a gilt-edged opportunity to revise or enrich their theoretical assumptions. After all, that is what failure means.

  But how did the signatories actually react? In October 2014, Bloomberg, the media company, invited them to reflect on the content of their letter in the light of subsequent events.9 What is striking about the responses (nine of the signatories accepted the request for interview*) was not that these thinkers attempted to explain why the prediction had failed, or what they had learned; rather, it was that they didn’t think the prediction had failed at all.

  Indeed, many of them thought they had got their analysis exactly right.

  David Malpass, former deputy assistant Treasury secretary, said: “The letter was correct as stated.”

  John Taylor, professor of economics at Stanford University, said: “The letter mentioned several things—the risk of inflation, employment, it would destroy financial markets, complicate the Fed’s effort to normalize monetary policy—and all have happened.”

  Jim Grant, publisher of Grant’s Interest Rate Observer, said: “People say, you guys are all wrong because you predicted inflation and it hasn’t happened. I think there’s plenty of inflation—not at the checkout counter, necessarily, but on Wall Street.”

  It was almost as if they were looking at a different economy.

  Others argued that the prediction may not have materialized yet, but it soon would. Douglas Holtz-Eakin, former director of the Congressional Budget Office, said: “They are going to generate an uptick in core inflation. They are going to go above 2 percent. I don’t know when, but they will.”

  This last response is certainly true in the sense that inflation will rise, perhaps sharply, above its recent historic lows. But it is also reminiscent of the fan of Brentford Football Club who predicted at the beginning of the 2012 to 2013 season that his team would win the FA Cup. When they were knocked out by Chelsea, he was asked what had gone wrong with his prediction. He said: “I said they would win the FA Cup, but I didn’t say when.”

  This example is yet another illustration of the reach of cognitive dissonance. Dissonance is not just about Tony Blair, or doctors, or lawyers, or members of religious cults; it is also about world-famous business leaders, historians, and economists. Ultimately, it concerns how our culture’s stigmatizing attitude toward error undermines our capacity to see evidence in a clear-eyed way. It is about big decisions and small judgments: indeed, anything that threatens one’s self-esteem.

  A quick personal example. When I was in the process of writing this chapter, I joined a gym a few miles from where I live. It was an expensive membership and my wife warned that I wouldn’t use it because of the long journey. She pointed out that a less-expensive gym next door to our house would be a much better bet. She worried that the travel time would eat into the day. I disagreed.

  Day after day at the end of work I would drive over to the gym. The journey was increasingly time-consuming. Sometimes it took more than thirty minutes. I found myself rushing there and back while my wife enjoyed the proximity of the gym next door. The tougher the journey, the more I kept traveling over. It took me a year to realize that all these constant trips were attempts at justifying my original decision. I didn’t want to admit that it was a mistake to join in the first place.

  My wife, who read an early draft of this chapter, smiled after one such trip. “Cognitive dissonance,” she suggested. And she was right. Twelve months after paying an expensive membership fee, I finally joined the gym next door. Had I admitted my mistake sooner, I would have saved twelve months of frustration. But my ego just wouldn’t let me. It was too difficult to admit that I had been wrong all along—and that I had wasted a lot of money.

  This may sound like a trivial example, but it reveals the scope of cognitive dissonance. Think back to the various examples touched upon so far in the book, which involved decisions of far greater magnitude—and thus a bigger threat to self-esteem. An accident in an operating room became “one of those things”; an exonerating DNA test pointed to an “unindicted co-ejaculator”; the failure of an apocalyptic prophecy proved that “God has been appeased by our actions.”

  For the signatories to the open letter to Bernanke, the same analysis applies. The failure of an economic prediction showed not that they were mistaken, but that they were right all along. If inflation had soared, they would doubtless have taken this as a vindication. And yet they also felt entitled to claim success when inflation stayed low, just as Blair claimed vindication for his strategy in Iraq when events flatly contradicted his initial expectations. Heads I win; tails I don’t lose.

  It is probably fair to say that economics, as a subject, has a particular problem with its attitude to failure. It is not just the signatories to the letter, but the wider culture. As an economics student in the early 1990s I observed how many of us split into rival schools, such as Keynesians or Monetarists, at an early stage of the course. The decision to join one group or another was often based on the flimsiest of pretexts, but it had remarkably long-term consequences. Very few economists alter their ideological stance. They stick to it for life.

  A poll (albeit a straw one) of economists revealed that fewer than 10 percent change “schools” during their careers, or “significantly adapt” their theoretical assumptions.* Professor Sir Terry Burns, a former economic adviser to Margaret Thatcher (who later became chairman of Santander UK), told me: “It is roughly as common as Muslims converting to Christianity or vice versa.”

  This is surely a warning sign that instead of learning from data, some economists are spinning it. It hints at the suspicion that the intellectual energy of some of the world’s most formidable thinkers is directed, not at creating new, richer, more explanatory theories, but at coming up with ever-more-tortuous rationalizations as to why they were right all along.

  And this takes us back to perhaps the most paradoxical aspect of cognitive dissonance. It is precisely those thinkers who are most renowned, who are famous for their brilliant minds, who have the most to lose from mistakes. And that is why it is often the most influential people, those who ought to be in the best position to help the world learn from new evidence, who have the greatest incentive to reframe it. These are also the kinds of people (or institutions) who often have the capacity to employ expensive PR firms to bolster their post hoc justifications. They have the financial means, in addition to a powerful subconscious urge, to bridge the gap between beliefs and evidence, not by learning, but by spinning. It is the equivalent of a golfer hitting the ball out of bounds and then hiring a slick PR company to convince the world that it had nothing to do with him: it was a sudden gust of wind!

  Perhaps this phenomenon was most vividly revealed in a celebrated study by Philip Tetlock, a psychologist from the University of Pennsylvania. In 1985 Tetlock invited 284 experts to assign probabilities that particular, well-defined events would occur in the not-too-distant future.10 All were acknowledged leaders in their fields, with more than half holding PhDs. The events included such possibilities as “Would Gorbachev be ousted in a coup?” and “Would there be a nonviolent end to apartheid in South Africa?” All told, he gathered thousands of predictions.

  A few years later Tetlock compared the predictions with what actually happened. He found that the predictions of experts were somewhat better than those of a group of undergraduates, but not by much. This is not surprising. The world is complex. Even for well-informed experts, it is not easy to say what will happen when there are lots of variables interacting in dynamic ways. As Tetlock put it: “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly.”

  But perhaps the most striking finding of all was that the celebrated experts, the kinds of people who tour television studios and go on book tours, were the worst of all. As Tetlock put it: “Ironically, the more famous the expert, the less accurate his or her predictions tended to be.”

  Why is this? Cognitive dissonance gives us the answer. It is those who are the most publicly associated with their predictions, whose livelihoods and egos are bound up with their expertise, who are most likely to reframe their mistakes—and who are thus the least likely to learn from them.

  These findings have huge implications not just for economics, health care, and the law, but for business, too. After all, you might suppose that the higher up you go in a company, the less you will see the effects of cognitive dissonance. Aren’t the people who get to the top of big companies supposed to be rational, forensic, and clear-sighted? Isn’t that supposed to be their defining characteristic?

  In fact, the opposite is the case. In his seminal book, Why Smart Executives Fail: And What You Can Learn from Their Mistakes, Sydney Finkelstein, a management professor at Dartmouth College, investigated major failures at more than fifty corporate institutions.11 He found that error-denial increases as you go up the pecking order.

  As Finkelstein put it: “Ironically enough, the higher people are in the management hierarchy, the more they tend to supplement their perfectionism with blanket excuses, with CEOs usually being the worst of all. For example, in one organization we studied, the CEO spent the entire forty-five-minute interview explaining all the reasons why others were to blame for the calamity that hit his company. Regulators, customers, the government, and even other executives within the firm—all were responsible. No mention was made, however, of personal culpability.”

  The reason should by now be obvious. It is those at the top of business who are responsible for strategy and therefore have the most to lose if things go wrong. They are far more likely to cling to the idea that the strategy is wise, even as it is falling apart, and to reframe any evidence that says otherwise. Blinded by dissonance, they are also the least likely to learn the lessons.

  IV

  A common misperception of the theory of cognitive dissonance is that it is about external incentives. People have a lot to lose if they get their judgments wrong; doesn’t it therefore make sense that they would want to reframe them? The idea here is that the learning advantage of adapting to a mistake is outweighed by the reputational disadvantage of admitting to it.

  But this perspective does not encompass the full influence of cognitive dissonance. The problem is not just the external incentive structure, it is the internal one. It is the sheer difficulty that we have in admitting our mistakes even when we are incentivized to do so.

  To see this most clearly, consider the so-called disposition effect, a well-studied phenomenon in the field of behavioral finance. Say you have a portfolio of shares, some of which have lost money, and some of which have gained. Which are you likely to sell? And which are you likely to keep?

  A rational person should keep those shares most likely to appreciate in the future while selling those likely to depreciate. Indeed, this is what you must do if you are attempting to maximize your financial return. The stock market rewards those who buy low and sell high.

  But we are actually more likely to keep the shares that have lost money, regardless of their future prospects. Why? Because we hate to crystallize a loss. The moment a losing stock is sold, a paper loss becomes a real loss. It is unambiguous evidence that the decision to buy that stock in the first place was a mistake. This is why people hold on to losing stocks far too long, desperately hoping they will rebound.

  But when it comes to winning stocks, everything changes. Suddenly there is a subconscious desire to lock in the gain. After all, when you sell a winning stock you have bona fide proof that your initial judgment was right. It is a vindication. This is why there is a bias toward selling winning stocks, even when they might rise further in the future, robbing you of all that additional gain.

  A study by Terrance Odean, professor of finance at UC Berkeley, found that the winning stocks investors sold outperformed the losing stocks they didn’t sell by 3.4 percent. In other words, people were holding on to losing stocks too long because they couldn’t bring themselves to admit they had made a mistake. Even professional stock pickers—supposedly ultra-rational people who operate according to cold, hard logic—are susceptible: they tend to hold losing stocks around 25 percent longer than winning stocks.12
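  To see why this asymmetry is costly, here is a toy sketch in Python. It is my own illustration, not Odean’s data: the portfolio, the paper gains, and the assumed returns over the following year are made up, chosen only so that recent winners go on to do slightly better than recent losers, which is what his finding implies.

# Toy illustration of the disposition effect (assumed, made-up numbers).
# Each holding has a paper gain or loss so far, plus an assumed return
# over the following year in which recent winners keep doing a bit better.
portfolio = {
    "winner_A": {"gain_so_far": +0.20, "next_year": +0.05},
    "winner_B": {"gain_so_far": +0.10, "next_year": +0.04},
    "loser_C":  {"gain_so_far": -0.15, "next_year": +0.01},
    "loser_D":  {"gain_so_far": -0.25, "next_year": +0.02},
}

# Disposition-effect rule: sell the winners (bank the "vindication"),
# keep the losers (avoid crystallizing the mistake).
kept_disposition = [s for s, d in portfolio.items() if d["gain_so_far"] < 0]

# The opposite rule: keep the winners, sell the losers.
kept_opposite = [s for s, d in portfolio.items() if d["gain_so_far"] > 0]

def avg_future_return(kept):
    return sum(portfolio[s]["next_year"] for s in kept) / len(kept)

print("disposition rule keeps", kept_disposition,
      "-> average return next year:", avg_future_return(kept_disposition))
print("opposite rule keeps   ", kept_opposite,
      "-> average return next year:", avg_future_return(kept_opposite))

  With these assumed numbers the disposition rule ends up holding stocks that return 1.5 percent on average over the following year, while the opposite rule holds stocks returning 4.5 percent: a gap of the same order as the 3.4 percent Odean observed.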

  But avoiding failure in the short term has an inevitable outcome: we lose bigger in the longer term. This is, in many ways, a perfect metaphor for error-denial in the world today: the external incentives—even when they reward a clear-eyed analysis of failure—are often overwhelmed by the internal urge to protect self-esteem. We spin the evidence even when it costs us.

  Confirmation bias is another of the psychological quirks associated with cognitive dissonance. The best way to see its effects is to consider the following sequence of numbers: 2, 4, 6. Suppose that you have to discover the underlying pattern in this sequence. Suppose, further, that you are given an opportunity to propose alternative sets of three numbers to explore the possibilities.

  Most people playing this game come up with a hypothesis pretty quickly. They guess, for example, that the underlying pattern is “even numbers ascending sequentially.” There are other possibilities, of course. The pattern might just be “even numbers.” Or “the third number is the sum of the first two.” And so on.

  The key question is, How do you establish whether your initial hunch is right? Most people simply try to confirm their hypothesis. So, if they think the pattern is “even numbers ascending sequentially,” they will propose “10, 12, 14” and when this is confirmed, they will propose “100, 102, 104.” After three such tests most people are pretty certain that they have found the answer.

  And yet they may be wrong. If the pattern is actually “any ascending numbers,” their guesses will not help them. Had they used a different strategy, on the other hand, attempting to falsify their hypothesis rather than confirm it, they would have discovered this far quicker. If they had, say, proposed 4, 6, 11 (which fits the pattern), they would have found that their initial hunch was wrong. If they had followed up with, say, 5, 2, 1 (which doesn’t fit), they would now be getting pretty warm.
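  To make the logic concrete, here is a minimal sketch of the game in Python. It is my own illustration rather than part of the original experiment: I assume the hidden rule is “any ascending numbers” and the player’s hunch is “even numbers ascending by two,” as in the example above.

# Toy version of the 2, 4, 6 game (hidden rule and hunch assumed as above).
def hidden_rule(a, b, c):
    return a < b < c  # the experimenter's actual pattern: any ascending numbers

def hunch(a, b, c):
    return a % 2 == 0 and b == a + 2 and c == b + 2  # the player's guess

confirming_tests = [(10, 12, 14), (100, 102, 104)]  # chosen to fit the hunch
falsifying_tests = [(4, 6, 11), (5, 2, 1)]          # chosen to violate the hunch

for triple in confirming_tests + falsifying_tests:
    fits_rule = hidden_rule(*triple)
    fits_hunch = hunch(*triple)
    verdict = ("hunch survives this test" if fits_rule == fits_hunch
               else "hunch exposed as wrong")
    print(triple, "rule:", fits_rule, "hunch:", fits_hunch, "->", verdict)

  Run as written, the confirming tests can never reveal the problem, because everything the hunch accepts the real rule accepts too; only the 4, 6, 11 test, which deliberately breaks the hunch, exposes the mismatch.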

  As Paul Schoemaker, research director of the Mack Institute for Innovation Management at the Wharton School of the University of Pennsylvania, puts it:

  The pattern is rarely uncovered unless subjects are willing to make mistakes—that is, to test numbers that violate their belief. Instead most people get stuck in a narrow and wrong hypothesis, as often happens in real life, such that their only way out is to make a mistake that turns out not to be a mistake after all. Sometimes, committing errors is not just the fastest way to the correct answer; it’s the only way. College students presented with this experiment were allowed to test as many sets of three numbers as they wished. Fewer than 10 percent discovered the pattern.13

  This is confirmation bias in action, and it is eerily reminiscent of early medicine (where doctors interpreted any outcome in their patients as an affirmation of bloodletting). It provides another reason why the scientific mindset, with a healthy emphasis on falsification, is so vital. It acts as a corrective to our tendency to spend our time confirming what we think we already know, rather than seeking to discover what we don’t know.

  As the philosopher Karl Popper wrote: “For if we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain . . . overwhelming evidence in favor of a theory which, if approached critically, would have been refuted.”14

  V

  For one final example, let us examine an incident that neatly draws together the various insights so far. It involved Peter Pronovost, the doctor we met in chapter 3 who cut central line infections from 11 percent to 0 at Johns Hopkins University Hospital by introducing an intensive care checklist.

  Early in his career, Pronovost, an anesthetist by training, was in the operating theater assisting with surgery on a patient suffering from a recurrent hernia.15 Ninety minutes into the operation the patient started wheezing, her face reddened, and her blood pressure plummeted. Pronovost strongly suspected that she had a latex allergy and that the surgeon’s latex gloves could be at fault.

  He administered a dose of epinephrine, the recommended drug, and her symptoms dissipated. He then advised the surgeon to change to an alternative pair of gloves, which were stored nearby. But the surgeon disagreed. “You’re wrong,” he said. “This can’t be a latex allergy. We have been operating for an hour and a half and the patient didn’t experience a reaction to latex during any of her previous procedures.”

  The stakes were now set. The surgeon had expressed his judgment. He was the boss, the captain in charge, the man at the pinnacle of the hierarchy. Any new evidence or argument from this point on was likely to be interpreted not as an opportunity to do what was right for the patient, but as a challenge to his competence and authority. In short, cognitive dissonance was now in play.

  Pronovost, however, didn’t drop his concern. He had a deep knowledge of allergies and tried to explain his reasoning. “Latex allergies often develop after a patient, like this one, has had multiple surgeries and they can start anytime during the case,” he said. “You just got into her abdomen and the latex only recently came in contact with her blood, which is why we didn’t see the reaction before.”