
  In America he was hugely relieved when he found out that he would be able to pursue his love of math. He was ultimately offered a post on a team with the seemingly banal name of the Applied Mathematics Panel. He found himself working out of a fourth-floor apartment a few streets away from the center of Harlem. It turned out to be a turning point in the war.12

  The panel was a group of brilliant mathematicians. Working on behalf of the military, they were given the job of analyzing a whole range of issues, such as the most effective pattern of torpedo launching and the aerodynamic efficiency of missiles. As the author David McRaney put it: “People walking by the apartment at the time had no idea that four stories above them some of the most important work in applied mathematics was tilting the scales of a global conflict.”13

  Much of the work was highly confidential and the papers produced by the panel remained classified for decades. But over recent years researchers have begun to piece together the contribution of these “soldier mathematicians” and discovered that it was vital to the outcome of the war. Wald’s involvement, which only came to light years later, was perhaps the most astonishing of all.

  He was asked by the military to help them with a crucial issue. Bomber aircraft in Europe were being asked to take huge risks. For certain periods of the conflict, the probability of a pilot surviving a tour of duty was little better than fifty-fifty. Kevin Wilson, the military historian, described these remarkable and brave men as “ghosts already.”14

  The wartime leaders realized that they needed to reinforce the planes with armor. This would help protect them from gunfire, from the ground and the air. The problem was that they could not armor the entire surface area, because the planes would become too heavy to fly and would lose maneuverability. Wald was brought in to prioritize the areas that needed armor most.

  He had lots of data to work from. To their credit, the air force had taken the trouble to examine returning aircraft to assess the extent of the damage, and how they might respond to it. This was black-box-style behavior. They were examining the data from adverse events in order to work out how to improve the safety of the aircraft.

  To the relief of the air-force command, the pattern seemed clear. Many of the planes were riddled with gunfire all over the wings and fuselage. But they were not being hit in the cockpit or tail. The longer the incident reporting continued, the clearer the pattern became.

  You can see the pattern in the diagram below.

  [Diagram: bullet holes on returning aircraft, clustered across the wings and fuselage, with almost none around the cockpit or tail.]

  The military command came up with what seemed like the perfect plan: they would place armor on the areas of the plane where there were holes. This is where the bullets were impacting and, therefore, where the planes needed additional protection. It was plain common sense. To those in positions of military leadership, it was the best way to shield their brave airmen from enemy fire.

  But Wald disagreed. He realized that the chiefs had neglected to take into account some key data. They were only considering the planes that had returned. They were not taking into account the planes that had not returned (i.e., the planes that had been shot down). The observable bullet holes suggested that the area around the cockpit and tail didn’t need reinforcing because it was never hit. In fact, the planes that were hit in these places were crashing because this was where they were most vulnerable.

  In effect, the holes in the returning aircraft represented areas where a bomber could take damage and still return home safely. They had survived precisely because they had not been hit in the cockpit and tail. The pattern of holes, far from indicating where the armor needed to be added to the aircraft, was actually revealing the areas where it did not.
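  To make the logic concrete, here is a minimal simulation sketch in Python. The section names and hit probabilities are invented for illustration; nothing here comes from Wald’s actual data. The point is simply that if hits to the cockpit and tail are far more likely to bring a plane down, the holes counted on returning aircraft will cluster on the wings and fuselage even when every section is struck equally often.

```python
import random

# Toy model of survivorship bias in bomber damage reports.
# All section names and lethality figures below are illustrative assumptions.
SECTIONS = ["wings", "fuselage", "cockpit", "tail"]
LETHALITY = {"wings": 0.10, "fuselage": 0.15, "cockpit": 0.80, "tail": 0.70}

def fly_sortie(hits_per_sortie=3):
    """Simulate one sortie: return (survived, list of sections hit)."""
    hits = [random.choice(SECTIONS) for _ in range(hits_per_sortie)]
    survived = all(random.random() > LETHALITY[section] for section in hits)
    return survived, hits

observed = {s: 0 for s in SECTIONS}  # holes counted on planes that came back
actual = {s: 0 for s in SECTIONS}    # holes across the whole fleet

for _ in range(10_000):
    survived, hits = fly_sortie()
    for section in hits:
        actual[section] += 1
        if survived:
            observed[section] += 1

print("Holes on returning planes:", observed)
print("Holes across all planes:  ", actual)
# The cockpit and tail look almost untouched in the 'observed' counts,
# even though every section is hit equally often: planes struck there
# rarely make it back to be counted.
```

  Reading only the observed counts, as the military chiefs did, points the armor at the wings and fuselage; comparing them with the counts across the whole fleet shows why Wald’s inference ran the other way.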

  The insight turned out to be of profound importance, not just to bomber command, but to the entire war effort.

  This is a powerful example because it reveals a couple of key things. The first is that you have to take into account all the data, including the data you cannot immediately see, if you are going to learn from adverse incidents. But it also emphasizes that learning from failure is not always easy, even in conceptual terms, let alone emotional terms. It takes careful thought and a willingness to pierce through the surface assumptions. Often, it means looking beyond the obvious data to glimpse the underlying lessons. This is not just true of learning in aviation, but in business, politics, and beyond.

  As Amy Edmondson of Harvard Business School has put it: “Learning from failure is anything but straightforward. The attitudes and activities required to effectively detect and analyze failures are in short supply in most companies, and the need for context-specific learning strategies is underappreciated. Organizations need new and better ways to go beyond lessons that are superficial.”15

  Wald’s analysis of bullet-riddled aircraft in World War II saved the lives of dozens of brave airmen. His seminal paper for the military was not declassified until July 1980, but can be found today via a simple search on Google. It is entitled: “A Method of Estimating Plane Vulnerability Based on Damage of Survivors.”16

  It wasn’t until after the war that Wald learned of the murder of eight of his nine family members at the hands of the Nazis. According to those who knew him best, the pain of the loss never left him. One of his closest friends wrote: “Even this cruel blow failed to make him embittered, although a certain sadness could be felt to be with him for the rest of his life.”17

  In the late 1940s, he managed to organize a passage to the United States for his older brother, Hermann, the sole family member to survive the Holocaust. His friends would testify that he took “great comfort” in the company of his brother, as well as in continuing work in mathematics at Columbia University.

  One hopes that this remarkable and gentle man also took comfort from the fact that his analytical insights played a crucial role in defeating the evil ideology that murdered his loved ones.

  He was a black box thinker par excellence.

  Chapter 3

  The Paradox of Success

  I

  At 3:25 p.m. on January 15, 2009, US Airways Flight 1549 was given clearance to take off from runway 4 of New York’s LaGuardia Airport.

  It was a clear afternoon and up in the cockpit Captain Chesley Sullenberger and First Officer Jeffrey Skiles ran through the checklists. They were looking forward to the trip. What neither of them realized was that they were about to embark on one of the most celebrated commercial flights of modern times.1

  Less than two minutes after takeoff, a flock of Canada geese suddenly loomed into view to the right of the plane. The speed of approach was so fast that the pilots had no chance to take evasive action. Two birds flew into the right engine and at least one more into the left.

  After a series of loud thuds, the plane seemed to come to a halt, followed by deathly silence. The engines had lost thrust. The pilots felt their pulses racing, their perception narrowing: the classic responses to danger. They were now 3,000 feet above New York in a 70-ton Airbus A320 with no power.

  They had to make a series of split-second decisions. They were offered a return to LaGuardia, then a rerouting to Teterboro, an airport in the New Jersey Meadowlands, nineteen miles away. Both options were rejected. The plane would not glide that far. It was dropping too fast.

  At 3:29 p.m. Sullenberger uttered the words that would create headlines around the world: “We’re going to be in the Hudson.”

  • • •

  In the opening part of this book we have focused on failure in two safety-critical areas: aviation and health care. We have looked at responses, attitudes, and investigations into failure. Now we will have a brief look at success, and our responses to that. By shining a light on how we get things right we will discover a little more about why we get things wrong.

  Sullenberger ultimately landed the plane, all 70 tons of it, on the Hudson River. It was a brilliantly judged maneuver. The captain was diligent in the aftermath, too. He walked through the cabin twice to ensure that all the passengers had exited onto the wings, lying inches above the surface of the river, before leaving his aircraft. There were no fatal injuries.

  His coolness mesmerized America. The then fifty-seven-year-old received a phone call from President-elect Obama. He was invited to the presidential inauguration. Time magazine listed him second in the Heroes & Icons section of its TIME 100 for 2009.2 Academics hailed a new kind of authentic heroism amid a superficial celebrity culture. To the public it was an episode of sublime individualism: one man’s skill and calmness under pressure saving more than a hundred lives.

  But aviation experts took a different view. They glimpsed a bigger picture. They cited not just Sullenberger’s individual brilliance but also the system in which he operates. Some made reference to Crew Resource Management. The division of responsibilities between Sullenberger and Skiles occurred seamlessly. Seconds after the bird strike, Sullenberger took control of the aircraft while Skiles checked the quick-reference handbook.

  Channels of communication were open until the very last seconds of the flight. Skiles called out airspeeds and altitudes to give his captain as much situational awareness as possible as the aircraft dropped. With just a few seconds to go until impact they were still talking. “Got any ideas?” Sullenberger asked. “Actually not,” replied Skiles.

  Other safety experts talked about the fly-by-wire technology (the sophisticated computerized flight-control system active in modern Airbus planes), which corrected the tilt of the plane inches from contact with the water. Still others credited checklists and clever ergonomic design, both of which assisted the crew as the pressure intensified after the bird strike.

  This was a fascinating discussion, which largely took place away from the watching public. But even this debate obscured the deepest truth of all. Checklists originally emerged from a series of crashes in the 1930s. Ergonomic cockpit design was born out of the disastrous series of accidents involving B-17s. Crew Resource Management emerged from the wreckage of United Airlines 173.

  This is the paradox of success: it is built upon failure.

  It is also instructive to examine the different public responses to McBroom and Sullenberger. McBroom, we should remember, was a brilliant pilot. His capacity to keep his nerve as the DC-8 careered down, flying between trees, avoiding an apartment block, finding the minimum impact force for a 90-ton aircraft hitting solid ground, probably saved the lives of a hundred people.

  After the accident, however, he was shunned. Although the prevailing attitude within aviation was largely driven by a desire to learn from the mistake, wider society rushed to stigmatize the man who had been at the controls when the mistake was made. People were outraged at how a trained pilot had crashed a perfectly adequate plane because he had allowed it to run out of fuel.

  He retired from flying shortly afterward. He and his wife separated within three years. At a reunion eight years before his death in 2004, he was described by Aimee Conner, a survivor of United Airlines 173, as “a very broken man . . . He was devastated. He lost his license. He lost his family. The rest of his life was just shattered.”

  His tragedy, if you can call it that, was to fly at a time when the limitations of human attention and effective communication were not fully understood. He flew United Airlines 173 with a latent error in the system: an error waiting to happen, just like Dr. Edwards and Dr. Anderton, two outstanding doctors, in an operating theater near North Marston more than twenty-five years later.

  The irony is that Sullenberger, feted by presidents, might have made precisely the same mistake under those circumstances. The fact that he didn’t, and emerged a hero, was for a simple but profound reason: the industry in which he operates had learned the lessons. It is both apt and revealing that Sullenberger, a modest and self-evidently decent man, has made exactly this point. In a television interview months after the miracle landing on the Hudson, he offered this beautiful gem of wisdom:

  Everything we know in aviation, every rule in the rule book, every procedure we have, we know because someone somewhere died . . . We have purchased at great cost, lessons literally bought with blood that we have to preserve as institutional knowledge and pass on to succeeding generations. We cannot have the moral failure of forgetting these lessons and have to relearn them.

  II

  These words of Sullenberger are worth reflecting upon because they offer the chance to radically reimagine failure. The idea that the successful safety record in aviation has emerged from the rubble of real-world accidents is vivid, paradoxical, and profound. It is also revelatory. For if one looks closely enough it is an insight echoed across almost every branch of human endeavor.

  Take science, a discipline where learning from failure is part of the method. This is a point that has been made by the philosopher Karl Popper, who suggested that science progresses through its vigilant response to its own mistakes. By making predictions that can be tested, a scientific theory is inherently vulnerable. This may seem like a weakness, but Popper realized that it is an incalculable strength.

  “The history of science, like the history of all human ideas, is a history of . . . error,” Popper wrote. “But science is one of the very few human activities—perhaps the only one—in which errors are systematically criticized and fairly often, in time, corrected. This is why we can say that, in science, we learn from our mistakes and why we can speak clearly and sensibly about making progress.”3

  In this context, consider the experiment (which is probably apocryphal) conducted by Galileo in sixteenth-century Italy. For many centuries the physics of Aristotle had dominated the world, a bit like the ideas of Galen dominating medicine. People had faith in the Greek thinker and, to a certain extent, it was considered impertinent to challenge him. Aristotle argued, among other things, that heavy objects fall faster than lighter ones, in direct proportion to weight.

  But was he right? Galileo conducted a test. He climbed the Leaning Tower of Pisa and dropped two balls of different masses. He found that the two objects fell with the same acceleration and, in that moment, revealed that Aristotle’s theory was flawed. To use the terminology of Popper, he had “falsified” Aristotle’s hypothesis.
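  A line of standard kinematics, not from the book itself, makes the point explicit: under constant gravitational acceleration the mass of the ball cancels out, so the fall time depends only on the height of the drop. The tower height used below is an approximate figure.

```latex
% Free fall from height h under constant gravity g, ignoring air resistance.
% Newton's second law: the mass m cancels, so both balls accelerate identically.
ma = mg \;\Rightarrow\; a = g
% Integrating twice gives the fall time, which depends only on h and g:
h = \tfrac{1}{2} g t^{2}
\;\Rightarrow\;
t = \sqrt{\frac{2h}{g}}
  \approx \sqrt{\frac{2 \times 55\,\mathrm{m}}{9.8\,\mathrm{m/s^{2}}}}
  \approx 3.4\,\mathrm{s}
```

  Had Aristotle been right that fall speed is proportional to weight, a ball ten times heavier would have reached the ground in roughly a tenth of the time; the two balls landing together is what falsified the claim.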

  This was a failure for Aristotle and a painful blow to his followers, many of whom were outraged by the experiment. But it was a profound victory for science. For if Aristotle was wrong, scientists were handed the impetus to figure out why and come up with new theories that, in turn, could be subjected to future falsification. This is, at least in part, how science progresses.*

  The same idea can be seen in relation to Einstein’s theory of relativity. In 1919 a British scientist named Arthur Eddington traveled to Africa to test one of relativity’s most novel claims: that light is attracted to heavy bodies. During an eclipse he took photographs of a distant star to see if he could detect the influence of gravity on the light rays coming toward Earth. Eddington’s experiment corroborated the theory.4 But the key point is that it might not have. Relativity was vulnerable to experimental falsification. It remains so to this day.5

  Compare this openness to failure with a pseudoscience like, say, astrology. Here, the predictions are hopelessly vague. On the day these words were written I looked at Horoscope.com to see its prediction for Libra. “Big changes are brewing at home or work,” it said. This may seem like a testable assertion, but pretty much anything that happens in the life of anybody, Libra or otherwise, fits the prediction. We all have changes “brewing” at home or work. This gives astrology a seductive strength: it is never “wrong.” But the price it pays for immunity from failure is high indeed: it cannot learn. Astrology has not changed in any meaningful way for over two centuries.

  Or take the theory, popular in the nineteenth century, that the world was created in 4004 BC. This seemed to have been disproved by the discovery of fossils, as well as by the later evidence of carbon dating. The new data pointed to the almost indisputable fact that the universe is substantially more than six thousand years old.

  But in the nineteenth century a British naturalist named Philip Henry Gosse published a book called Omphalos in which he attempted to defend the creationist theory. His argument was nothing if not inventive. He asserted that the world had indeed been created in 4004 BC, but that God had created lots of apparent fossils at the same time so as to make the world look older than it actually is. He also argued that Adam had been given a navel by God in order to give him the appearance of human ancestry when he was really created out of mud (omphalos, the title of his book, is the Greek word for “navel”).6

  In one way, Gosse had defended the theory of creationism in 4004 BC. His post hoc maneuver meant that the facts once again tallied with the theory. But he had done something else, too. He had made the theory invulnerable to failure. No amount of evidence, no amount of data, no amount of discovery could refute Gosse’s position. Any new information suggesting that the world was older than 4004 BC would simply be held up as further evidence that God had played a trick on the world. The theory was confirmed, come what may. But this also meant that it could never adapt to meet the challenge of new evidence.

  The same story can be told about the psychotherapeutic theories of Alfred Adler. These were very much in vogue in the 1920s and still have a lingering influence today. The central idea is that of the “inferiority complex”: the notion that behavior emerges from a desire to prove oneself.

  In 1919, Karl Popper met Adler personally to talk about a case that didn’t seem to fit his theories at all. The specifics of the case are less important than Adler’s response. Popper wrote:

  He [Adler] found no difficulty in analyzing [it] in terms of his theory of inferiority feelings, although he had not even seen the child. Slightly shocked, I asked him how he could be so sure. “Because of my thousand-fold experience,” he replied; whereupon I could not help saying: “And with this new case, I suppose, your experience has become thousand-and-one-fold.”7