But these statistics, while shocking, almost certainly underestimate the true scale of the problem. In 2013 a study published in the Journal of Patient Safety8 put the number of premature deaths associated with preventable harm at more than 400,000 per year. (Categories of avoidable harm include misdiagnosis, dispensing the wrong drugs, injuring the patient during surgery, operating on the wrong part of the body, improper transfusions, falls, burns, pressure ulcers, and postoperative complications.) Testifying to a Senate hearing in the summer of 2014, Peter J. Pronovost, MD, professor at the Johns Hopkins University School of Medicine and one of the most respected clinicians in the world, pointed out that this is the equivalent of two jumbo jets falling out of the sky every twenty-four hours.
“What these numbers say is that every day, a 747, two of them are crashing. Every two months, 9/11 is occurring,” he said. “We would not tolerate that degree of preventable harm in any other forum.”9 These figures place preventable medical error in hospitals as the third biggest killer in the United States—behind only heart disease and cancer.
And yet even these numbers are incomplete. They do not include fatalities caused in nursing homes or in outpatient settings, such as pharmacies, care centers, and private offices, where oversight is less rigorous. According to Joe Graedon, adjunct assistant professor in the Division of Pharmacy Practice and Experiential Education at the University of North Carolina, the full death toll due to avoidable error in American health care is more than half a million people per year.10
However, it is not just the number of deaths that should worry us; it is also the nonlethal harm caused by preventable error. In her testimony to the same Senate hearing, Joanne Disch, clinical professor at the University of Minnesota School of Nursing, referred to a woman from her neighborhood who “underwent a bilateral mastectomy for cancer only to find out shortly after surgery that there had been a mix-up in the biopsy reports and that she didn’t have cancer.”11
These kinds of errors are not fatal, but they can be devastating to victims and their families. The number of patients who endure serious complications is estimated to be ten times higher than the number of patients killed by medical error. As Disch put it: “We are not only dealing with 1,000 preventable deaths per day, but 1,000 preventable deaths and 10,000 preventable serious complications per day . . . It affects all of us.”12
In the UK the numbers are also alarming. A report by the National Audit Office in 2005 estimated that up to 34,000 people are killed per year due to human error.13 It put the overall number of patient incidents (fatal and nonfatal) at 974,000. A study into acute care in hospitals found that one in every ten patients is killed or injured as a consequence of medical error or institutional shortcomings. A similar study of French health care put the number even higher, at 14 percent.
The problem is not a small group of crazy, homicidal, incompetent doctors going around causing havoc. Medical errors follow a normal bell-shaped distribution.14 They occur most often not when clinicians are bored, lazy, or malicious, but when they are going about their business with the diligence and concern you would expect from the medical profession.
Why, then, do so many mistakes happen? One of the problems is complexity. The World Health Organization lists 12,420 diseases and disorders, each of which requires different protocols.15 This complexity provides ample scope for mistakes in everything from diagnosis to treatment. Another problem is scarce resources. Doctors are often overworked and hospitals stretched; they frequently need more money. A third issue is that doctors may have to make quick decisions. With serious cases there is rarely sufficient time to consider all the alternative treatments. Sometimes procrastination is the biggest mistake of all, even if you end up with the “right” judgment at the end of it.
But there is also something deeper and more subtle at work, something that has little to do with resources, and everything to do with culture. It turns out that many of the errors committed in hospitals (and in other areas of life) have particular trajectories, subtle but predictable patterns: what accident investigators call “signatures.” With open reporting and honest evaluation, these errors could be spotted and reforms put in place to stop them from happening again, as happens in aviation. But, all too often, they aren’t.
It sounds simple, doesn’t it? Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Health care is just one strand in a long, rich story of evasion. Confronting this could not only transform health care, but business, politics, and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.
In this book we will examine how we respond to failure, as individuals, as businesses, as societies. How do we deal with it, and learn from it? How do we react when something has gone wrong, whether because of a slip, a lapse, an error of commission or omission, or a collective failure of the kind that caused the death of a healthy thirty-seven-year-old mother of two on a spring day in 2005?
All of us are aware, in our different ways, that we find it difficult to accept our own failures. Even in trivial things, like a friendly game of golf, we can become prickly when we have underperformed, and we are asked about it in the clubhouse afterward. When failure is related to something important in our lives—our job, our role as a parent, our wider status—it is taken to a different level altogether.
When our professionalism is threatened, we are liable to put up defenses. We don’t want to think of ourselves as incompetent or inept. We don’t want our credibility to be undermined in the eyes of our colleagues. For senior doctors, who have spent years in training and have reached the top of their profession, being open about mistakes can be almost traumatic.
Society, as a whole, has a deeply contradictory attitude to failure. Even as we find excuses for our own failings, we are quick to blame others who mess up. In the aftermath of the South Korean ferry disaster of 2014, the country’s president accused the captain of “unforgivable, murderous acts” before any investigation had even taken place.16 She was responding to an almost frantic public demand for a culprit.
We have a deep instinct to find scapegoats. When one reads about the moments leading up to the death of Elaine Bromiley, it is easy to feel a spike of indignation. Perhaps even anger. Why didn’t they attempt a tracheotomy sooner? Why didn’t the nurse speak up? What were they thinking? Our empathy for the victim is, emotionally speaking, almost synonymous with our fury at those who caused her death.
But this has recursive effects, as we shall see. It is partly because we are so willing to blame others for their mistakes that we are so keen to conceal our own. We anticipate, with remarkable clarity, how people will react, how they will point the finger, how little time they will take to put themselves in the tough, high-pressure situation in which the error occurred. The net effect is simple: it obliterates openness and spawns cover-ups. It destroys the vital information we need in order to learn.
When we take a step back and think about failure more generally, the ironies escalate. Studies have shown that we are often so worried about failure that we create vague goals, so that nobody can point the finger when we don’t achieve them. We come up with face-saving excuses, even before we have attempted anything.
We cover up mistakes, not only to protect ourselves from others, but to protect us from ourselves. Experiments have demonstrated that we all have a sophisticated ability to delete failures from memory, like editors cutting gaffes from a film reel—as we’ll see. Far from learning from mistakes, we edit them out of the official autobiographies we all keep in our own heads.
This basic perspective—that failure is profoundly negative, something to be ashamed of in ourselves and judgmental about in others—has deep cultural and psychological roots. According to Sidney Dekker, a psychologist and systems expert at Griffith University, Australia, the tendency to stigmatize errors is at least two and a half thousand years old.17
The purpose of this book is to offer a radically different perspective. It will argue that we need to redefine our relationship with failure, as individuals, as organizations, and as societies. This is the most important step on the road to a high-performance revolution: increasing the speed of development in human activity and transforming those areas that have been left behind. Only by redefining failure will we unleash progress, creativity, and resilience.
Before moving on, it is worth examining the idea of a “closed loop,” something that will recur often in the coming pages. We can get a handle on this idea by looking at the early history of medicine, during which pioneers such as Galen of Pergamon (second century AD) propagated treatments like bloodletting and the use of mercury as an elixir. These treatments were devised with the best of intentions, and in line with the best knowledge available at the time.18
But many were ineffective, and some highly damaging. Bloodletting, in particular, weakened patients when they were at their most vulnerable. The doctors didn’t know this for a simple but profound reason: they never subjected the treatment to a proper test—and so they never detected failure. If a patient recovered, the doctor would say: “Bloodletting cured him!” And if a patient died, the doctor would say: “He must have been very ill indeed because not even the wonder cure of bloodletting was able to save him!”
This is an archetypal closed loop. Bloodletting survived as a recognized treatment until the nineteenth century. According to Gerry Greenstone, who wrote a history of bloodletting, Dr. Benjamin Rush, who was working as late as 1810, was known to “remove extraordinary amounts of blood and often bled patients several times.” Doctors were effectively killing patients for the better part of 1,700 years not because they lacked intelligence or compassion, but because they did not recognize the flaws in their own procedures. If they had conducted a clinical trial (an idea we will return to),* they would have spotted the defects in bloodletting, and this would have set the stage for progress.
In the two hundred years since the first use of clinical trials, medicine has progressed from the ideas of Galen to the wonders of gene therapy. Medicine has a long way to go, and suffers from many defects, as we shall see, but a willingness to test ideas and to learn from mistakes has transformed its performance. The irony is that while medicine has evolved rapidly, via an “open loop,” health care (i.e., the institutional question of how treatments are delivered by real people working in complex systems) has not. (The terms “closed loop” and “open loop” have particular meanings in engineering and formal systems theory, which are different from the way in which they are used in this book. So, just to reemphasize, for our purposes a closed loop is where failure doesn’t lead to progress because information on errors and weaknesses is misinterpreted or ignored; an open loop does lead to progress because the feedback is rationally acted upon.)
Over the course of this book, we will discover closed loops throughout the modern world: in government departments, in businesses, in hospitals, and in our own lives. We will explore where they come from, the subtle ways they develop, and how otherwise smart people hold them tightly in place, going round and round in circles. We will also discover the techniques to identify them and break them down, freeing us from their grip and fostering knowledge.
Many textbooks offer subtle distinctions between different types of failure. They talk about mistakes, slips, iterations, suboptimal outcomes, errors of omission and commission, errors of procedure, statistical errors, failures of experimentation, serendipitous failures, and so on. A detailed taxonomy would take up a book on its own, so we will try to allow the nuances to emerge naturally as the book progresses.
It is probably worth stating here that nobody wants to fail. We all want to succeed, whether we are entrepreneurs, sportsmen, politicians, scientists, or parents. But at a collective level, at the level of systemic complexity, success can only happen when we admit our mistakes, learn from them, and create a climate where it is, in a certain sense, “safe” to fail.
And if the failure is a tragedy, such as the death of Elaine Bromiley, learning from failure takes on a moral urgency.
III
Martin Bromiley has short brown hair and a medium build. He speaks in clear matter-of-fact tones, although his voice breaks when he talks about the day he switched off Elaine’s life support machine.
“I asked the children if they wanted to say good-bye to Mummy,” he says when we meet on a clear spring morning in London. “They both said yes, so I drove them to the hospital and we stroked her hand, and said good-bye.”
He pauses to compose himself. “They were so small back then, so innocent, and I knew how much the loss was going to affect the rest of their lives. But most of all I felt for Elaine. She was such a wonderful mother. I grieved that she wouldn’t have the joy of seeing our two children growing up.”
As the days passed, Martin found himself wondering what had gone wrong. His wife had been a healthy, vital thirty-seven-year-old. She had her life in front of her. The doctors had told them it was a routine operation. How had she died?
Martin felt no anger. He knew that the doctors were experienced and had done their best. But he couldn’t stop wondering whether lessons might be learned.
When he approached the head of the Intensive Care Unit with a request for an investigation into Elaine’s death, however, he was instantly rebuffed. “That is not how things work in health care,” he was told. “We don’t do investigations. The only time we are obliged to do so is if someone sues.”
“He didn’t say it in an uncaring way, he was just being factual,” Martin tells me. “It is not something they have historically done in health care. I don’t think it was that they were worried about what the investigation might find. I think they just felt that Elaine’s death was one of those things. A one-off. They felt it was pointless to linger over it.”
In her seminal book After Harm, Nancy Berlinger, a health research scholar, reports an investigation into the way doctors talk about errors. It proved to be eye-opening. “Observing more senior physicians, students learn that their mentors and supervisors believe in, practice and reward the concealment of errors,” Berlinger writes. “They learn how to talk about unanticipated outcomes until a ‘mistake’ morphs into a ‘complication.’ Above all, they learn not to tell the patient anything.”
She also writes of “the depths of physicians’ resistance to disclosure and the lengths to which some will go to justify the habit of nondisclosure—it was only a technical error, things just happen, the patient won’t understand, the patient doesn’t need to know.”19
Just let that sink in for a moment. Doctors and nurses are not, in general, dishonest people. They do not go into health care to deceive people, or to mislead them; they go into the profession to heal people. Informal studies have shown that many clinicians would willingly accept a loss of income in order to improve outcomes for patients.
And yet, deep in the culture, there is a profound tendency for evasion. This is not the kind of all-out deceit practiced by con men. Doctors do not invent reasons for an accident to pull the wool over the eyes of their patients. Rather, they deploy a series of euphemisms—“technical error,” “complication,” “unanticipated outcome”—each of which contains an element of truth, but none of which provides the whole truth.
This is not just about avoiding litigation. Evidence suggests that medical negligence claims actually go down when doctors are open and honest with their patients. When the Veterans Affairs Medical Center in Lexington, Kentucky, introduced a “disclose and compensate” policy, its legal fees fell sharply.20 Around 40 percent of victims say that a full explanation and apology would have persuaded them not to take legal action.21 Other studies have revealed similar results.22
No, the problem is not just about the consequences of failure; it is also about the attitude toward failure. In health care, competence is often equated with clinical perfection. Making mistakes is considered to demonstrate ineptness. The very idea of failing is threatening.
As the physician David Hilfiker put it in a seminal article in the New England Journal of Medicine: “The degree of perfection expected by patients is no doubt also a result of what we doctors have come to believe about ourselves, or better, have tried to convince ourselves about ourselves. This perfection is a grand illusion, of course, a game of mirrors that everyone plays.”23
Think of the language: surgeons work in a “theater.” This is the “stage” where they “perform.” How dare they fluff their lines? As James Reason, one of the world’s leading thinkers on system safety, put it: “After a very long, arduous and expensive education, you are expected to get it right. The consequence is that medical errors are marginalized and stigmatized. They are, by and large, equated to incompetence.”24
In these circumstances the euphemisms used by doctors to distract attention from mistakes (“technical error,” “complication,” “unanticipated outcome”) begin to make sense. For the individual doctor, the threat to ego, let alone reputation, is considerable. Think how often you have heard these euphemisms used outside health care: by politicians when a policy has gone wrong; by business leaders when a strategy has failed; by friends and colleagues at work, for all sorts of reasons. You may have heard them coming from your own lips from time to time. I know I have heard them coming from mine.
The scale of evasion in health care is most fully revealed not just in the words used by clinicians, but in hard data. Epidemiological estimates of national rates of iatrogenic injury (injuries induced inadvertently by doctors, treatments, or diagnostic procedures) in the United States suggest that 44 to 66 serious injuries occur per 10,000 hospital visits. But in a study involving more than 200 American hospitals, only 1 percent reported their rates of iatrogenic injury as within that range. Half of the hospitals were reporting fewer than 5 cases of injury per 10,000 hospital visits. If the epidemiological estimates were even close to accurate, the majority of hospitals were involved in industrial levels of evasion.25