To Err is (Still) Human
by Tom Liu
Last week, Consumer Reports released its new “surgery ratings”, encompassing 2,463 hospitals in all 50 states and the District of Columbia. The ratings measure the rate of hospital deaths and unexpected discharge delays for 27 common surgeries, including hip and knee replacements, back surgery, and angioplasty. Scrolling through the publicly available ratings list, I was struck by the number of dark-shaded circles filling my screen—each one representing a hospital that performed poorly by the magazine’s standards.
It’s been 14 years since the Institute of Medicine published its landmark “To Err Is Human” report, which estimated that a shocking 44,000-98,000 Americans die each year as a result of medical errors (0.2%-0.5% of hospitalizations). The report called for a 50% reduction in errors by 2004, asking, “Must we wait another decade to be safe in our health system?” It appears so; a 2010 NEJM study measured the rate of errors in ten North Carolina hospitals from 2002 to 2007. North Carolina was specifically chosen for its “high level of engagement in efforts to improve patient safety.” Yet out of 2,341 admission records reviewed, 588 harms were identified, a rate of 25.1 harms per 100 admissions. Fourteen of these harms caused or contributed to a patient’s death, a 0.6% rate of death due to medical error among hospitalizations.
The dial hadn’t moved.
The lack of progress is especially puzzling given that we know what needs to be done. Numerous “simple” interventions have been documented to vastly reduce the rate of medical errors, exemplified best by Dr. Atul Gawande’s surgical checklist. Between 2007 and 2008, Dr. Gawande and his team implemented their 19-step surgical checklist in eight hospitals around the world, with remarkable results: a 36% reduction in inpatient complications (from 11.0% to 7.0%) and a 47% reduction in surgical deaths (from 1.5% to 0.8%). A similar story could be told about Dr. Peter Pronovost’s 5-step checklist for reducing hospital-acquired infections, which, when implemented in 103 ICUs in Michigan (representing about two-thirds of hospitals in the state), reduced the median infection rate from 2.7 infections per 1,000 catheter-days to zero.
So why haven’t we achieved more?
The Human Complexity of a “Simple” Solution
One key detail buried in the global surgery checklist study offers some clues: in the eight hospitals that “implemented” the checklist, the checklist steps were fully followed for only 58% of patients. Similarly, in an interview, Dr. Pronovost attributed the slow uptake of his checklist in part to the fact that “some hospitals claim they use the checklist, despite having high or unknown infection rates.” Given that hospitals in both studies were fully aware they were participating in a project that would receive national attention, the Hawthorne effect suggests that subsequent expansion to new hospitals may achieve less spectacular results.
In our race to promulgate these and other seemingly simple solutions, it’s important to keep in mind that quality improvement initiatives are not like drugs, whose effects are largely independent of who administers them. Implement the same 5-step checklist at two hospitals, and they may achieve remarkably different results depending on the engagement of the leaders and personnel carrying out the intervention.
In 2009, the Joint Commission published a survey of U.S. emergency departments that revealed a marked difference between official “adoption” of National Patient Safety Goals and actual adherence to them. Two example goals are particularly illustrative:
- Goal 1b: Conducting regular time-outs prior to invasive procedures. 86% of EDs had “implemented” the policy, yet nurses in only 23% of them reported facing no barriers to implementation. The most frequently cited barriers were “immediate clinical needs of the patient, lack of buy-in by all team members, forgetting to do time-outs, time-outs not being a current expectation in the ED, and other patient priorities taking precedence.” (emphasis mine)
- Goal 2a: Read-back of verbal/telephone orders. 93% of EDs had “implemented” this policy, but nurses at only 55% of them reported facing no barriers to implementation. Frequently cited barriers included “the prescriber did not wait for a read-back of verbal/telephone orders” and “prescriber intimidation”.
It’s striking that many of the most commonly cited barriers aren’t financial or technological in nature, but distinctly human. Lack of buy-in. Weak expectations. A culture that hasn’t yet fully embraced the value of the intervention at hand. In his most recent New Yorker article, Dr. Gawande describes the process by which a 24-year-old nurse trainer, Sister Seema Yadav, gradually convinced a more senior 30-year-old nurse to embrace safe birthing practices. The older nurse was initially defensive about Sister Seema’s suggested improvements, pointing to reasons such as lack of time or lack of supplies. But by the fourth or fifth visit, Sister Seema had won her over with her ebullient personality and personal stories, convincing her to sustain the changes long enough for their effects to begin to manifest. And once they did, those tangible improvements became the most powerful motivator:
In the era of the iPhone, Facebook, and Twitter, we’ve become enamored of ideas that spread as effortlessly as ether. […] But technology and incentive programs are not enough. ‘Diffusion is essentially a social process through which people talking to people spread an innovation,’ wrote Everett Rogers, the great scholar of how new ideas are communicated and spread. Mass media can introduce a new idea to people. But, Rogers showed, people follow the lead of other people they know and trust when they decide whether to take it up. Every change requires effort, and the decision to make that effort is a social process.
Salience: The Other Human Motivator
If social interaction is the way by which proven interventions truly diffuse, then I would argue that salience—of both the urgency of the problem and the potential of the individual to fix it—is paramount for action to be initiated and sustained.
Earlier today, a colleague and mentor of mine alerted me to a terrifying statistic: 1.2 million people died prematurely in China in 2010 due to outdoor air pollution. These kinds of estimates are not new; as the article outlines, estimates of premature deaths in China have been published since at least 2007, when a World Bank study concluded that 350,000-400,000 people die prematurely in China every year due to outdoor air pollution. But it took this January’s “Airpocalypse” to galvanize China’s citizens into action: online searches for the word “mask” jumped 5,300% in a month, and sales of masks climbed to over 100,000 each day in Beijing alone. In contrast, when melamine-tainted infant formula killed six infants in China in 2008, parents’ responses were swift and widespread, leading numerous countries to impose limits on bulk formula purchases, setting off formula shortages in Hong Kong, and even giving rise to a formula smuggling syndicate.
The difference is that while individual parents can directly trace their baby’s health back to the formula they feed them, figures such as 1.2 million deaths from air pollution—or 98,000 deaths from medical errors—are much harder to grasp tangibly. They seem distant and academic, and it’s unclear how much impact any one person’s efforts have in addressing them.
Figuring out how to make medical errors viscerally apparent to every member of the health care team would no doubt accelerate and deepen the adoption of tried-and-true solutions. Equally important is documenting and celebrating the improvements brought about by each individual’s efforts, no matter how small. One handbook chapter on quality improvement reviewed 50 studies and improvement projects for evidence of what makes such initiatives succeed: findings included communicating regularly about progress, celebrating small successes, building relationships with team members offline, and ensuring that everyone felt like an integral part of the effort. This is the “messy” human component not documented in the Methods sections of randomized controlled trials, yet it is as much a determinant of an intervention’s outcomes as the intervention itself. If we can figure out how to harness this human component in promulgating and implementing proven tools and programs, perhaps the next decade of patient safety will look very different from the past one.