The purpose of my last post was to present the argument that the mental health findings in Oregon are of very real clinical consequence: empirical research suggests a causal link between depression and illness, physical limitations, and mortality. No one has disputed the evidence on that front. However, I failed to address something critical: we don’t have a clear picture of the underlying mechanism by which Medicaid improved depression rates in the Oregon population. That’s a very fair—and nuanced—point. It deserves its own post. Say, this one.
Obligatory recap for newbies; everyone else skip ahead. Two weeks ago, leading economists released the results of a two-year study in which individuals were randomized onto Medicaid in Oregon through a lottery, marking the first time we’ve been able to compare Medicaid and uninsurance in the context of a controlled trial. Due to limitations (time, sample size, magnitude of effects) the study lacked the statistical power necessary to draw conclusions on physical health outcomes. However, depression rates markedly declined among those who received Medicaid coverage through the lottery. The change—a relative decrease of 30% compared to the uninsured group—was “significant” in both technical and colloquial terms. The study suggests that Medicaid enrollment plays a causal role, but the available evidence does a poor job of addressing why.
Maybe it’s “all in their heads.” That’s still a real effect. It’s possible that knowing one has insurance does a lot of heavy lifting in the mental health department. Such a phenomenon has been described by some as a “placebo effect” (I disagree with this characterization on purely semantic grounds, but that’s unimportant). The argument makes sense when it’s tied into the study’s findings that Medicaid substantially reduces catastrophic health expenditures, which is what happens when insurance functions as insurance. Medicaid reduces financial strain and uncertainty, which in turn likely reduces stress—cortisol is a nasty bugger—leading to an improved sense of well-being. That it’s psychological doesn’t make the effect any less “real.” But it creates space for a reasonable debate: is there a more cost-effective means to the same end? Catastrophic coverage, perhaps? More on that toward the end of the post.
Related criticisms point to the authors’ report that a month after enrollment (before substantive care utilization), patient surveys found “evidence of an improvement in self-reported health of about two-thirds the magnitude of our main survey estimates from more than a year later” (p. 1061, gated). A caveat: these findings didn’t include the depression screen. I don’t mean that changes in the depression screen weren’t statistically significant; the screen actually wasn’t part of the one-month survey instrument.
Still, suppose the screen had been included and we had seen a similar pattern. Should we then dismiss the depression findings as some artifact of a “winner’s high”? The authors—as is mentioned in almost every commentary on this study—are some of the best in the field. This quirk did not escape their notice; here’s what they had to say on the matter (emphasis mine):
[T]he event of winning (or losing) the lottery may have direct effects on the outcomes we study, although it seems unlikely to us that any such effects both exist and persist a year after the lottery (p. 1081) … It is not clear that the immediate effects are directly comparable to those from one year later. Some of the immediate improvements may reflect “winning” effects that are less likely to be picked up in the estimates one year later. (p. 1099)
So, yes, the authors accept that “winning the lottery” might have influenced self-report measures in the month following randomization. But they’re skeptical that winners would stay that jazzed for a year—certainly not two—unless something else was at play. And yet, the results on depression and mental well-being stayed fairly consistent across the study’s duration, so something else must have contributed to the sustained improvements (the financial security discussed above being among the contenders).
Increased medication use probably played some role. Positive depression screens declined by 9.2 percentage points, and depression medication use increased by 5.5 percentage points. It’s true that this latter change was not statistically significant (though it represents a relative increase of roughly one-third), but it was actually pretty close, with a p-value of 0.07. This is much nearer the conventional significance threshold (0.05) than anything we observed among the physical health outcomes. Your views on how to interpret that will vary based on your statistical philosophy—you have one of those, right?—but I figure it’s a tidbit worth offering.
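To see how the absolute changes line up with the relative figures quoted above, here is a quick back-of-the-envelope sketch. The control-group baselines (roughly 30% screening positive for depression, roughly 17% on depression medication) are my assumptions, chosen to be consistent with the relative changes cited in the post; they are not taken directly from the paper.

```python
# Back-of-the-envelope arithmetic relating absolute (percentage-point)
# changes to relative changes. Baselines are assumed, not from the paper.

depression_baseline = 0.30   # assumed share of controls screening positive
depression_drop_pp = 0.092   # absolute decline reported (9.2 points)

med_use_baseline = 0.168     # assumed share of controls on medication
med_use_rise_pp = 0.055      # absolute increase reported (5.5 points)

relative_depression_drop = depression_drop_pp / depression_baseline
relative_med_use_rise = med_use_rise_pp / med_use_baseline

print(f"Relative decline in positive screens: {relative_depression_drop:.0%}")
print(f"Relative increase in medication use:  {relative_med_use_rise:.0%}")
```

Under those assumed baselines, the arithmetic reproduces the “roughly 30%” decline in positive screens and the “roughly one-third” rise in medication use.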
Also, the Oregon Medicaid plan has mental health coverage that extends beyond pharmaceuticals. The plan’s covered services include evaluations and consultations, therapy, case management, medication management, hospitalization, and emergency services. That said, we have no idea whether this contributed to improvement in depression screening results. The problem is that information about outpatient care was obtained through surveys, not medical records, and we don’t know how much of that, if any, was related to mental health (e.g., cognitive behavioral therapy). The survey asked subjects how many office visits they’d had over the past year (with a physician or other health care professional), but didn’t explore the nature of those visits.
There’s evidence to suggest that improvements in well-being cannot be attributed to other major social services. I alluded to this at the end of my last post: social policy is believed to have a huge impact on downstream health outcomes; for a better discussion of that than I can offer in a paragraph, check out this NYT op-ed. One concern was that subjects randomized to Medicaid might be more attuned to—and thus consume more of—other social services, which could confound results. However, the authors were able to track TANF (cash welfare) and SNAP (food stamps) benefits. The data suggest these programs did not factor into observed improvements:
[S]election by the lottery is not associated with any substantive or statistically signiﬁcant change in TANF receipt or beneﬁts. However, lottery selection is associated with a statistically signiﬁcant but substantively trivial increase in the probability of food stamp receipt (1.7 percentage points) and in total food stamp beneﬁts (about $60 over a 16-month period, or less than 0.5% of annual income). (p. 1082)
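As a sanity check on why the authors call that food stamp increase “substantively trivial,” it’s worth running the quoted numbers. The implied income figure below is my own derivation from the quote, not a number the authors report.

```python
# Sanity check on the quoted SNAP figures: $60 over 16 months, described
# as less than 0.5% of annual income. The implied income is derived here,
# not reported by the authors.

snap_increase_total = 60   # dollars, over the 16-month study window
income_share_cap = 0.005   # "less than 0.5% of annual income"

# If $60 is less than 0.5% of annual income, income exceeds this amount:
implied_min_income = snap_increase_total / income_share_cap

print(f"Implied annual income of at least ${implied_min_income:,.0f}")
```

That works out to an annual income of roughly $12,000, a plausible figure for the Medicaid-eligible population, and a $60 bump against it really is a rounding error.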
All that brings us to the trillion-dollar-over-ten-years—the cost of expansion—question: should we look at facilitating expansion of catastrophic coverage instead? It’s a defensible argument and a debate worth having: this is quintessential “insurance functioning as insurance”; it might offer the same psychological benefits from improved financial security. I wasn’t able to find any empirical evidence on this front. I see this resting on two issues: whether a high deductible would undermine feelings of financial security, and how important mental health services were to Oregon’s improvements (we don’t know the answer to either of these questions). And then, the math: would it be more cost-effective than traditional Medicaid? That depends a lot on how you intend to structure such a program.
Proponents of having an HDHP/HSA system in place of traditional Medicaid have pointed to the “Healthy Indiana Program” (HIP), which operated under a Medicaid waiver from 2008 to 2011 and is probably our most instructive example. I’ve read conflicting reports about whether the program was cost-effective: an analysis from the Kaiser Family Foundation found higher per-capita costs than traditional Medicaid when you include individual contributions. A different report, from Milliman, Inc., asserts that the program achieved lower spending.
HIP was strong in many ways: the program offered first-dollar coverage of preventive services (addressing one of the harshest criticisms of catastrophic insurance), improved access, and it enjoyed overwhelming public support. But it was also subject to conditions that aren’t permitted under the Affordable Care Act: HIP didn’t cover dental or vision, and it had annual ($300,000) and lifetime ($1 million) benefit caps. Health reform requires Medicaid to offer these supplemental benefits and eliminates such caps for all insurance, public and private. To accommodate the ACA, Milliman estimates that the program would cost 44% more than a traditional expansion. Also salient: HIP did not cover prescription drugs or other mental health services until the deductible was met, and there’s evidence that patients put off care, including drugs, when they must use HSA funds. We can’t rule these out as factors in the Oregon depression findings—to the extent that’s true (another unknown), it’s possible traditional Medicaid performs better for depression than an HDHP/HSA alternative.
Here’s how I see the evidence balance out, though I’m certain others will disagree: Something about traditional Medicaid coverage in Oregon substantially improved depression rates. We don’t know what that is, but it’s associated with the program—improved well-being due to more financial security or “knowledge” of insurance, access to mental health services (both drug and other therapies), or something else I failed to account for. We don’t have similar evidence about catastrophic care and mental health. And Indiana’s experience does not suggest that an HDHP/HSA system would be more cost-effective, so what would we have to change to make that true? Would benefits be preserved?
We don’t understand the differential effects of mental health/drug coverage, financial security, and subjective well-being on depression. Whatever the magic cocktail is, Oregon’s Medicaid has it. The causal uncertainties obviously complicate a full-throated defense of traditional Medicaid’s effect on the disease—but they don’t offer much harbor for the catastrophic insurance argument, either.

_____________________________
Adrianna works in clinical research and is a graduate student in public policy & public health at the University of Michigan. Follow her on Twitter @onceuponA.