Are we river pilots or rent-seekers?

The bar at the mouth of the Columbia River creates a uniquely dangerous entrance to a major shipping route.  Rapidly changing conditions there have sunk over 2000 large ships in the last 200 years.  Local knowledge is essential to crossing the bar safely.  Turn-by-turn directions from your phone are just no help there, especially when the seas are rough.  As a freighter approaches from the Pacific, a Columbia River Bar Pilot comes on board to navigate through the ten-mile danger zone.  The picture above shows a Columbia Bar pilot arriving by boat to take over the helm. 

As researchers embedded in health systems, we should aspire to operate like river bar pilots.  We have valuable local knowledge gained through years spent navigating our local waters.  Researchers without that knowledge can be easily misled.  Data systems, policies, and practice patterns all have many local variations.  Embedded researchers know about the clinical, technical, and cultural micro-climates that influence the collection and recording of health system data.  We know what research designs and intervention strategies are likely to stay afloat and which ones are certain to sink.  Like the Columbia Bar pilots, we know where the hazards can hide and how to read the changing conditions.  We should aim to share all of that local knowledge to facilitate public-domain research.  As facilitators of a learning healthcare system, we help researchers from elsewhere navigate our local waters safely and efficiently.

There is an alternative scenario we should aspire to avoid.  In that other scenario, embedded researchers would attempt to monetize or profit from health system data.  The economists’ term for that practice is “rent-seeking”.  It is not a term of endearment.  It refers to the practice of charging as much as the market will bear for access to a resource the rent-seeker did not create and does not really own.  Embedded researchers don’t own the data in health system records.  Instead, researchers own their vital local knowledge and experience.

Columbia Bar pilots are paid reasonably well for their knowledge and experience.  Coincidentally, the salary for an experienced Columbia Bar pilot is close to the NIH salary cap for investigators.  Bar pilots do not charge rent to use the river.  Instead, they charge for time spent helping others to navigate it.  As you’d expect in Oregon, the fee that any ship pays to cross the Columbia Bar is set by the Public Utilities Commission.  Large ships do have to pay that fee; they are not permitted to cross without a Bar Pilot on board.  That’s not rent-seeking; it’s a safety precaution.  Sinking your ship on the Columbia Bar creates a hazard for everyone.

Ships crossing the Columbia Bar also depend on significant infrastructure at the mouth of the river.  Local people maintain the jetties, dredge the channel, and operate the lighthouses and radar stations.  Some of that work belongs to the US Army Corps of Engineers, and some belongs to the US Coast Guard.  All of those public services are paid for by our tax dollars.  The public servants who maintain the infrastructure don’t blockade the river or charge whatever the traffic will bear. 

When the salmon are running, the mouth of the Columbia is also a very popular fishing ground.  Bar pilots’ deep knowledge of local conditions is useful for fishing as well as navigating.  But anyone can fish, and the same catch limits apply to all.   Bar pilots may know best where the salmon are running.  But they can’t close the river or restrict access to their favorite fishing spots.  That would make them river pirates rather than river pilots.

Greg Simon

That’s Like Getting Struck by Lightning!

I am a converted skeptic regarding population-based suicide prevention.  Until about 5 years ago, I would have argued that we lacked the two essential ingredients for effective prevention:  accurate tools to identify people at risk and practical interventions to reduce that risk.  I might have even said, “That’s like getting struck by lightning!  How can you predict that?  Even if you could predict it, what could you do to change the weather?”  It turns out that suicide prevention may actually be similar to preventing death by lightning strike – just not in the way I was expecting. 

Over the last 75 years, the number of people killed by lightning strikes in the US has actually fallen by almost 90%.  If we account for growth in the US population over that time, the decline is even more dramatic.  How can we explain that change?  It’s unlikely that lightning now strikes the surface of the United States 90% less often than it did in the 1940s and 1950s.  If the weather is changing, we are seeing more thunderstorms rather than fewer.  It is true that, on average, people now spend less time outdoors than people did 75 years ago.  That’s not necessarily a healthy thing, but it probably accounts for some of the decrease in deaths due to lightning strikes. 
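The per-capita point is worth making concrete. A back-of-the-envelope calculation, using round illustrative figures rather than official statistics (roughly 400 lightning deaths per year in the mid-1940s versus about 30 per year recently, with the US population growing from about 140 million to about 330 million), shows why adjusting for population makes the decline even more dramatic:

```python
# Round, illustrative figures -- not official statistics.
deaths_then, deaths_now = 400, 30        # annual US lightning deaths
pop_then, pop_now = 140e6, 330e6         # approximate US population

# Decline in raw counts vs. decline in the per-capita death rate.
decline_in_counts = 1 - deaths_now / deaths_then
rate_then = deaths_then / pop_then
rate_now = deaths_now / pop_now
decline_per_capita = 1 - rate_now / rate_then

print(f"Decline in raw counts:      {decline_in_counts:.0%}")   # ~92%
print(f"Decline in per-capita rate: {decline_per_capita:.0%}")  # ~97%
```

With these assumed inputs, a roughly 90% drop in raw counts becomes a drop of nearly 97% in the per-capita rate, because the population more than doubled while deaths fell.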

We can, however, point to two deliberate and positive actions contributing to the dramatic reduction in lightning fatalities.  First, weather prediction has significantly improved since the 1940s.  We may not be able to predict the time or location of individual lightning strikes, but we have certainly made progress in predicting the location and time of thunderstorms.  Weather radar really is more accurate than just looking up at the sky.  Second, we can give useful advice about how to stay safe in a thunderstorm.  The Red Cross message is simple:  If thunder roars, go indoors!  If you can’t get indoors, then get inside a car or a truck.  And don’t try to stay dry under tall trees or metal picnic shelters; those things are dangerous in a thunderstorm. 

The parallels to suicide prevention seem clear to me now.  Individual suicide attempts, like lightning strikes, will never be completely predictable.  But we have learned a good bit about identifying people who are at higher risk and identifying periods of high risk.  Both brief self-report questionnaires and information readily available in electronic health records predict risk of subsequent suicide attempt and suicide death.  Those predictions are now accurate enough to inform prevention programs.  Those risk prediction tools are our mental health equivalent of predicting thunderstorms.  And we have learned about some practical steps to recommend at high-risk times.  Promising population-level interventions include systematic outreach (such as caring messages to people with a history of self-harm) and standard care processes (such as creating safety plans with people who report frequent thoughts of self-harm).  Safety planning to address suicide risk, like that Red Cross advice about thunderstorms, emphasizes both finding safe shelter and avoiding dangerous things during risky times.

So I still think that preventing suicide is similar to preventing death by lightning strike.  But now that thought encourages me.

Greg Simon

Can Health Services Defeat Epidemiology?

Can health services defeat epidemiology? This question is not inspired by the School of Public Health summer softball league. Instead, it’s inspired by a conversation with my colleague Ed Boudreaux about screening for suicidal ideation as a tool for preventing suicide attempts and suicide deaths.

Ed wondered whether providers’ or health systems’ responses to screening questionnaires could make those questionnaires appear less accurate. Clinicians are expected to respond to suicidal ideation with more detailed risk assessment, safety planning, and appropriate follow-up care. If those interventions are effective, then risk of subsequent suicide attempt or suicide death will be reduced. And the expected association between suicidal ideation and subsequent suicidal behavior will be weakened. And we might falsely conclude that screening tools are inaccurate.


My first response to Ed was, “We should be so lucky!” The relationship between suicidal ideation and subsequent suicidal behavior is very strong. Unfortunately, our interventions to reduce risk of suicidal behavior are not that strong. We certainly hope to make a dent in the strong relationship between suicidal ideation and subsequent suicide attempt or suicide death. But that would be just a dent.

But Ed’s point is an important one. In fact, it’s central to our recently funded MHRN project to evaluate implementation of Zero Suicide across health systems – led by Brian Ahmedani at Henry Ford Health System. Following the Zero Suicide model, we expect our health systems to implement reliable programs to identify suicide risk, engage people at risk, deliver effective interventions, and assure appropriate care transitions. Our hypothesis is that effective implementation will weaken the association between suicidal ideation (or some other indicator of risk) and subsequent suicidal behavior. Our metrics for evaluating the impact of Zero Suicide programs will look for changes in those relationships over time. For example:  Is the relationship between response to PHQ9 item 9 and risk of subsequent suicide attempt weaker after implementation of systematic risk assessment and safety planning?
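A toy simulation makes the attenuation mechanism concrete. All of the numbers here are illustrative assumptions, not MHRN data: screen-positive patients are assigned a higher underlying risk, and an effective post-implementation response is assumed to halve that risk, which shrinks the observed relative risk even though the screening question itself is unchanged:

```python
import random

random.seed(0)

def observed_relative_risk(n, intervention_effect):
    """Simulate n patients and return the observed relative risk of a
    subsequent attempt, comparing screen-positive to screen-negative.
    intervention_effect multiplies risk among screen-positives
    (1.0 = no intervention; 0.5 = risk halved)."""
    pos_n = neg_n = pos_events = neg_events = 0
    for _ in range(n):
        ideation = random.random() < 0.10        # 10% screen positive
        base_risk = 0.08 if ideation else 0.01   # assumed underlying risks
        risk = base_risk * intervention_effect if ideation else base_risk
        attempt = random.random() < risk
        if ideation:
            pos_n += 1
            pos_events += attempt
        else:
            neg_n += 1
            neg_events += attempt
    return (pos_events / pos_n) / (neg_events / neg_n)

rr_before = observed_relative_risk(200_000, intervention_effect=1.0)
rr_after = observed_relative_risk(200_000, intervention_effect=0.5)
print(f"Relative risk before implementation: {rr_before:.1f}")
print(f"Relative risk after implementation:  {rr_after:.1f}")
```

Under these assumptions the relative risk falls from roughly 8 to roughly 4 after implementation – exactly the kind of weakened association the evaluation metrics would look for, and exactly the pattern that could be misread as a less accurate screening tool.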

Ultimately, this is a “problem” we should embrace. Our epidemiologic research identifies priority areas for improving health services.  In response, our health systems implement new care processes. And those improvements undermine the findings of our epidemiologic research. 

If our previous findings cannot be replicated, that may be good news. Several of us have been involved in over twenty years of research to improve management of depression in primary care. For much of that time, we had no difficulty replicating our findings regarding high rates of treatment discontinuation and low rates of treatment success.  That’s not a history to be proud of.

Greg Simon

Social Determinants of Health: What’s in a Name?

I have a beef with the name “Social Determinants of Health”.

I absolutely agree with putting the word “social” right up front.  It’s a fact that zip code often has a greater effect on health than genetic code.  And the effects of social and environmental factors – such as trauma, loss, and deprivation – are especially relevant to mental health. 

It’s the specific word “determinants” that I’d want to change.  Social factors often have powerful negative or positive influences on physical and mental health.  And the effects of social and environmental factors can certainly overwhelm the treatments we provide.  But the impacts of social and environmental factors are almost always probabilistic rather than deterministic.  The term “social determinants” may sound more powerful than “social influences”, but it is also less accurate.  And the difference between “determinants” and “influences” is not just semantic hair-splitting. 

Adopting a deterministic – rather than probabilistic – view of social influences on health can distract us from places where our research can actually make a contribution.  When we study risk or causation, only a probabilistic view will allow us to understand individual variation in vulnerability and resilience.  In statistical terms, true understanding requires us to move beyond simple questions regarding main effects (Does a specific environmental insult matter on average?) to questions about interactions (For whom does that insult matter more or less?  Under what conditions is the health impact larger or smaller?).  When we examine those interactions, we will likely find that the impact of social and environmental insults is even greater among the vulnerable or disadvantaged – those who have already experienced trauma, loss, and deprivation.
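The main-effect-versus-interaction distinction can be sketched in code. This is a minimal simulation with made-up parameters and hypothetical variable names (`insult`, `vulnerable`), in which the same environmental insult raises risk by 5 points on average but by 15 points among the already-vulnerable – an effect that only appears when risk is examined within strata:

```python
import random

random.seed(1)

def true_risk(insult, vulnerable):
    """Assumed outcome probabilities: the insult has a main effect,
    plus an extra interaction effect among the vulnerable."""
    base = 0.05
    main_effect = 0.05 * insult               # effect of insult on average
    interaction = 0.10 * insult * vulnerable  # extra effect if vulnerable
    return base + main_effect + interaction

# Tally outcomes by (insult, vulnerable) stratum.
counts = {}
for _ in range(100_000):
    insult = random.random() < 0.5
    vulnerable = random.random() < 0.3
    outcome = random.random() < true_risk(insult, vulnerable)
    n, events = counts.get((insult, vulnerable), (0, 0))
    counts[(insult, vulnerable)] = (n + 1, events + outcome)

def rate(insult, vulnerable):
    n, events = counts[(insult, vulnerable)]
    return events / n

# Effect of the insult estimated within each stratum:
effect_resilient = rate(True, False) - rate(False, False)   # ~0.05
effect_vulnerable = rate(True, True) - rate(False, True)    # ~0.15
print(f"Effect among less vulnerable: {effect_resilient:.3f}")
print(f"Effect among more vulnerable: {effect_vulnerable:.3f}")
```

An analysis that reported only the average (main) effect would miss the key finding: the insult matters three times as much for those already disadvantaged.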

When we develop or test interventions, a probabilistic view focuses us on disrupting specific linkages between social or environmental insults and subsequent mental health problems.  Here again, we can think of interventions as effect modifiers or interactions rather than simply main effects.  For example, we certainly hope that the association between childhood victimization and adult PTSD is not deterministic.  Instead, we hope it can be modified by timely and specific intervention.  Embracing probabilistic complexity should help us to identify interventions to support the most vulnerable by disrupting causal pathways with the biggest public health impact.  That probabilistic – rather than deterministic – view will usually direct resources to those with greatest need.  For example, interventions to address the long-term effects of early childhood trauma will likely have greatest benefit in those who were already disadvantaged.

A deterministic view can easily lead to what my colleague Evette Ludman and I have called sympathetic nihilism.  By that, we mean a well-intentioned but ultimately dispiriting focus on all the reasons for illness and disability – rather than a search for paths to recovery.  Mental health care too often falls into that trap of sympathetic nihilism.

Nevertheless, deterministic thinking can be appealing.  We would all hope to emulate John Snow, the London physician who interrupted an 1854 cholera epidemic with a single dramatic act, removing the handle from the contaminated Broad Street water pump.  In our modern times, social and environmental insults rarely have a single point source.  Instead, the sources of harm are more systemic.

Our closest modern analogue of John Snow is probably Mona Hanna-Attisha, the pediatrician who revealed the devastating effects of lead-contaminated water in Flint, Michigan.  She certainly did advocate for immediate action to interrupt ongoing lead exposure, but there was no single pump handle to remove.  She understood that toxic lead levels in children’s drinking water reflected a complex interaction of governmental decisions about water sources, decaying public infrastructure, and outdated plumbing in individual homes and schools.  The lead poisoning epidemic had no single point source.  So she advocated for governmental action to address systemic problems and educated individual families about reducing exposure.  She also realized that controlling every source of contamination would not reverse the chronic effects of childhood lead exposure.  Repairing those adverse developmental and mental health effects will require long-term therapeutic and rehabilitative interventions. Even if we could find and remove that magic pump handle, many children will be affected for decades to come.

None of us working in mental health will likely face that dramatic and deterministic John Snow scenario.  Instead, like Mona Hanna-Attisha, we regularly face complicated probabilistic scenarios.  To address that complexity, we are called to a range of responses.  Appropriate responses will often include both advocacy to address systemic social causes of poor mental health and a search for effective therapeutic and rehabilitative interventions.  While we will rarely discover a single point source of cholera to eradicate, we can aspire to discover and deliver the mental health equivalent of oral rehydration for cholera – an intervention that’s surprisingly effective, rapidly scalable, and easily affordable.  Developing an effective and scalable intervention certainly does not negate or undermine every person’s right to safe drinking water.  But it does help those who are already sick.

Greg Simon

Who decides what a word means?

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.” 

Our collaboration with MHRN health systems to improve depression care has emphasized the systematic use of standard outcome measures –  like the PHQ9 depression scale.  More recently, we have encouraged use of the 9th item of the PHQ9 (regarding thoughts of death or self-harm) as a tool for identifying people at risk for suicidal behavior.  Front-line clinicians and health system managers often ask whether those standard questionnaires can accurately measure depression or predict suicidal behavior across diverse patient populations.

Those questions about questionnaires are typically prompted by concern about wording of specific questionnaire items.  For example:  Does a question about “feeling tired or having little energy” really assess depression in people with diabetes or heart disease?  Does a question about “thoughts you would be better off dead” really assess risk of suicidal behavior in older adults with chronic medical illness? 

A recent MHRN paper led by Rebecca Rossom directly addresses that second question – using a sample of almost 1 million PHQ9 questionnaires completed by almost 300,000 patients in four health systems.  Her team found that response to item 9 of the PHQ9 was a strong predictor of subsequent suicide attempt and suicide death across all age groups, including those aged 65 or older.  Among those reporting frequent “thoughts you would be better off dead or thoughts of hurting yourself in some way”, risk of suicide death over the following two years was actually highest in those aged 65 or older. 

Those data would seem to settle the question.  Reporting “thoughts you would be better off dead” should not be dismissed as a normal part of aging or a normal reaction to chronic illness.  The burdens of chronic illness might certainly contribute to depression and suicidal ideation.  Empathy regarding those burdens is certainly an appropriate response, but a false sense of security is not.

This analysis is also a nice example of using data to escape from semantic arguments that abound in Alice’s Wonderland.  As we move forward with systematic assessment of outcomes in mental health care, we will likely encounter more questions about what a particular word or questionnaire item means.  Rather than responding like Humpty Dumpty, we can ask in return “What data would we need to figure that out?”

Greg Simon

Coordinated Care for First-Episode Psychosis and the End of Gadgets

As MHRN investigators, we often hear from academic researchers hoping to study new psychotherapies or eHealth interventions in our healthcare systems.  Those new interventions typically focus on a specific diagnosis (like obsessive-compulsive disorder) or patient subgroup (like depression in people with arthritis).  But when we bring these ideas to leaders in our care delivery systems, their interest in these specific interventions is often low.  Our health system partners are typically more interested in broader care improvements – like measurement-based care for depression or addressing suicide risk across all diagnoses.

It’s not surprising that researchers based in academic health centers focus on more specific interventions.  Care in academic centers is more often organized around subspecialty areas like obsessive-compulsive disorder or depression in rheumatoid arthritis.  And the typical path of an intervention researcher (from dissertation topic to post-doctoral fellowship to career development award) emphasizes finding a specific clinical niche.  The research grant review process also demands specificity – even if our diagnostic categories are much fuzzier and overlapping than we like to admit.

Thinking about the value of more specific interventions reminded me of a recent New York Times column proclaiming that “The Gadget Apocalypse Is Upon Us”. The premise of the column was that previously separate electronic gadgets have been swallowed by mobile phones.  That premise does seem true for MP3 players, GPS devices, and point-and-shoot cameras.  iPods have certainly faded, and only the most serious photographers now carry around separate cameras.  The music-playing and direction-giving and picture-taking features built into our mobile phones are good enough for most of us.  But not all separate gadgets have disappeared.  Wrist activity monitors seem to be doing just fine.  And new families of gadgets (like the Google and Amazon voice-controlled speakers) are still emerging.  A separate gadget that meets the right need at the right time can still succeed.  Those successful gadgets raise the question:  When is a specialized or dedicated tool (like a narrowly targeted mental health intervention) worth the extra effort or expense?

Asking that question about a new electronic gadget begins with the assumption that nearly everyone is already carrying a mobile phone.  So we’d only carry around a separate gadget if it really improved quality or efficiency.  A dedicated camera can take nicer pictures than my phone.  And a wrist activity monitor might save me from carrying my phone on a run. 

In the same way, asking our MHRN care systems about testing or implementing a specific mental health intervention begins with the assumption that a “generic” mental health infrastructure is already in place.  Geographically organized clinics are staffed with psychiatrists, nurses, psychologists, and other psychotherapists.  Those clinicians serve people seeking care for the full range of problems or diagnoses.  We would want to implement more specific treatments or programs if those specific treatments had real advantages – in either quality or efficiency – over the general-purpose treatments we are already providing.  It would not be convincing to show that a specific treatment or program is superior to no treatment or some “placebo” condition.  Instead, it would be necessary to show advantages over existing general-purpose treatment.  And the benefit of that new clinical “gadget” would have to be large enough to justify the extra expense or effort.

I’m finally getting to the topic of coordinated specialty care for first-episode psychosis.  We have clear evidence that coordinated programs improve outcomes compared to general-purpose care in community mental health centers.  In this case, the coordinated specialty care “gadget” probably does have real added value.  But our behavioral health leaders have not shown much interest in implementing specialized programs.  Our MHRN research shows that first presentations with psychotic symptoms are not rare in our health systems.  While initial engagement in care is high, over half of young people with new-onset psychotic symptoms have dropped out of mental health care within a few months.  From where we (MHRN researchers) sit, it looks like our health systems need a dedicated gadget to improve care for first-episode psychosis.  Now we have some marketing to do – convincing our health system partners that a specialized program would be useful enough often enough to justify the extra effort.

Greg Simon