Read Marsha Linehan’s Book!

If the Mental Health Research Network had a book club, we’d start with Marsha Linehan’s memoir, Building a Life Worth Living.

Marsha is the creator of Dialectical Behavior Therapy or DBT, a treatment approach once seen as heretical that’s now the standard of care for people at risk of self-harm or suicide. Her memoir is much more than an academic autobiography describing her groundbreaking research at the University of Washington; it is her personal story of recovery and spiritual evolution. The history of DBT doesn’t begin with a post-doctoral fellowship or a research grant for a pilot study. Instead, it begins with Marsha’s descent into severe depression and relentless suicidal ideation, leading to a two-year inpatient psychiatric stay as “one of the most disturbed patients in the hospital.”

Marsha’s remarkable story pushed me to think about all sorts of questions regarding mental health research and mental health treatment: Why does our clinical research often focus on reconfirming the modest benefits of treatments that are so disappointing? How could the scientific peer review system welcome true innovation rather than comfortable confirmation? Do mental health clinicians’ traditional “boundaries” really serve to protect patients from harm – or more to protect clinicians from upset or inconvenience? How central are spirituality and religion to recovery from mental illness? And – where would we be today if Marsha Linehan had chosen a traditional religious order over psychology graduate school?

For me, the book’s most valuable lesson was understanding the dialectical center of DBT. Dialectical thinking – holding two seemingly opposite ideas at the same time – is central to Marsha’s treatment approach and her life story. Following the first rule of good writing (“Show, don’t tell”), Marsha generously describes her own intellectual and spiritual journey to dialectically embracing the tension between radical acceptance and hunger for change. Her message to her clients is “I accept you as you are; how you feel and what you do make perfect sense.” And her message is also “There is a more effective way to be who you are and feel what you feel. Wouldn’t you like to learn about it?” Both are completely true, at the very same time. The tension between acceptance and change is not a problem to be solved but a creative space to inhabit – or even dance inside of. Marsha also has some important things to say about dancing!

Marsha reveals her dialectical approach most clearly in describing her own mental health treatment. She endured over-medication, forcible restraint, and weeks spent in a seclusion room. Her descriptions of those traumas are vivid, but they include not the slightest tinge of blame or resentment. Instead, she gracefully expresses gratitude and compassion for her caregivers, knowing they were doing the best they could with the knowledge and skills they had at the time. That is truly radical acceptance. At the same time, Marsha was passionate about change. She vowed that “I would get myself out of hell – and that once I did, I would find a way to get others out of hell, too.” It seems that the mental health care system is Marsha’s last client. And I think she is practicing a little DBT on mental health clinicians like me – compassionately accepting all of our failings and flailings while showing us a better way.

Greg Simon

Let’s not join the Chickens**t Club!

Long before he became famous or infamous (depending on your politics) as FBI Director, James Comey served as US Attorney for the Southern District of New York.  That’s the office responsible for prosecuting most major financial crimes in the US.  Jesse Eisinger’s book, The Chickens**t Club, recounts a speech Comey made to his staff after assuming that high-profile post.  He asked which of his prosecutors had never lost a case at trial, and many proudly raised their hands.  Comey then said, “You are all members of what we like to call The Chickens**t Club.”  By that he meant:  You are too timid to take a case to trial unless you already know you will win.

I worry that our clinical trials too often follow the same pattern as those white-collar criminal trials.  When we evaluate new treatments or programs, we may only pursue the trials likely to give the answer we hope for.  That might mean testing an intervention only slightly different from one already proven effective.  Or testing a treatment in an environment where it’s almost certain to succeed.  Our wishes come through in our language.  Trials showing that a new treatment is superior are “positive”, while trials finding no advantage for a new treatment or program are “negative.”

Our preference for trials with “positive” results reflects how researchers are rewarded or reinforced.  We know that positive trials are more likely to be published than so-called negative trials.  Grant review panels prefer positive research that yields good news and gives the appearance of continuous progress.

Looking back on my career, I can see some trials that might have put me in The Chickens**t Club.  For example, we probably didn’t need to do several clinical trials of collaborative care for depression in one health system before taking that idea to scale.

But here’s where the analogy between investigators and prosecutors does not apply:  A prosecutor in a criminal trial is supposed to take sides.  An investigator in a clinical trial shouldn’t have a strong preference for one result or the other.  In fact, clinical investigators have the greatest obligation to pursue trials when the outcome is uncertain.  “Equipoise” is the term to describe that balanced position.

Greg Simon

From 17 years to 17 months (or maybe 14)

Most papers and presentations about improving the quality of health care begin by lamenting the apocryphal 17-year delay from research to implementation.   The original evidence for that much-repeated 17-year statistic is pretty thin: a single table published in a relatively obscure journal.  But the 17-year statistic is frequently cited because it quantifies a widely recognized problem.  Long delays in the implementation of research evidence are the norm throughout health care.

Given that history, I’ve been especially proud of our MHRN health systems for their rapid and energetic implementation of suicide prevention programs.  This month, Kaiser Permanente Washington started implementing MHRN-developed suicide risk prediction models.  Therapists and psychiatrists at our Capitol Hill Mental Health clinic in Seattle now see pre-visit alerts in the electronic health record for patients who are at high risk of a suicide attempt over the next 90 days.  Implementation started only 17 months – not 17 years – after we published evidence that those models accurately predict short-term risk of suicide attempt or suicide death.
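
For readers curious what that pre-visit alert logic can look like under the hood, here is a minimal sketch. Every detail in it is a stand-in: the field names, the percentile cutoff, and the sample records are hypothetical illustrations, not our actual models or the electronic health record build.

```python
# Hypothetical sketch of pre-visit alert logic: flag a scheduled visit when a
# previously computed 90-day suicide-attempt risk score crosses a threshold.
# Field names and the 95th-percentile cutoff are illustrative assumptions.

HIGH_RISK_PERCENTILE_CUTOFF = 0.95  # e.g., top 5% of predicted risk

def needs_previsit_alert(patient_record: dict) -> bool:
    """Return True if the most recent predicted 90-day risk percentile
    meets or exceeds the alert threshold."""
    risk_percentile = patient_record.get("predicted_90day_risk_percentile")
    if risk_percentile is None:  # no score has been computed yet
        return False
    return risk_percentile >= HIGH_RISK_PERCENTILE_CUTOFF

# Example: build a work list of tomorrow's scheduled visits needing an alert.
scheduled_visits = [
    {"patient_id": "A", "predicted_90day_risk_percentile": 0.97},
    {"patient_id": "B", "predicted_90day_risk_percentile": 0.40},
]
alert_list = [v["patient_id"] for v in scheduled_visits if needs_previsit_alert(v)]
print(alert_list)  # ['A']
```

The real work, of course, is in fitting and validating the prediction model that produces the score; the alert itself is the easy part.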

I certainly have experience developing other evidence-based interventions that are still waiting for wide implementation – some for longer than 17 years.  For those interventions, our published papers, PowerPoint presentations, and impressive p-values never prompted much action.  Our experience with suicide risk prediction models has been almost the opposite: health system leaders are pushing for faster implementation.  For months we’ve received frequent queries from regional and national Kaiser Permanente leaders: “When are you rolling out those risk prediction scores in our clinics?” 

In the middle of celebrating Kaiser Permanente’s rapid implementation of risk prediction models, I learned that our colleagues at HealthPartners were three months ahead of us.  I hadn’t realized that health plan care managers at HealthPartners have been using MHRN risk prediction scores to identify people at high risk for suicidal behavior since July.  In true Minnesota fashion, they just did the right thing with no boasting or fanfare.  Reducing the publication-to-implementation gap from 17 years to 14 months doesn’t have quite the same poetic symmetry as reducing it to 17 months.  But I’ll take faster progress over poetic symmetry.

Why did we see so little delay in our health systems’ implementation of suicide risk prediction models and other suicide prevention programs?  We didn’t need to do any marketing to create demand for better suicide prevention.  Health system leaders were clear about priorities and problems needing solutions.  The specific demand regarding risk prediction models was clear:  The questionnaires providers were using to identify suicide risk were better than nothing, but far short of satisfactory.  Knowing the limitations of the tools they were using, clinicians and health system leaders asked:  Can you build us something better?  We weren’t telling health system leaders what we thought they needed.  We were trying to build what they told us they needed.

When I now hear researchers or intervention developers lament the 17-year delay, I ask myself how that complaint might sound in some other industry.  I imagine giving this report to a board of directors: “We developed and tested an innovative product that our customers really should want.  We’ve spent 17 years telling them about it, but our market share is still trivial.  What’s wrong with our customers?”  I doubt the board of directors would say, “Let’s keep doing the same thing!”  Instead, they’d probably ask, “What are our customers trying to tell us about what they need?”

Greg Simon

Outreach is Meant for the People Left Outside!

Several years ago, Evette Ludman and I undertook a focus group study to learn about early dropout from psychotherapy.  We invited health system members who had attended a first therapy visit for depression and then did not return.  Only about one-third of people we invited agreed to speak with us, but that’s a pretty good success rate for focus group recruitment.

We soon learned, however, that the one-third of people who joined our focus group were not the people we needed to hear from.  Many were veterans of long-term psychotherapy who had returned for a single “refresher” visit.  Some had been seeing therapists in private practice (using other insurance) and scheduled a visit with a new therapist to explore other options.  None were people starting treatment for depression who gave up after a single visit.  Our first focus group turned into a hypothetical discussion of why some other people might give up on therapy after just one visit. 

In retrospect, we should have realized that we wouldn’t learn much about giving up on psychotherapy from people who volunteer to join a focus group about psychotherapy.  People living with depression who are frustrated or discouraged about treatment don’t tend to become motivated research volunteers.  We probably should have published something about that experience, but I’m still waiting for someone to establish The Journal of Instructive Failure.

That instructive failure, however, did shape our subsequent research about outreach to increase  engagement in mental health treatment.  Outreach and engagement interventions have been a major focus of our research, but we don’t study engagement interventions among people who are already engaged.  We aim to reach people who are disconnected, discouraged, and convinced that treatment has nothing to offer.  Volunteering to participate in research to increase engagement in treatment should probably make someone ineligible for research on that topic.  For example:  If we hope to learn whether outreach to people at risk can reduce suicide attempts, we certainly shouldn’t limit our research to people who volunteer for a study of outreach to prevent suicide attempt.

If we hope to find those who have been lost, we’ll have to look outside of the bright light under the lamppost.  So our studies of outreach or engagement interventions follow a “randomized encouragement” design.  We identify people who appear to need services but are not receiving them.  We randomly assign some people to receive extra outreach, such as messages and phone calls to offer support and problem-solve barriers to getting mental health care.  The rest continue to receive their usual care. 
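
To make that design concrete, here is a minimal sketch of how randomized encouragement assignment might look in code. The eligibility rule, the 50/50 allocation, and the sample records are invented for illustration; they are not the actual MHRN trial protocols.

```python
# Minimal sketch of a randomized encouragement design: everyone flagged as
# having apparent unmet need is randomized, whether or not they ever respond
# to outreach. The flagging rule and 50/50 allocation are illustrative
# assumptions, not the actual MHRN trial protocols.
import random

random.seed(2019)  # reproducible allocation for the example

def has_apparent_unmet_need(record: dict) -> bool:
    """Example rule: flagged as at risk but no mental health visit in 90 days."""
    return record["flagged_at_risk"] and record["days_since_mh_visit"] > 90

population = [
    {"person_id": 1, "flagged_at_risk": True, "days_since_mh_visit": 120},
    {"person_id": 2, "flagged_at_risk": True, "days_since_mh_visit": 10},
    {"person_id": 3, "flagged_at_risk": True, "days_since_mh_visit": 200},
]

eligible = [p for p in population if has_apparent_unmet_need(p)]
for person in eligible:
    person["arm"] = "outreach" if random.random() < 0.5 else "usual_care"

print([(p["person_id"], p["arm"]) for p in eligible])
```

The essential feature is that everyone flagged with apparent unmet need is randomized and counted, whether or not they ever answer an outreach message.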

That real-world research design answers the question we care about:  Among people who appear to have unmet need, will implementing an outreach intervention increase engagement in treatment – and ultimately lead to better outcomes?  That’s the design of our MHRN Suicide Prevention Outreach Trial, testing two outreach interventions for people at risk of suicidal behavior.  And it’s the design of our MHRN pilot study of Automated Outreach to Prevent Depression Treatment Dropout, testing systematic outreach to people who appear to have discontinued medication or dropped out of psychotherapy.

That real-world randomized encouragement design does impose some requirements, but I think they are features rather than bugs.  First, we must be able to identify people with unmet need before they ask us for help.  That’s been a central focus of our MHRN research, including our recent research on predicting suicidal behavior.  Second, we must be able to use health system records to assess any impact or benefit.  Relying on traditional research interviews or surveys would take us back to the problem of assessing outreach or engagement among people who volunteer to participate in research interviews or surveys.  Third, any benefit of an outreach or engagement intervention is diluted by absence of benefit in those who do not participate.  But that diluted effect is the true effect, if what we care about is the real-world effect of an outreach program.
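
A back-of-the-envelope example (with made-up numbers) shows how that dilution works. Suppose usual care carries a 10% outcome rate, outreach cuts that rate by 30% among people who actually engage, and only 40% of the outreach arm engages:

```python
# Back-of-the-envelope arithmetic (made-up numbers) showing how partial
# participation dilutes the intent-to-treat effect of an outreach program.
baseline_event_rate = 0.10   # assumed outcome rate under usual care
relative_reduction = 0.30    # assumed benefit among people who actually engage
engagement_rate = 0.40       # assumed share of the outreach arm who engage

rate_if_engaged = baseline_event_rate * (1 - relative_reduction)       # 0.07
outreach_arm_rate = (engagement_rate * rate_if_engaged
                     + (1 - engagement_rate) * baseline_event_rate)    # 0.088

print(f"Usual care arm rate:            {baseline_event_rate:.3f}")
print(f"Outreach arm (intent-to-treat): {outreach_arm_rate:.3f}")
```

The trial then estimates roughly a 12% relative reduction rather than 30%, and that smaller number is the honest answer about the program as actually delivered.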

Outreach interventions might seem to work well right under the lamppost, but that’s not where people get lost or left out.

Greg Simon

Marianne Williamson vs. the DSM-5

Psychiatric epidemiology has become a Presidential campaign issue!  Marianne Williamson has taken some heat for her past claim that diagnosis of clinical depression is “such a scam.”  She has backed away from that statement, but she’s stood by her point that “There is a normal spectrum of human despair; it is a spiritual, not a medical issue.” And she’s stood by her claim that antidepressants are over-prescribed when “people are simply sad.”  Is there really any difference between depression and ordinary sadness?  Are antidepressants being prescribed inappropriately for “normal human despair”?

The American Psychiatric Association’s Diagnostic and Statistical Manual (or DSM-5) attempts to draw a line between depressive disorders and “periods of sadness that are inherent aspects of the human experience”.  But the line DSM-5 draws can seem artificial.  Why is a major depressive episode defined by five of nine specified symptoms rather than by four or six?  And why must that episode last at least two weeks rather than 13 days or 15 days?  There are no sharp boundaries, and Williamson’s critique about a “normal spectrum of human despair” seems reasonable.
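
To show just how mechanical that line is, here is the five-of-nine, two-week rule written as a few lines of code. This is a deliberately simplified sketch: the full DSM-5 criteria also require that at least one symptom be depressed mood or loss of interest (included below), plus significant distress or impairment and several exclusions (omitted here).

```python
# Simplified sketch of the DSM-5 major depressive episode threshold described
# above: at least five of nine symptoms for at least two weeks, with at least
# one being depressed mood or loss of interest. Real diagnosis also requires
# distress or impairment and several exclusions, all omitted here.
CORE_SYMPTOMS = {"depressed_mood", "loss_of_interest"}

def meets_episode_threshold(symptoms: set, duration_days: int) -> bool:
    has_core = bool(symptoms & CORE_SYMPTOMS)
    return has_core and len(symptoms) >= 5 and duration_days >= 14

six_symptoms = {"depressed_mood", "insomnia", "fatigue", "poor_concentration",
                "appetite_change", "worthlessness"}
# The arbitrariness described above: 13 days does not count, 14 days does.
print(meets_episode_threshold(six_symptoms, duration_days=13))  # False
print(meets_episode_threshold(six_symptoms, duration_days=14))  # True
```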

But I think Marianne Williamson’s dismissal of depression diagnoses is built on a false premise: that real “medical” disorders are defined by sharp boundaries or discontinuities.  In fact, most chronic diseases lie at one end of a normal spectrum.  Diabetes, for instance, lies at the high end of the fasting glucose spectrum.  Hypertension, to offer another example, lies at the high end of the blood pressure spectrum.  If we require a sharp boundary or discontinuity to define a chronic illness, then most major causes of disability and premature death would be re-classified as part of a normal spectrum.  In general, chronic illnesses are not caused by a single genetic error or by unbounded growth of a single rogue cell.  Instead, chronic illnesses typically result from faulty regulation of complex systems.  Whether or not faulty regulation leads to illness and disability depends on social environment, life events, health behaviors, and (as Williamson has noticed) spiritual practices.  That’s true of depression, but it’s also true of heart disease or diabetes.

Williamson has pointed out that “There is no blood test” to diagnose depression.  But the same is true of Parkinson’s Disease.  The brain is more complicated than the liver or kidneys, and blood tests are just not useful to assess how it is functioning. 

While I think Williamson’s arguments regarding diagnosis of depression are fundamentally flawed, I also see fundamental problems with the DSM-5 (or ICD-10) scheme for separating depressive disorders from ordinary human experience.  DSM-5 criteria for diagnosing depression use one set of rules to address two very different, albeit related, questions. 

The first question is about the essential features of depression.  While the borders of depression may not be sharp, the core is well defined.  The central psychological, behavioral, and physiological features of depression are remarkably consistent across language, culture, and social environment.  Depression is not a social construction or a modern spiritual malaise.  Instead, it’s a well-described disorder and the leading cause of disability worldwide.  The DSM-5 gets that question right.

The second question is about drawing the line where depression becomes an illness or a disorder.  Not only are there no sharp boundaries, but the answer depends on the situation.  When is depression severe enough to warrant treatment?  That depends on the potential benefits and risks of the treatment.  The DSM-5 can’t get that question right, because the answer is “It depends.”  The same question about a severity threshold for diagnosis or treatment applies to hypertension, diabetes, or nearly any chronic illness.  And the answer is “It depends.”

Regarding Williamson’s claim that antidepressants are over-prescribed for “normal sadness”, we can point to some relevant epidemiology research.  Data from our MHRN health systems find very little prescribing of antidepressants for minimal or mild symptoms of depression.  While experts may disagree on the specific threshold above which benefits of antidepressants are clear, most people starting antidepressant medication have moderate or severe symptoms by our standard measures.

I’m a clinician and a researcher, so I’m reluctant to wade into the intersection of politics and spirituality.  That’s risky territory, so I’ll step into it cautiously.  I’ll stay in that territory just long enough to pull the diagnosis of depression back where it belongs – into the boring terrain of epidemiology.  For better or worse, the upcoming Presidential debates will probably not include any epidemiology questions.

Greg Simon

Who Decides when Science is Junk?

During the last month, I found myself in several conversations about promoting open science.  We hope that sharing data, research methods, and early results will make research more rigorous and reproducible.  But those conversations all turned to the fear that data or preliminary results could be misinterpreted or even deliberately misused.  How can we protect the public from misleading “junk science”?

Some examples: The NIH All of Us Research Program will create an online data enclave where researchers from around the world can analyze questionnaires, medical records, and genomic data from hundreds of thousands of Americans.  What will protect against dredging through millions of possible genetic associations to find spurious evidence for racist theories?  The medRxiv online platform aims to accelerate scientific discovery and collaboration by posting research results prior to peer review.  What will protect against posting and publicizing flawed or fraudulent research that peer review would filter out?  Our Mental Health Research Network hopes to share data so other researchers can develop new methods for identifying risk of suicidal behavior.  What will protect against naïve or intentional conflation of correlation with causation?  People who get more mental health treatment will be more likely to attempt suicide, but describing the correlation in that order is misleading. 
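
A toy simulation (with invented probabilities) shows how that misleading correlation arises when underlying risk drives both treatment and suicide attempts:

```python
# Toy simulation of confounding by indication: underlying risk drives both
# treatment and suicide attempts, so treatment and attempts end up positively
# correlated even though treatment has no causal effect in this simulated
# world. All probabilities are invented for illustration.
import random

random.seed(7)
N = 200_000
attempts = {"treated": 0, "untreated": 0}
counts = {"treated": 0, "untreated": 0}

for _ in range(N):
    high_risk = random.random() < 0.10                          # 10% of people
    treated = random.random() < (0.60 if high_risk else 0.10)   # risk drives treatment
    attempt = random.random() < (0.05 if high_risk else 0.005)  # risk drives attempts
    group = "treated" if treated else "untreated"
    counts[group] += 1
    attempts[group] += attempt

for group in ("treated", "untreated"):
    print(group, round(attempts[group] / counts[group], 4))
```

In this simulated world treatment causes no harm at all, yet the treated group’s attempt rate comes out several times higher. Reading that correlation causally would get the story exactly backwards.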

Will open science inevitably lead to more junk science?

Our potential protections against junk science include both prevention and remediation.  Prevention depends on gatekeeping by institutions and designated experts.  We hope to prevent creation of junk science by peer review of research funding.  We hope to prevent dissemination of junk science by peer and editor review of journal manuscripts.  Remediation or repair happens after the fact and depends on the collective wisdom of scientists, science journalists, and policymakers.  We hope that the wisdom of the scientifically informed crowd will elevate credible research and ignore or discredit the junk. 

I am generally skeptical about intellectual or scientific gatekeeping, especially when it involves confidential decisions by insiders.  Our academic gatekeeping processes intend to identify and exclude junk science, but they often fall short in both sensitivity and specificity.  Junk certainly does get through; the peer-reviewed medical literature includes plenty of biased or seriously flawed research.  If you want me to cite examples, that would have to be an off-the-record conversation!  And peer review sometimes excludes science that’s not junk, especially if methods are unconventional or findings are unwelcome.  Those who created conventional wisdom tend to reject or delay challenges to it.

But abandoning gatekeeping and relying on after-the-fact remediation seems even more problematic.  Recent disinformation disasters don’t inspire confidence in the scientific wisdom of crowds or our ability to elevate good science over newsworthy junk.  Medical journals are certainly not immune to the lure of clickbait titles and social media impact metrics.  Media reporting of medical research often depends more on dramatic conclusions than on rigorous research methods.  Systematic comparisons of social media reporting with scientific sources often find misleading or overstated claims.  Plenty of discredited or even fraudulent research (e.g. vaccines causing autism) lives forever in the dark corners of the internet.  To paraphrase a quotation sometimes attributed to Mark Twain: Junk science will be tweeted halfway around the world before nerds like me start clucking about confounding by indication and immortal time bias.

I think some gatekeeping by designated experts will certainly be necessary.  But we can propose some strategies to improve the quality of gatekeeping and reduce bias or favoritism.  Whenever possible, gatekeeping decisions should be subject to public scrutiny.  Some journals now use an open peer review process, publishing reviewers’ comments and authors’ responses.  Could that open process become the norm for peer review of research proposals and publications?  Whenever possible, gatekeepers should evaluate quality of ideas rather than the reputations of people who propose them.  Should all journal and manuscript reviewers be blinded to authors’ identities and institutional affiliations?  Whenever possible, gatekeeping decisions should be based on quality of methods rather than comfort with results.  Could journals conduct reviews and make publication decisions based on the importance of the study question and rigor of the research methods – before viewing the results?  Most important, gatekeepers should cultivate skepticism regarding conventional wisdom.  Perhaps the criteria for scoring NIH grant applications could include “This research might contradict things I now believe.”  To be clear – that would be a point in favor rather than a point against!

Greg Simon

Who Owns the Future of Suicide Risk Prediction?

On my plane rides to and from a recent meeting about big data in suicide prevention, I finally read Jaron Lanier’s 2013 book Who Owns the Future?  If you’ve read it – or read much about it – you can skip the rest of this paragraph.  Lanier is a legendary computer scientist often cited as the creator of virtual reality.  The first half of his book (my eastbound reading) is a jeremiad about the modern information economy de-valuing human work and hollowing out the middle class.  Jobs are disappearing, income inequality is increasing, and personal information is devoured and monetized by big tech companies.  While Lanier gives a free pass to scientific users of big data, I think some of his criticisms still apply to our work.  The second half of the book (my westbound reading) proposes a solution.  It’s a new economic model rather than a new regulatory structure.  Lanier argues that those who create useful information should be paid by those who profit from it.  If you discover a secret shortcut around rush-hour traffic, then Google Maps may route other people to follow you.  Your shortcut might be ruined tomorrow, but your bank account would show a micro-payment from Google for “selling” your discovery to other drivers.  If you are an expert discoverer of efficient driving routes (like my wife), then Google might pay you over and over for the valuable driving data you create.

Of course, individuals can have different preferences about sharing information.  In Lanier’s scheme, each person could own their data and set their own price.  Some people could choose to donate data for public good – not expecting any compensation.  That would be their perfect right.  At the other extreme, some people could set a price so high that no one would be willing to pay.  That would also be their perfect right.

In contrast to many big data skeptics, Lanier does not argue for requiring “opt-in” consent for data use.  In fact, he argues that the “Terms and Conditions” and “End User License Agreements” we click through so frequently are the antithesis of informed consent.  If we are asked for permission more and more often, we will just pay less and less attention to each request.  He proposes fair compensation as a just and more respectful alternative.

Lanier does admit that micro-payments for data use could be more of a thought experiment than a practical solution.  Even if our thousands of daily decisions create value for someone else, we don’t have time to send out all of those tiny invoices.  But Lanier thinks the internet could manage all that accounting for us – if we decided that it should.  And he certainly knows more than I do about what the internet can do.

Even if Lanier’s proposal is just a thought experiment, it did get me thinking.  His central point is that most so-called artificial intelligence is simply re-packaging of decisions by large numbers of old-fashioned intelligent humans.  People discover and create.  Machines just copy and aggregate.  Our current economic rules reward the copying and aggregating rather than the original discovery and creation.  If humans created the original value, shouldn’t they share the eventual profits? 

At first, I couldn’t see how Lanier’s proposed solution would apply to our use of health records data for suicide prevention or other public health research.  We don’t sell or profit from health information.  Answering scientific or public health questions won’t destroy middle-class jobs in health care.  But (one of the few good things about a 5-hour plane ride) I eventually saw how his proposal applies.  Even if no money changes hands when we use health data, we could do a better job acknowledging patients as the true creators or authors of health knowledge.  My friends at the Depression and Bipolar Support Alliance helped me understand that people who live with mood disorders are “experts by experience.” That expertise often goes unrecognized.

If we are using health records data to predict or understand suicidal behavior, then data from every member of our health systems could be useful.  But data from people who attempted suicide or died by suicide would be especially valuable.  We’d also highly value data from people at high risk who did not attempt or die by suicide.  Living with suicidal thoughts is hard work. 

Lanier’s proposed economic model reminds us who created the expertise that lives in health data.  Since it’s close to the 4th of July, I can’t resist a reference to “of the people, by the people, and for the people.” 

Greg Simon

Are the Kids Really Alright?

If you’re my age, “The Kids Are Alright” names a song by The Who from their 1966 debut album, My Generation.  If you’re a generation younger, it names a movie from 2010.  If you’re even younger, it names a TV series that debuted just last year.  What goes around comes around – especially our worry that the kids are not alright.

That worry often centers on how the kids’ latest entertainment is damaging their mental health.  In the 1940s, the villain was comic books.  Prominent psychiatrist Fredric Wertham testified to the US Congress that “I think Hitler was a beginner compared to the comic book industry.” In the 1960s and 1970s, it was the kids’ music – especially songs like “My Generation” with lyrics like “I hope I die before I get old.”  In the 2010s, our concern is about screen time – especially time kids spend on social media and gaming.  Last year, the World Health Organization added Gaming Disorder to the addictions chapter of the International Classification of Diseases.

Evidence regarding screen time and kids’ mental health is mixed – both in quality and conclusions.  Some have pointed to population increases in screen time alongside population increases in suicidal behavior, but that population-level correlation may just be an example of the ecological fallacy.  Some large community surveys have found an individual-level correlation between self-reported screen time and poorer mental health. But that correlation could mean that increased screen time is a cause of poorer mental health, a consequence of it, or just a coincidence.  A team of Oxford researchers used data from actual logs of screen time and more sophisticated analytic methods to account for other differences between heavier and lighter device users.  Those stronger methods found minimal or no correlation between screen time and teens’ mental health.
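
The ecological fallacy part is easy to see with a made-up example: two population averages can drift upward together year after year even when, within any given year, one teen’s screen time tells you nothing about that teen’s distress. The numbers below are purely illustrative, not an analysis of real data.

```python
# Made-up illustration of the ecological fallacy: yearly population averages of
# screen time and distress both drift upward, so the year-level series are
# almost perfectly correlated, even though within each simulated year an
# individual's screen time is unrelated to that individual's distress.
import random
import statistics

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(42)
yearly_screen, yearly_distress, within_year_r = [], [], []
for year in range(2010, 2019):
    drift = (year - 2010) * 0.2          # both population averages rise over time
    screen = [3 + drift + random.gauss(0, 1) for _ in range(500)]
    distress = [2 + drift + random.gauss(0, 1) for _ in range(500)]  # independent of screen
    yearly_screen.append(statistics.mean(screen))
    yearly_distress.append(statistics.mean(distress))
    within_year_r.append(correlation(screen, distress))

print("Year-level (ecological) correlation:", round(correlation(yearly_screen, yearly_distress), 2))
print("Average within-year correlation:    ", round(statistics.mean(within_year_r), 2))
```

The year-level correlation comes out near 1.0 while the average person-level correlation hovers near zero. Trends that move together across populations need not reflect any link within individuals.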

A definitive answer would require a true experiment – randomly assigning thousands of kids to higher or lower levels of screen time.  That will certainly never happen.  If we are left with non-experimental or observational research, then we should ask a series of questions:  Did the researchers declare or register their questions and analyses in advance?  How representative were those included in the research?  How accurately were screen time and mental health outcomes measured?  How well could the researchers measure and account for confounders (other differences between frequent and less frequent device users that might account for mental health differences)?  We should also recall a general bias in mental health or social science research:  Findings that raise alarm or confirm widely-held beliefs are more likely to be published and publicized.

Nevertheless, increasing rates of youth suicide are a well-established fact.  And community surveys do consistently show rising levels of distress and suicidal ideation among adolescents.  And I don’t think we’ve seen any evidence that time on social media or gaming has a positive effect on mental health or suicide risk.

The evidence to date raises questions but doesn’t give clear answers.  For now, I’ll remain cautious – both in what I’d recommend about screen time and what I’d say about the data behind any recommendation.  It’s certainly prudent to recommend limits on screen time.  But it’s not prudent to claim much certainty about cause and effect.  Given the evidence we have, any headline-grabbing alarms would not be prudent.  I’d prefer not to be remembered as the Dr. Fredric Wertham of my generation.

Greg Simon

Bebe Rexha can Call Herself Whatever She Wants

A few weeks ago, the pop star Bebe Rexha told the world via Twitter:
I’m bipolar and I’m not ashamed anymore. That is all. (crying my eyes out.)
— Bebe Rexha (@BebeRexha) April 15, 2019

I cheered about a talented and successful woman announcing that she is not ashamed to live with bipolar disorder.  But I initially paused when reading the words “I’m bipolar.”  That’s the sort of language that many mental health advocates discourage. I’ve been trying to unlearn language like that.

In the past, our language about mental health conditions often seemed to reduce people to diagnoses.  We used to say (without thinking) that a person “is bipolar” or “is schizophrenic.”  Or, even more reductionistic, we would refer to someone as “a bipolar” or “a schizophrenic.”

The disability rights community has sought to overcome such stereotyping by encouraging “people-first language.” The point is that people with a health condition or disability are  – first and foremost –  people.  At the same time that someone is a person with bipolar disorder, they’re also a person with an interesting job.  Or a person with an adorable cat.  Or a person with a fantastic singing and song-writing talent.  Or a person with a neighbor who uses the leaf blower too early on Sunday morning.  People who have bipolar disorder have lots of other ordinary and extraordinary things in their lives, just like other people do.

But it’s really none of my business how Bebe Rexha wants to name or define herself.  If she’s trying to break down stigma, then it may be helpful to confront it as directly as possible.  Many marginalized groups have made a point of re-claiming or re-purposing labels that were historically pejorative or stigmatizing.  The “Mad Pride” movement of mental health consumers is one example, re-claiming words like “mad” and “nuts” to disrupt stigma and cast off shame.

As for me, though, I’ll stick with the people-first language.  I’ll be careful to say that a person has bipolar disorder or lives with bipolar disorder.  And if one of my patients says that they are bipolar, I might even explain why I don’t say it that way.  But I certainly wouldn’t tell them – or Bebe Rexha – what words to use.  After all, Kaiser Permanente’s anti-stigma campaign is called “Find Your Words”, rather than “Use Our Words”.  Bebe Rexha and anyone else can use their own words to talk about their own mental health.  The right words are the ones that make it easiest to speak up.

Greg Simon

Is it Too Soon to Move the Tomato Plants Outside?

It’s the time of year when backyard gardeners start to think about transplanting tomato seedlings from that tray in the sunny part of the kitchen to the real garden outside.  Moving outdoors too soon is risky.  Those little seedlings could get beaten down by the cold rain, nibbled by the escaped Easter bunnies running wild in my neighborhood, or decimated by cutworms.  Gardeners in the Northeast and Midwest even need to worry about late-season snow.  But you don’t want to wait too long, or you’ll end up with root-bound, leggy plants and a disappointing tomato crop.

Those of us who develop mental health interventions often keep them indoors too long.  When I look back on the history of Collaborative Care for depression, I think we waited too long before moving that intervention to the outdoor garden.  We completed four trials of Collaborative Care within Group Health Cooperative (now Kaiser Permanente Washington) before the large, multi-site IMPACT trial tested Collaborative Care across the great outdoors of eight health systems.  Perhaps IMPACT could have happened five years (and two randomized trials) earlier.

For those who develop new treatments or programs, it’s only natural to want to keep those delicate young plants indoors.  Spring weather is unpredictable, and a promising new treatment or program might fail for random reasons.  We might relax our over-protectiveness if we could learn more from those “failures”.  I’ll again cite the IMPACT trial as an example to follow.  Collaborative Care for depression proved effective across a variety of health systems, patient populations, and enrollment strategies.  But that consistency was not a presumption or even a goal.  If Collaborative Care had “failed” in some settings, we could have learned valuable lessons for future implementation.  If we intentionally plant our tomato seedlings in all sorts of conditions, some might not flourish.  But we might learn things we can implement next spring. 

Greg Simon