During the last month, I found myself in several conversations about promoting open science. We hope that sharing data, research methods, and early results will make research more rigorous and reproducible. But those conversations all turned to the fear that data or preliminary results could be misinterpreted or even deliberately misused. How can we protect the public from misleading “junk science”?
Some examples: The NIH All of Us Research Program will create an online data enclave where researchers from around the world can analyze questionnaires, medical records, and genomic data from hundreds of thousands of Americans. What will protect against dredging through millions of possible genetic associations to find spurious evidence for racist theories? The medRxiv online platform aims to accelerate scientific discovery and collaboration by posting research results prior to peer review. What will protect against posting and publicizing flawed or fraudulent research that peer review would filter out? Our Mental Health Research Network hopes to share data so other researchers can develop new methods for identifying risk of suicidal behavior. What will protect against naïve or intentional conflation of correlation with causation? People who get more mental health treatment are more likely to attempt suicide, not because treatment causes suicide attempts, but because people at higher risk receive more treatment. Describing the correlation in that order is misleading.
Will open science inevitably lead to more junk science?
Our potential protections against junk science include both prevention and remediation. Prevention depends on gatekeeping by institutions and designated experts. We hope to prevent creation of junk science by peer review of research funding. We hope to prevent dissemination of junk science by peer and editor review of journal manuscripts. Remediation or repair happens after the fact and depends on the collective wisdom of scientists, science journalists, and policymakers. We hope that the wisdom of the scientifically informed crowd will elevate credible research and ignore or discredit the junk.
I am generally skeptical about intellectual or scientific gatekeeping, especially when it involves confidential decisions by insiders. Our academic gatekeeping processes intend to identify and exclude junk science, but they often fall short in both sensitivity and specificity. Junk certainly does get through; the peer-reviewed medical literature includes plenty of biased or seriously flawed research. If you want me to cite examples, that would have to be an off-the-record conversation! And peer review sometimes excludes science that’s not junk, especially if methods are unconventional or findings are unwelcome. Those who created conventional wisdom tend to reject or delay challenges to it.
But abandoning gatekeeping and relying on after-the-fact remediation seems even more problematic. Recent disinformation disasters don’t inspire confidence in the scientific wisdom of crowds or our ability to elevate good science over newsworthy junk. Medical journals are certainly not immune to the lure of clickbait titles and social media impact metrics. Media reporting of medical research often depends more on dramatic conclusions than on rigorous research methods. Systematic comparisons of social media reporting with scientific sources often find misleading or overstated claims. Plenty of discredited or even fraudulent research (e.g. vaccines causing autism) lives forever in the dark corners of the internet. To paraphrase a quotation sometimes attributed to Mark Twain: Junk science will be tweeted halfway around the world before nerds like me start clucking about confounding by indication and immortal time bias.
I think some gatekeeping by designated experts will certainly be necessary. But we can propose some strategies to improve the quality of gatekeeping and reduce bias or favoritism. Whenever possible, gatekeeping decisions should be subject to public scrutiny. Some journals now use an open peer review process, publishing reviewers’ comments and authors’ responses. Could that open process become the norm for peer review of research proposals and publications? Whenever possible, gatekeepers should evaluate the quality of ideas rather than the reputations of the people who propose them. Should all grant and manuscript reviewers be blinded to authors’ identities and institutional affiliations? Whenever possible, gatekeeping decisions should be based on quality of methods rather than comfort with results. Could journals conduct reviews and make publication decisions based on the importance of the study question and the rigor of the research methods – before viewing the results? Most important, gatekeepers should cultivate skepticism regarding conventional wisdom. Perhaps the criteria for scoring NIH grant applications could include “This research might contradict things I now believe.” To be clear – that would be a point in favor rather than a point against!
Greg Simon