Meta-Science 101, Part 2: Confirming confirmation bias

“You’re a scientist, and you have a hypothesis.  You really like said hypothesis…”

Believe it or not, simply liking a hypothesis is enough to potentially bias the results of a study in its favor. This is a well-studied phenomenon known as researcher allegiance bias, and it has most commonly been associated with the field of psychotherapy.

In psychotherapy, there are multiple different ways of treating a patient with a mental disorder. For example, to treat somebody with a fear of spiders, you could tell the patient to identify their negative thoughts surrounding spiders, recognize those thoughts as irrational, and replace them with more realistic ones. This would be cognitive therapy. Alternatively, you could just show them pictures of spiders until they're comfortable with that, and then gradually work through scarier and scarier things until you're dumping buckets of spiders over their heads. This would be (a caricature of) systematic desensitization.

Naturally, different researchers have different favorite techniques, and multiple studies have been done comparing these techniques against each other. Unfortunately, these studies were apparently hopelessly confounded by researcher allegiance bias, as this discussion of the phenomenon puts it well:

“Among studies by investigators identified as favoring cognitive therapy, cognitive therapy emerged as superior; correspondingly, systematic desensitization appeared the better treatment among studies by investigators classified as having an allegiance to systematic desensitization.

What made this pattern especially striking was that the analysis involved comparisons between the same two types of therapy, with the allegiance of the researchers as the only factor known to differ consistently between the two sets of studies.”

(The original meta-analysis that this discussion refers to can be found here.)

It’s not a good thing when the theoretical inclinations of a researcher can reliably predict the outcome of a study.  Remember that whole thing about scientific results supposedly being a reflection of how the world works?  Yeahhhhhh.

When I said that researcher allegiance bias was a well-studied phenomenon, I meant it.  The above-mentioned meta-analysis (a systematic overview of primary studies) that found researcher allegiance bias is just one of dozens of meta-analyses done on the topic.  So what does one do when one has dozens of meta-analyses?  That’s right: a meta-meta-analysis!  In 2013, Munder et al. conducted a meta-analysis of 30 different meta-analyses and confirmed that there was a “substantial and robust” association between researcher allegiance and study outcome.
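To make the mechanics a little more concrete: a meta-analysis typically pools each primary study's effect size, weighting more precise studies more heavily, and a meta-meta-analysis applies the same idea one level up, treating each meta-analysis's summary effect as a data point. Here's a minimal Python sketch of fixed-effect inverse-variance pooling; the effect sizes and standard errors are invented for illustration and are not the actual data from Munder et al. or any other study mentioned here.

```python
# Minimal sketch of inverse-variance (fixed-effect) meta-analysis pooling.
# The (effect size, standard error) pairs below are made up for illustration;
# they are NOT the actual data from Munder et al. (2013).
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, 0.08), (0.55, 0.20)]

# Each study is weighted by the inverse of its variance, so more precise
# studies count for more in the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for w, (d, _) in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```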

But here's where it gets crazy. Munder et al. also found that meta-analyses whose authors favored the researcher allegiance bias hypothesis (that is, the hypothesis that researcher allegiance is associated with study outcomes) found a greater association between researcher allegiance and study outcomes. In other words, the meta-analyses on researcher allegiance bias were themselves confounded by researcher allegiance bias.[1]

Awkward.

Note that researcher allegiance bias doesn't necessarily involve conscious intent to manipulate data in favor of your own favorite psychotherapy treatment. More likely, subtle things are operative here: how the researcher designs the competing treatment protocols, how the researcher trains the therapists who will actually be carrying out the treatments, and so on. But this just makes the problem of researcher allegiance bias even scarier; what we have to do battle with is not bad actors, but fundamental aspects of human psychology. There have been a number of suggestions on how to moderate the effects of researcher allegiance bias (the same source I quoted above has a good discussion at the end), but I won't talk about them here, as this blog post is already going to be long enough without addressing fixes for science as well.

Bias toward one hypothesis over another doesn't play out only as researcher allegiance bias, however. Perhaps even more powerful than personal inclination is financial interest: when you have a direct financial stake in seeing the results of your study go one way rather than another, this can have a strong biasing effect.

The most well-researched example of this involves comparing industry-funded clinical trials to independently funded trials. If financial interests play a role in biasing research results, we would expect industry-funded trials to show more positive results for the industry sponsor's drugs than independently funded trials do. If true, this would be particularly relevant in the real world, since drug and device companies now fund six times more clinical trials than the federal government does.

Since the last time we looked at a meta-meta-analysis went so well, why don’t we do it again?

This meta-meta-analysis, published by the extremely well-regarded independent organization Cochrane in 2012, looked at 48 different papers, each of which itself compared industry-funded studies to non-industry-funded studies; in total, the review encompassed nearly 10,000 primary studies. The authors concluded that industry-funded studies were 32% more likely to find that the drug tested was effective, 87% more likely to find that the drug wasn't actively harmful, and 31% more likely to come to an overall favorable conclusion about the drug. These results were more or less in line with several previous meta-meta-analyses done on this topic (yes, there have been several).
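For concreteness, figures like "32% more likely" are risk ratios: the proportion of industry-funded studies reaching a favorable result, divided by the same proportion among non-industry-funded studies. Here's a quick sketch with invented counts, chosen only to reproduce a ratio of about 1.32; these are not the Cochrane review's actual numbers.

```python
# How a "32% more likely" figure arises as a risk ratio.
# All counts below are hypothetical, NOT the Cochrane review's actual data.
industry_favorable, industry_total = 660, 1000        # hypothetical
independent_favorable, independent_total = 500, 1000  # hypothetical

risk_industry = industry_favorable / industry_total
risk_independent = independent_favorable / independent_total

risk_ratio = risk_industry / risk_independent
print(f"Risk ratio: {risk_ratio:.2f}")  # 1.32 -> "32% more likely"
```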

As with researcher allegiance bias, industry sponsorship bias often seems to be instantiated via study design. For example, a drug can be tested against a placebo rather than an active control, resulting in an easier bar to clear for the drug to be considered "effective" by the study, or the drug can be tested at lower doses to mask its adverse effects. Whether or not these are conscious study design choices to boost the desirability of a drug, I'll leave up to the reader to decide; the bottom line is that, regardless, we know that industry funding introduces a real bias that ends up affecting the results of studies.
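As a toy illustration of why the choice of comparator matters, here's a small simulation in which the same drug clears a crude "effectiveness" threshold far more often when tested against placebo than against a moderately effective active control. Every number here (effect sizes, sample size, margin) is invented purely for illustration.

```python
# Toy simulation of one design choice mentioned above: comparing a drug
# against placebo rather than an active control. All effect sizes and the
# decision rule are hypothetical, chosen only to illustrate the mechanism.
import random

random.seed(0)
N, TRIALS = 100, 2000
DRUG, ACTIVE_CONTROL, PLACEBO = 0.5, 0.4, 0.0  # hypothetical mean benefits

def trial_wins(comparator_mean):
    """One trial: does the drug group outscore the comparator group?"""
    drug = sum(random.gauss(DRUG, 1) for _ in range(N)) / N
    comp = sum(random.gauss(comparator_mean, 1) for _ in range(N)) / N
    return drug > comp + 0.1  # crude "clinically meaningful" margin

vs_placebo = sum(trial_wins(PLACEBO) for _ in range(TRIALS)) / TRIALS
vs_active = sum(trial_wins(ACTIVE_CONTROL) for _ in range(TRIALS)) / TRIALS
print(f"'Effective' vs placebo: {vs_placebo:.0%}, vs active: {vs_active:.0%}")
```

Under these made-up numbers, the drug looks "effective" in nearly every placebo-controlled trial but in only about half of the active-controlled ones, even though the drug itself never changed.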


Continue to Part 3: P-hacking your way to publication >>>

[1] The snarky response here is that Munder et al. were obviously biased in favor of the researcher allegiance bias hypothesis hypothesis: the hypothesis that researchers with an allegiance to the researcher allegiance bias hypothesis are more likely to find associations between researcher allegiance and study outcomes. Munder et al. can't be trusted! We need a meta-meta-meta-analysis!
