Meta-Science 101, Part 9: A conclusion

Let’s review.

Science in theory is pretty great, but in practice, science is done by people.  People have flaws, and these flaws infiltrate scientific practice.  Confirmation bias translates into researcher allegiance bias; you’re more likely to obtain a positive result if you believe in the hypothesis you’re testing.  The reasons for this are hard to tease out and may stem from subconscious processes, but part of it is probably p-hacking, or using shady statistical tricks to get your p-value down below the 0.05 threshold generally used to distinguish publishable from non-publishable results.  Researchers are partly incentivized to p-hack by the hypercompetitive environment in academia, driven by the ballooning of grant applications while federal funding has plateaued.  You would hope that peer review would catch studies relying on these p-hacking techniques before they’re able to contaminate the published literature, but the peer review process itself is riddled with randomness and bias, and is not exactly a reliable filter that lets only high-quality studies through.  All of this means that a lot of studies don’t replicate, and most published research findings may be false.  Going up to the level of meta-analysis doesn’t solve all your problems, because even good meta-analyses of good studies can be driven to the wrong conclusions by publication bias; positive results are more likely to be included in meta-analyses because they’re actually published, while negative results languish in file drawers.  And, as Bem and the field of parapsychology have been proving for many years now, even really really good meta-analyses that take publication bias into account can lead to conclusions that the rest of us are pretty sure are false, like that events in the future can reach back in time and influence us subconsciously in the present.
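If the p-hacking mechanism sounds abstract, here's a minimal sketch (my own illustration, not from any study discussed in this series) of one common form of it: measuring several outcomes in a study where the true effect is zero, and reporting only the one with the smallest p-value. The function names and parameters are hypothetical; the point is just that the false-positive rate climbs well above the nominal 5%.

```python
import math
import random
import statistics

random.seed(42)

def t_test_p(a, b):
    """Two-sample test p-value via a normal approximation to the
    t statistic (rough, but adequate for illustration at n=30)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def run_study(n=30, n_outcomes=5):
    """Simulate a null study: no real effect, but the researcher
    measures n_outcomes outcomes and keeps only the best p-value."""
    ps = []
    for _ in range(n_outcomes):
        control = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(0, 1) for _ in range(n)]  # true effect = 0
        ps.append(t_test_p(control, treatment))
    return min(ps)  # the p-hack: cherry-pick the most "significant" outcome

trials = 2000
false_positives = sum(run_study() < 0.05 for _ in range(trials))
rate = false_positives / trials
print(f"False-positive rate when reporting the best of 5 outcomes: {rate:.1%}")
```

With five independent outcomes, the chance of at least one p < 0.05 under the null is roughly 1 − 0.95⁵ ≈ 23%, which is what the simulation recovers; and this is only one of many p-hacking tricks (optional stopping, flexible exclusions, and so on).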

…Whew.  Based on that summary, science seems in pretty bad shape right now; most of the research at the bottom of the epistemic pyramid is probably wrong and even the top has been infiltrated by snakes.  But I don’t want to leave you with an overly skeptical view of science.  There are a few caveats I have to make about everything I’ve said so far.

First: Some of you may have noticed the apparent irony in me relying on scientific studies to criticize science.  The reason is this: Using science is better than not using science.  Science may have its problems, but it’s still the best tool we have.  I mean, when it comes to making claims about the world, we can either: a) systematically test those claims by throwing them against the world, seeing how the world responds, and trying to interpret the response the best we can, or b) rely on our intuitions, which are a product of evolutionary processes that optimized for survival and reproduction in the ancestral environment, not for truth-finding in the modern world.  As bad as we might be at a), I’ll choose a) over b) any day.

Second: Science isn’t actually as bad as I made it out to be.  In my digging into the meta-science literature over the past month or so, I kept finding little nuggets of good news, and I would be remiss if I didn’t mention them here.  You know that meta-meta-analysis of researcher allegiance bias that I mentioned wayyyy back in Part 2, the one that found a “substantial and robust” association between researcher allegiance and study outcome?  Well, by “substantial” it meant that researcher allegiance bias only explained 7% of the variance between study outcomes.  This study on p-hacking found that, while p-hacking is widespread in the scientific literature, it probably doesn’t affect the conclusions of meta-analyses all that much, since p-hacking is more common in studies with small sample sizes, which are given less weight in meta-analyses anyway.  This study comes to a similar conclusion on publication bias; although widespread, it may only affect the conclusions of ~10% of meta-analyses.  Finally, contrary to what the Reproducibility Project found on their replications of 100 psychology papers, ~70-80% of replications in psychology overall are successful.  So while science may have problems, we have to be honest about the scale and impact of these problems as well.

Third: A lot of the problems with science are concentrated in two fields: psychology and biomedicine.  If you’re looking at literature outside those fields, you can be way more confident in the results.  Heck, in some fields the problems I mentioned with science don’t even apply.  In my own field of synthetic organic chemistry, for example, basically what I do is “make thing, report that I made thing.”  There are no p-values involved, so p-hacking isn’t an issue; and researcher allegiance bias has its work cut out for it, since it would be really hard for me to convince myself that I made something that I didn’t.  In general, results obtained in the hard sciences seem to me to be basically trustworthy.

Last, and this is the optimistic, future-looking conclusion I want to leave you with: Scientists are not blind to all these problems.  There seems to be a general awareness that researcher allegiance bias, p-hacking, etc. are things to be dealt with, and there has been plenty of discussion in the media on how to improve each aspect of science that I talked about.  Organizations have popped up that specifically focus on improving scientific practice.  Remember John “most-published-research-findings-are-false” Ioannidis?  He co-founded the Meta-Research Innovation Center at Stanford (METRICS) in 2014, which aims to improve the quality of scientific research through conducting meta-science research, among other things.  And Brian “40%-of-psychology-experiments-don’t-replicate” Nosek co-founded the Center for Open Science in 2013, which has a similar mission and is currently carrying out a large-scale reproducibility project on studies in cancer biology.

So I would like to think that meta-science has momentum.  Science may have its problems, but scientists recognize these problems and are hard at work trying to fix them.  And if there’s any group of people I would trust to solve problems like these, it’s scientists.  After all, these people are ambitious, talented, insanely smart, curious, data-driven, truth-hungry.  If you tell them there’s an obstacle between them and the truth about reality, you better believe that obstacle is going to come crashing down.

Biases and flaws may be viruses infecting science right now, but those viruses are soon going to find out that science has got one hell of an immune system.  And as far as infections go, this is no more than a common cold; give science a little time, and it’ll come back stronger than ever.  You think you’re going to keep science bedridden for long?  Come onnnnn.

I look forward to the day that science has put the proper institutions and practices in place so that it’s functioning at 100%.  Because then…look out, world.  Science is on the move, and it’s got truth to find.

