Meta-Science 101, Part 1: An introduction

(Comic: "Significant", taken from xkcd)

Science is in dire straits.

As you may know, a white man in a position of power has recently been criticizing science relentlessly, and it seems like that will continue to be the norm for at least the next four years.  His actions seem informed by a worldview in which science is mostly false or useless, provoking strong reactions from scientists worldwide.  Most troubling of all, he’s just getting started; there’s no telling what he’ll do next, emboldened by his newly acquired institutional power.

You all know who I’m talking about.  That’s right: John Ioannidis.

Uh, who?

Contrary to the misleadingly worded intro paragraph, John Ioannidis is not someone out for the blood of scientists; rather, he's a Stanford professor who's played a huge role in bringing to light a multitude of problems entrenched in modern scientific practice, and he's dedicated his career to figuring out how to fix them.  And he's not alone; Ioannidis is just one representative of a larger movement in the scientific community, which has become more self-critical and introspective in recent years.  If you've heard talk of "p-hacking" or "the replication crisis" recently, this is why.

This post is an attempt to synthesize the main problems in science that have surfaced as a result of this scientific self-reflection.  If we can call this movement meta-science, then welcome to Meta-Science 101: a whirlwind tour through the biases and flaws that currently plague science, and the result of the past month or so I've spent diving into this topic.

Let's start off with how science is supposed to work.  You're a scientist, and you have a hypothesis.  You test the hypothesis with a well-designed experiment, and your results come back; these results give you an insight into how the world works.  Without any fudging of data, you write up your result and submit it to a journal, where it undergoes peer review.  The scientific community judges whether or not your work is up to snuff, and the work gets published or not, accordingly.  If your work is published, it gets replicated by another lab, confirming your original result.  Rinse and repeat across millions of scientists practicing worldwide, shower in the output of true knowledge generated.

Now let's take a look at how science too often actually works.  You're a scientist, and you have a hypothesis.  You really like said hypothesis, so you design an experiment to test it.  The results come back, but they're not super clear, so you interpret them in various ways until you find a positive, statistically significant result in favor of some variant of your original hypothesis.  Researchers in your field receive your manuscript for peer review, and make semi-arbitrary recommendations to the editor.  In the end, you publish, so your boss is pleased, and more importantly the grant committee funding your boss's proposals is pleased, guaranteeing you funding for a while.  Phew!  Good thing you got that positive result.
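To make that "interpret them in various ways" step concrete, here's a minimal sketch in Python.  It's pure made-up simulation, not data from any real study, and the specific numbers are arbitrary: each fake study collects a dataset with no real effect in it, slices it into twenty subgroups, runs a t-test in each, and counts as a success if any one of them clears p < 0.05, jelly-bean-comic style.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_papers = 10_000    # simulated studies, each analyzing pure noise
n_subgroups = 20     # "jelly bean colors": ways of slicing the same dataset
n_per_cell = 30      # samples per (subgroup, condition) cell

lucky_papers = 0
for _ in range(n_papers):
    # One dataset per paper: control and treatment drawn from the SAME
    # distribution, so the true effect is zero everywhere.
    control = rng.normal(0.0, 1.0, size=(n_subgroups, n_per_cell))
    treatment = rng.normal(0.0, 1.0, size=(n_subgroups, n_per_cell))
    # Re-run the test once per subgroup and keep whichever result looks best.
    pvalues = stats.ttest_ind(control, treatment, axis=1).pvalue
    if pvalues.min() < 0.05:
        lucky_papers += 1

print(f"Papers with a 'significant' finding to report: {lucky_papers / n_papers:.0%}")
# Expect about 1 - 0.95**20, i.e. roughly 64%, despite zero real effects.
```

Even though nothing real is going on in any of these simulated datasets, nearly two-thirds of the fake studies end up with something "significant" to report.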

Meanwhile, alternate-universe-you that didn’t find a statistically significant result doesn’t publish.  The results sit in a file drawer.
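And here's an equally rough sketch of why that file drawer matters.  Again, this is pure simulation with arbitrary numbers: suppose there is a small true effect, but only the studies that happen to hit p < 0.05 ever leave the drawer.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.2    # small real difference between groups, in standard deviations
n_studies = 5_000    # labs all running the same underpowered study
n_per_group = 30

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    observed = treatment.mean() - control.mean()
    all_effects.append(observed)
    # Only statistically significant results escape the file drawer.
    if stats.ttest_ind(treatment, control).pvalue < 0.05:
        published_effects.append(observed)

print(f"True effect:                      {true_effect:.2f}")
print(f"Average effect across all labs:   {np.mean(all_effects):.2f}")
print(f"Average effect in the literature: {np.mean(published_effects):.2f}")
```

In this setup the published average lands around triple the true effect, because small studies only cross the significance threshold when they overshoot; that also hints at why replications of published findings so often come back weaker than the originals.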

But anyway, this-universe-you is happy that you published.  Your work never gets replicated, but if it were, the replication might well fail to confirm your finding.

…There’s a ton of stuff packed into those last three paragraphs, so we’ll spend the rest of the post elaborating point by point.  First up, confirmation bias.


Continue to Part 2: Confirming confirmation bias >>>
