Meta-Science 101, Part 5: Academic funding and the pressure to publish

“In the end, you publish…guaranteeing you funding for a while.”

“Publish or perish” has become a sort of sad mantra for academics.  All across the world, university faculty are feeling more and more pressure to push out publications.  Without publications, you lose your funding.  Without funding, you lose your ability to pay graduate students and postdocs.  And without graduate students and postdocs, you lose your ability to do good science, which can lead to less publication output, starting the cycle over again.  If you’re an established professor, then this means loss of status and personal fulfillment; if you’re pre-tenure, this means you’re probably out of a job soon.  This is an outline of a career path that no academic wants to follow, so you can see why everyone in academia is so determined to publish, publish, publish.

But why does it feel like things have gotten worse recently?  What’s led academia to this hypercompetitive state?

I came into writing this section thinking that a lack of funding was the reason.  If there’s less money to go around, then in order to win grants, academics will have to publish more to separate themselves from the crowd, leading to the “publish or perish” mindset.  In support of this view, federal science funding as a percentage of GDP has steadily declined for decades now.

But take a look at absolute university science funding over the past 40 years (inflation-adjusted):

[Figure: inflation-adjusted federal funding for university science over the past 40 years (graph taken from the NSF website)]

What you see is a gradual increase in funding up until 2011, when you start to see a decline.  But the decline doesn’t look like that much in the grand scheme of things; it can perhaps more accurately be called a plateau.  The fact that federal science funding, as a percentage of GDP, has been declining has more to do with the rise in GDP than with any sort of drastic cuts in science funding.  So while plateauing science funding certainly doesn’t help matters, I don’t think it can fully account for the academic funding crisis we see.
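To get a feel for how a share-of-GDP decline can happen without any cuts, here’s a toy calculation (the numbers are hypothetical, chosen only for round arithmetic, not real data): hold real funding flat and let real GDP grow at about 2% per year.

# Hypothetical illustration: flat real funding vs. growing real GDP.
# None of these numbers are real data; they're made up for illustration.
funding = 40.0       # constant real funding, in billions (made up)
gdp = 10_000.0       # starting real GDP, in billions (made up)
growth = 0.02        # assumed 2% annual real GDP growth

share_now = funding / gdp
share_later = funding / (gdp * (1 + growth) ** 35)  # 35 years later
print(f"funding as a share of GDP: {share_now:.2%} -> {share_later:.2%}")
# Prints roughly 0.40% -> 0.20%: since 1.02**35 ~= 2, flat funding
# halves as a share of GDP in ~35 years with no actual cut.

So funding that merely plateaus while the economy keeps growing will, all by itself, look like a steep relative decline.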

Now look at the rise in grant applications to the NIH over the past 15 years:

[Figures: NIH grant applications and success rates since 1998 (taken from the NIH website)]

Application rates have more than doubled since 1998 (going from 24,000 to 52,000), and the decline in success rate roughly tracks that, dropping from 31% in 1998 to 18% now.  The situation at the NSF is the same: Grant applications have risen from 28,000 in 1998 to 50,000 now, and the success rate has correspondingly dropped from 33% to 24%.

So grant success rates have dropped despite an absolute increase in university science funding, because of the massive increase in grant applications.  Putting on my economist hat: the current academic funding crisis is not a supply-side problem but a demand-side one.
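As a sanity check on that framing, we can back out the implied number of awards from the figures quoted above (a rough sketch; these are the approximate numbers from this post, not official NIH/NSF data series):

# Back-of-the-envelope: awards ~= applications x success rate,
# using the approximate figures quoted in this post.
figures = [
    ("NIH", "1998", 24_000, 0.31),
    ("NIH", "now",  52_000, 0.18),
    ("NSF", "1998", 28_000, 0.33),
    ("NSF", "now",  50_000, 0.24),
]

for agency, year, apps, rate in figures:
    print(f"{agency} {year}: {apps:,} applications x {rate:.0%} "
          f"~= {apps * rate:,.0f} awards")
# NIH: ~7,400 -> ~9,400 awards; NSF: ~9,200 -> ~12,000 awards.

The number of awards has actually grown modestly; it’s the number of applicants that has roughly doubled, which is exactly what a demand-side story predicts.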

There are interesting ideas about causes and solutions, but for the purposes of this post, all we need to know is that it’s happening: more and more scientists are being trained, resulting in hypercompetition and the “publish or perish” mentality.  What are the consequences?

Well, the academic funding crisis has a hand in pretty much every other problem in science covered in this series so far: researcher allegiance bias, p-hacking, and the quality of peer review, plus the yet-to-be-discussed publication bias and replication crisis.  If researchers need publications to get funding, they’ll be more likely to interpret their findings in a positive light, e.g. via p-hacking; this manifests itself in researcher allegiance bias.  If the pressure to publish is high, peer review becomes less and less of a priority; when PIs finally get around to a review, they rush through it, letting unconscious biases creep in.

To be fair, this is a bit speculative; I don’t know of any study that explicitly links publication pressure to any one of these problems in particular.  But scientists themselves seem cognizant of the pressures and their likely consequences, and are speaking out against them.  And the line from incentive to behavior here is direct enough that I’d be surprised if publication pressure weren’t affecting these things to some extent.

Plus, we do have this study, which showed that authors were more likely to report positive results in US states with more publications per doctorate holder in academia (a figure the authors used as a proxy for how much pressure there is to publish in a given state).  If you’re an average academic researcher in Washington, D.C., where the norm is about 0.9 publications per year, you’re about 4 times as likely to report a positive result in each publication as you would be in North Dakota, where the norm is only 0.4 publications per year.

[Figure: Fig. 2 from the paper]

Of course, correlation doesn’t imply causation; you always have to be wary of confounding factors in correlational studies.  The most obvious response to these findings is to say, “Well, duh.  Researchers at more prestigious institutions publish more papers.  And they’re better at science, so of course they’re going to find more positive results.”

The authors have two replies.  First, they note that when they controlled for R&D expenditure, which you’d expect to be higher at better institutions, their finding didn’t go away; if anything, it became more statistically significant.  That counts as evidence against the idea that institutional prestige is the confounding variable here.
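If “controlling for” a variable sounds abstract, here’s what it means mechanically: include the suspected confounder as an extra regressor and see whether the coefficient of interest survives.  A sketch with synthetic data (nothing here comes from the paper; the variable names and numbers are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # hypothetical "states"
rd_spend = rng.normal(0.0, 1.0, n)       # suspected confounder
pubs = 0.5 * rd_spend + rng.normal(0.0, 1.0, n)             # pressure proxy
positives = 0.4 * pubs + 0.3 * rd_spend + rng.normal(0.0, 1.0, n)

def slope(predictors, y):
    # OLS via least squares; returns the coefficient on the first predictor.
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("pubs coefficient, no control:  ", round(slope([pubs], positives), 3))
print("pubs coefficient, with control:", round(slope([pubs, rd_spend], positives), 3))
# If the association survives once rd_spend is included, then rd_spend
# alone can't explain it -- which is the authors' argument for why
# institutional prestige (proxied by R&D spending) isn't the whole story.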

Their second reply is to look at a state like Michigan, where the likelihood of finding a positive result was something like, um, 97%.  Either Michigan researchers are inflating their results in some way, or they’re just really, really good at science.  In which case, I’ll have what they’re having.

The study didn’t try to delve into what’s behind this correlation, but the authors suspect a large part of it comes from scientists, under pressure to publish, massaging negative results into positive ones à la p-hacking: not because the scientists involved have malicious intent, but because they just know there’s a positive result in there somewhere; all they have to do is find the right interpretation, and the data will show it.  The authors also speculate that the correlation can partly be explained by the selective publication of positive studies over negative ones, also known as publication bias.  Which brings me to my next trick…


Continue to Part 6: Publication bias >>>
