Scientific research findings that are probably wrong gain far more attention than robust results, according to academics who suspect that the bar for publication may be lower for papers with grabbier conclusions.

Studies in top science, psychology and economics journals that fail to hold up when others repeat them are cited, on average, more than 100 times as often in follow-up papers as work that stands the test of time.

The finding – which is itself not exempt from the need for scrutiny – has led the authors to suspect that more interesting papers are waved through more easily by reviewers and journal editors and, once published, attract more attention.

“It could be wasting time and resources,” said Dr Marta Serra-Garcia, who studies behavioural and experimental economics at the University of California, San Diego. “But we can’t conclude that something is true or not based on one study and one replication.” What is needed, she said, is a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed.

The study in Science Advances is the latest to highlight the “replication crisis”, in which results, mostly in social science and medicine, fail to hold up when other researchers try to repeat experiments. Following an influential 2005 paper titled “Why most published research findings are false”, three major projects have found replication rates as low as 39% in psychology journals, 61% in economics journals, and 62% for social science studies published in Nature and Science, two of the most prestigious journals in the world.

Working with Uri Gneezy, a professor of behavioural economics at UCSD, Serra-Garcia analysed how often studies in the three major replication projects were cited in later research papers. Studies that failed replication accrued, on average, 153 more citations in the period examined than those whose results held up. For the social science studies published in Science and Nature, those that failed replication typically gained 300 more citations than those that held up. Only 12% of the citations acknowledged that replication projects had failed to confirm the relevant findings.

The academic system incentivises journals and researchers to publish exciting findings, and citations are taken into account for promotion and tenure. But history suggests that the more dramatic the results, the more likely they are to be wrong. Dr Serra-Garcia said publishing the name of the overseeing editor on journal papers might help to improve the situation.

Prof Gary King, a political scientist at Harvard University, said the latest findings may be good news. He wants researchers to focus their efforts on claims that are subject to disagreement, so that they can gather more data and figure out the truth. “In some ways, then, we should regard the results of this interesting article as great news for the health of the scholarly community,” he said.

Prof Brian Nosek at the University of Virginia, who runs the Open Science Collaboration to assess reproducibility in psychology research, urged caution. “We presume that science is self-correcting. By that we mean that errors will happen regularly, but science roots out and removes those errors in the ongoing dialogue among scientists conducting, reporting, and citing each other’s research. If more replicable findings are less likely to be cited, it could suggest that science isn’t just failing to self-correct; it might be going in the wrong direction.

“The evidence is not sufficient to draw such a conclusion, but it should get our attention and inspire us to look more closely at how the social systems of science foster self-correction and how they can be improved,” he added.
