Wednesday, August 31, 2005

"Why Most Published Research Findings Are False"

From the same guy who brought us a study showing that one-third of clinical studies are later contradicted by larger studies (excellent summary by Orac here), we now have a rather alarmingly titled paper arguing that more than 50% of all published scientific papers are wrong. New Scientist summarizes:
Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.

John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, selective reporting, and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.

"We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.
Ironically, the reward system in science means that the first person gets all the credit, while there's no glory in repeating the result, and only a little more in refuting it. Inevitably, this means that "hot" fields with many competing labs are particularly susceptible to false published results, as there is both intense pressure to publish first (no one wants to be scooped!) and a strong bias toward pursuing (and publishing) only positive results. This has been true, most notably, of recent research in stem cells.
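
To put a rough number on that dynamic (a back-of-the-envelope sketch with made-up figures, not anything from the paper): suppose a hypothesis is in fact false, and twenty competing labs each test it at the conventional p < 0.05 threshold. The chance that at least one lab clears the bar purely by chance is already close to two-thirds.

```python
# Back-of-the-envelope sketch (assumed numbers, not Ioannidis's model):
# probability that at least one of n independent labs testing a *false*
# hypothesis gets a statistically significant result by chance alone.
alpha = 0.05   # conventional significance cutoff
n_labs = 20    # competing labs in a "hot" field (illustrative assumption)

p_at_least_one_false_positive = 1 - (1 - alpha) ** n_labs
print(f"{p_at_least_one_false_positive:.0%}")  # ~64%
```

And since only the lab with the exciting positive result writes it up, the literature records that one false positive and none of the nineteen quiet null results.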

Eventually, things do get sorted out; results must be repeated at some point (if for no other reason than that you often need to build on previous protocols to get to the next step in a research program). But all of this means that one should take new results with a healthy dose of skepticism.

This news will be a double-edged sword, psychologically, for practicing scientists: on the one hand, it's obviously depressing that if you discover something, odds are it will turn out to be false. On the other hand, you can probably still get a paper out of it even so.

[Via Crumb Trail.]

Update, 2 September: Alex Tabarrok has a nice clear explanation of why, statistically speaking, one would expect most research findings to be false.
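
The gist, in miniature (a hedged sketch with illustrative numbers, not Tabarrok's or Ioannidis's exact figures): if only a small fraction of the hypotheses scientists test are actually true, then even honest studies run at p < 0.05 can generate more false positives than true positives, so a randomly chosen positive finding is more likely false than true.

```python
# Minimal sketch of the base-rate argument (all numbers are
# illustrative assumptions, not taken from the paper).

def positive_predictive_value(prior, alpha=0.05, power=0.5):
    """Probability that a published positive finding is actually true.

    prior -- fraction of tested hypotheses that are true
    alpha -- significance threshold (false-positive rate)
    power -- chance a real effect is detected (1 - beta)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Underpowered studies in a field where 1 in 10 hypotheses is true:
print(positive_predictive_value(prior=0.1, power=0.2))  # ~0.31
# Better-powered studies only get you to roughly a coin flip:
print(positive_predictive_value(prior=0.1, power=0.5))  # ~0.53
```

Selective reporting and researcher bias, which the paper also models, only push these numbers lower.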

6 Comments:

Blogger Chris said...

I'll be honest, this doesn't bother me. I mean, in a fledgling science like mine (cog sci), being wrong is just part of the game. For us, a finding that lasts 20 years is extremely rare, and probably either very general or incredibly specific. But research that is wrong drives a lot more research. Sure, there's no fame in showing that earlier findings were incorrect (unless it's a really big finding -- the person who demonstrates that human children can learn language from the statistical information in the environment will be famous), but when a finding is wrong, it almost always makes people who think it's wrong do better, more rigorously controlled research to show that it is. And that's good for science. Sure, a bad experiment or two will slip through the cracks now and then, and not be contradicted for some time, but that's the price you pay for using statistics, and for doing necessary research even in areas where things are messy. And all of the cognitive scientists I know take pretty much everything with a healthy dose of skepticism, unless their names are Buss, Lakoff, or Cosmides.

I guess it's a bit more dangerous in medicine, but I think the success of medical science speaks for itself, so I'm not terribly worried about that either.

8/31/2005 08:07:00 AM  
Blogger Andrew said...

Yeah, I think that's the right attitude to take - the guy in the New Scientist article basically summed it up saying "well any good scientist should already be skeptical of any single new paper, so this is no big deal, it's just common scientific sense."

8/31/2005 09:35:00 AM  
Blogger driftwood said...

It also helps to remember that the peer reviewed journals are not the only conversation in science. Scientists all have their own opinions and they sometimes tend to disbelieve new results that don’t fit their view. So even if they don’t publish their doubts, they are sure to tell anyone who will listen what they think is wrong with the result. This at least means that people are aware of possible holes in the new work.

8/31/2005 05:09:00 PM  
Blogger Andrew said...

That's true as well. I was going to point out that sometimes the scientific consensus is actually wrong and no one notices the hidden flaw that's producing the false result, but I think Ioannidis isn't going after something as profound as Kuhn's normal science and paradigm shifts, just the everyday trial-and-error nature of science.

As always, the virtue of science isn't that it's never wrong, it's that usually it can eventually spot and correct its errors.

9/02/2005 12:23:00 AM  
Blogger Ma Tiny said...

did he predict the true/false rate by journal? the top-tier journals have higher standards, and one would expect a better validation rate on the results published in Science, Nature, and Cell. there's a reason those journals are respected.

9/07/2005 06:14:00 PM  
Blogger Andrew said...

You know, actually I would almost predict the opposite. Or at the very least, there are two factors working in opposite directions. One is that, as you say, prestigious journals have higher standards and demand more rigorous evidence. But on the other hand, prestigious journals also want articles in "sexy" fields: articles that break big new ground, report surprising findings, and represent an advance in a field that lots of people are working in. All of these factors make a finding less likely to be true, as I noted in the post: when lots and lots of people are working in a field, under time pressure to publish first and to find "positive" results, it becomes more likely that false hypotheses will be reported as true.

But anyway, as it happens, he didn't predict a true/false rate by journal.

9/07/2005 08:40:00 PM  
