GreenTechSupport GTS 井上創学館 IESSGK

GreenTechSupport News from IESSGK

news20090915nn2

2009-09-15 11:42:20 | Weblog
[naturenews] from [nature.com]

Published online 14 September 2009 | Nature | doi:10.1038/news.2009.914
News
Sneak test shows positive-paper bias
Reviewers keener to give thumbs up to papers with positive results

By Nicola Jones

VANCOUVER

{Reviewers were more critical of no-difference papers than positive papers.
GETTY}

The bias towards positive results in journal publications has been confirmed through a cunning experiment.

Seth Leopold of the University of Washington, Seattle, composed two versions of a fake paper comparing the relative benefits of two antibiotic treatments. They were identical except for one critical difference: one paper found that one treatment was better than the other, while the other found no difference between the two. Reviewers were far more likely to recommend the positive result for publication, Leopold and his colleagues found. Worse, reviewers graded the identical 'methods' section as better in the positive paper, and were more likely to find sneakily hidden errors in the 'no-difference' paper, presumably because they were feeling more negative and critical about the latter work.

"That's a major problem for evidence-based medicine," says Leopold, who presented the work on 11 September at the Sixth International Congress of Peer Review and Biomedical Publication in Vancouver, British Columbia. Such a bias can skew the medical literature towards good reviews of drugs, affecting consensus statements on recommended treatments. "We should be more critical of positive studies," he says.

Wanting to believe

Previous studies have hinted at a 'positive outcome bias', based simply on the sheer number of papers published with positive versus 'no-difference' results. But it wasn't clear whether some other aspect of 'no-difference' papers, such as methodological problems or a lack of impact, might make editors turn up their noses. Leopold's study is the first experiment to attempt to pin this down.

"It just goes to show that peer review is done by biased, subjective people," says Liz Wager, managing director of the Sideview consultancy in Princes Risborough, UK, and chair of the UK-based Committee on Publication Ethics. "Everyone wants the new stuff to work — they want to believe."

{“It just goes to show that peer review is done by biased, subjective people.”
Liz Wager
Committee on Publication Ethics}

The two imaginary studies were of very high quality, conforming to all good standards of research, involving multiple study centres and oodles of good data. "It's easy to make such a study if you don't have to actually do it," Leopold jokes. They compared two strategies of antibiotic treatment for surgery patients: a single dose of drugs before surgery versus a starter dose plus a 24-hour follow-up course of drugs. The relative benefit of these strategies is under debate by clinicians, so a positive and a no-difference result should have had equal impact on patient care; both should have been equally interesting.

But when more than 100 reviewers at the American edition of Journal of Bone and Joint Surgery (JBJS) were given one of the papers to assess, 98% of reviewers recommended the positive-result paper for publication, while only 71% recommended the nearly identical 'no-difference' paper. Strikingly, these reviewers also gave the entirely identical methods section a full point advantage (on a scale of one to ten) in the positive paper. "There's no good explanation for that," says Leopold. "That's dirty pool."

Error catchers

Five intentional small errors were sneaked into the papers, such as having slightly different numbers in a table compared with the text. Reviewers at the JBJS caught only an average of 0.3 errors per reviewer in the positive paper, but perked up their critical faculties to catch 0.7 errors per reviewer in the 'no-difference' paper.

Another 100 reviewers at the journal Clinical Orthopaedics and Related Research were similarly affected in their judgement, but not to a statistically significant degree. This might partly be because these reviewers guessed they were part of an experiment, Leopold says: this journal tells reviewers that they are reviewer number 'x' on a paper, and once that number goes past '5' or so it starts to look very suspicious.

Some have hypothesized that positive-result bias might come from researchers deciding not to bother submitting 'no-difference' results. This study shows that peer reviewers are probably playing a role too, says Leopold. "We have reason to suspect this is true across all specialties," he says.

