
SKEPTOID BLOG:

The Very Basics of Peer Review

by Martine O'Callaghan

June 6, 2013


Just a few weeks ago, I wrote an article about the role of peer review in the MMR-causes-autism scandal kick-started by Andrew Wakefield's fraudulent 1998 study. In my other guise of Autismum I'm often locking horns with the pro-disease lobby or autism curebies, who seem to see little value in peer review and have little understanding of the system at all. I know, I know: the standard advice is not to feed trolls but, sometimes, letting nonsense go unchallenged just can't be done. So, if what follows is a bit basic, I apologise.

Typical pop culture representations of scientists show them as male, lone crusaders staring down microscopes or dripping one garishly coloured liquid from one test tube into another. In reality, most science is a team effort, and the hours researchers spend 'doing' science can be equalled by the time it takes to analyse the data collected and write the whole lot up as a coherent study. Once the write-up is complete, it is handed over to a publisher. Whatever the authors' hopes, be it sharing the knowledge they have accumulated, furthering a group's or individual's reputation, or just keeping their hand in and relieving the pressure from above, the fate of all that hard work is now in somebody else's hands.

On submission to a scientific journal, a research paper first goes to the publication's editorial team. They ensure it fits within the scope of their particular journal, that it's interesting, and that they haven't published one just like it an issue or two ago. If the team approve it, 'the peers' are next to get their paws on the paper. Two or more external reviewers, each with a history of published studies of their own, make a detailed appraisal of the work. If they think the submission is any good, with the wind in the right direction and the moon in the most auspicious phase of its cycle, they will recommend it for publication. The ultimate decision on whether a study is published or perishes, however, lies solely with the journal's editor.

The key points considered by peer reviewers


Apparent validity: the results bear some relation to the study's stated aims and claims. Do the results of the study justify its conclusions? Of course, to ensure all the findings are valid would mean repeating the study all over again, and that's just not practical for reviewers to do;

Significance: that the work adds something new to or improves the understanding of a given subject;

Quality: Is the methodology suitable to answer the questions posed? Are there suitable controls? Is the sample size adequate? Are the methods for analysing the data suitable? Have the data been interpreted correctly, or have they been over-interpreted? And a hundred and one other questions;

Originality: Is this a new take on an old question? Is this a whole new question that no-one has thought to ask before? On occasion, the quest for the novel seems to trump all other considerations, and poor quality studies get published in otherwise good journals, often to generate publicity and thus garner new readers. When assessing a piece of work it is necessary to ask: is it original, or simply controversial?

Science in the news


When a science story appears in the press, it's handy to know whether it has been published in a journal that operates a peer review system. An entry on PubMed is not necessarily a sign of quality or of having been through the peer review process. And even if the work some journalist has got so excited about has, on inspection, been peer reviewed, that does not automatically mean it is a good study or that its findings are valid.

It's important to remember that all journals are not created equal. There are many reputable journals that operate robust peer review, but even they can fall foul of their stated aims and publish work for the sake of publicity. Perhaps the most famous example of this, and the one with the furthest-reaching consequences, is the 1998 publication in the British journal The Lancet of Andrew Wakefield's study that sparked the fear that the combined measles, mumps and rubella (MMR) vaccine could cause autism. This is not an isolated case, though. Even the best publish nonsense, contends Mark Crislip, an infectious disease specialist:
Then the Annals of Internal Medicine had their absolutely ghastly series on SCAMS...Since that series of articles, I have doubt whenever I read an Annals article. When a previously respected journal panders completely to woo, they lose all respectability.
and on the New England Journal of Medicine publishing on acupuncture:
Can you believe this? From the NEJM! Such total tripe. I rely on the NEJM to provide reviews of relevant medical topics as, outside of ID and quackery, I do not have the time to read the primary literature...The NEJM has lost some of its credibility. I doubt they will ever get it back.
The public health crisis and the ongoing consequences of the decision to publish the now-retracted Wakefield paper prompted the British government to examine the peer review process. Its report, Peer Review in Scientific Publications (House of Commons Science & Technology Committee, July 2011), calls for bodies to be set up to review scientists' work even before submission for publication. Many people, such as the journalist Brian Deer, who exposed the Wakefield fraud, think this is a good way forward. However, few scientists agree. It is difficult to see how such measures could stop a tenacious researcher bent on fraud, and some scientists regard the suggestion that fraud on the scale of Wakefield's is endemic in science as a "seriously cheap shot."


@Skeptoid Media, a 501(c)(3) nonprofit
