"Most people think peer review is some infallible system for evaluating knowledge. It's not. Here's what peer review does not do: it does not try to verify the accuracy of the content. They do not have access to the raw data. They don't re-run the statistical calculations to see if they're correct. They don't look up references to see if they are appropriate/accurate."

Couldn't agree more with the Last Psychiatrist. We just assume the raw data are accurate. Every study likely contains some small data entry or calculation errors, but what if the whole paper is based on a significant misrepresentation of the raw data? Wouldn't that be a large problem? What is reported and what is not? To put it in layman's terms, anybody can make up whatever the hell they want, and the peer reviewers operate under the assumption that it is true. We're working on the honor system here, and who knows how often the final paper reflects the real data, or whether we are dealing with undisclosed errors due to sloppiness, accident, greed, or a desire to cover up the bad news, like in the following...
There is no way that even the world's greatest peer reviewer would catch this; without access to the raw data, we're trusting that all relevant information is presented in the manuscript. Reviewers might catch an obvious statistical error, but they sometimes miss the most blatant problems, such as a paper that draws an important conclusion based on no evidence whatsoever.
What do peer reviewers do?
Again, quoting from The Last Psychiatrist...
They look for study "importance" and "relevance." You know what that means? Nothing. It means four people think the article is important. Imagine your four doctor friends were exclusively responsible for deciding what you read in journals. Better example: imagine the four members of the current Administration "peer reviewed" news stories for the NY Times.
No, I'm not claiming I don't have my own bias. Duh. You can see the cards I'm holding pretty clearly if you read this site with much regularity. The point is that peer reviewers need to recognize their bias and take a better, more objective look at the research they are reviewing. Too many industry-cheerleading pieces in journals lead to uncritical acceptance of treatments that nearly always fail to live up to their initial hype. After all, once a few trials have been published (even if poorly done, overstating efficacy, or understating risks), the drugs are now based on "science," which leads to yet more marketing. Check the actual track record of benzos, SSRIs, Depakote, and atypical antipsychotics if you doubt me. Each treatment "revolution" is closely linked to peer review. So if you are pleased as punch with the current state of affairs in mental health treatments, then please make sure to send letters to your favorite medical journal editors thanking them for the present system. Don't let it change.
Or maybe the whole system needs a fundamental overhaul. More on that later.
Promo Time. I'll take yet another lead from the LP and humbly suggest that you promote this post via Digg, Reddit, or any other favorite service. I'd even more strongly suggest you hit up the LP's post and promote it. While you're sharing posts with the world, you should read my take on SSRI's, Suicide and Dunce Journalism and send it to all of your friends. And while I'm in promotion mode, give your money to Philip Dawdy if you like good journalism on mental health issues. If you want to pledge money to support the operations of this unpaid anonymous blogger (and you know you do!), thanks, but take it and give it to Philip. Now!