…by a Nobel Prize winner:
…leading scientists know that the “prestige” academic journals are biased in favor of flashy and politically correct research findings, even when such findings are frequently contradicted by subsequent research. This is important in the context of the global warming debate because Nature and Science have published the most alarmist and incredible junk on global warming and refuse to publish skeptics. (Full disclosure: Nature ran a negative editorial about us a few years back and a much better but still inaccurate feature story.) Claims of a “scientific consensus” rely heavily on the assumption that expertise can be measured by how often a scientist appears in one of these journals. Now we know that’s a lie.
This was one of the revelations of Climaquiddick, one that the warm mongers continue to try to paper over.
Read Schekman’s actual op-ed. It’s about the harmful (in his opinion) incentives to publish in a handful of “luxury” journals rather than cheaper open-access journals (also peer-reviewed) like the one he edits. His argument has nothing to do with political correctness, climate science, or using peer review to blackball skeptics (in fact, he criticizes the luxury journals for favoring papers that make controversial claims).
Peer-reviewed journals across all fields have been having serious quality-control issues. They are not the gold standard they are made out to be.
There are a lot of problems in science these days. And they go much deeper than the predictable politicization of research and results.
The irreproducibility of apparently statistically significant results is a critical issue that nobody seems able to grapple with effectively, as far as I can tell. Sometimes it’s accidental: you just get fluke results. But how do you know it’s an accident unless you redo your entire study with new samples, perhaps multiple times? There was a good introduction to the problem in The New Yorker in 2010; just search for “New Yorker” and “scientific method.”
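Here’s a quick simulation sketch of the fluke problem (my illustration only, assuming a simple two-group t-test with a p < 0.05 threshold and arbitrary sample sizes): run many experiments where the true effect is zero, and about 5% still come out “significant”; rerun just those winners with fresh samples and most of the effect evaporates.

```python
# Illustration: "significant" flukes under a true null effect.
# Many experiments with NO real difference; ~5% still pass p < 0.05.
# Replicating only the "winners" with fresh data exposes the flukes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_samples = 1000, 30

significant = 0
replicated = 0
for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: any "effect" is noise.
    a = rng.normal(0.0, 1.0, n_samples)
    b = rng.normal(0.0, 1.0, n_samples)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant += 1
        # Redo the study with entirely new samples.
        a2 = rng.normal(0.0, 1.0, n_samples)
        b2 = rng.normal(0.0, 1.0, n_samples)
        _, p2 = stats.ttest_ind(a2, b2)
        if p2 < 0.05:
            replicated += 1

print(f"'significant' results: {significant}/{n_experiments} (~5% expected)")
print(f"of those, replicated:  {replicated}/{significant} (~5% again)")
```

With 1,000 null experiments you’d expect roughly 50 “discoveries” and only a couple of survivors on replication, which is exactly why a single significant result means so little on its own.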
The over-reliance on a few prestigious journals, which was the crux of the matter in this case, is another problem. There is a tacit assumption that those journals are the gold standard for research quality. But those journals seemingly sift for impact (as defined by what the editors think is scientifically and politically important) and not really for quality. In addition, their short articles mean that methods are usually not fully described (though the online supplements are a helpful response). Authors are simply not required to disclose their methods fully (cough hockey stick cough), so peer review cannot function properly, yet we tacitly assume that since a paper is in Nature or Science or Cell, the work must be of impeccable quality.
I had my own dust-up with Science many years ago over their gate-keeping, impact-seeking policies, which ended with a letter to them reminding them that there are other places to publish, and that I’d use those in the future. The impact sifting is clearly not a new issue.
And peer review itself is a misnomer. Modern research projects can take years. Most peer review takes a few weeks, and that’s not 24/7 work on the review, either. Referees (that is, peer reviewers) can’t be expected to check everything, essentially redoing the entire analysis. Generally, there’s not enough information given to really duplicate the analysis anyway. And sometimes, as in the case of numerical models, you’d have to go through codes line by line. Just shoot me, please, it’s kinder.
I could go on, but you get the idea.
Jeff