Problems with p-hacking are by no means exclusive to Wansink. Many scientists receive only cursory training in statistics, and even that training is sometimes dubious. This is disconcerting, because statistics provide the backbone of pretty much any research looking at humans, as well as a lot of research that doesn’t. If a researcher is trying to tell whether changing something (like the story someone reads in a psychology experiment, or the drug someone takes in a pharmaceutical trial) causes different outcomes, they need statistics. If they want to detect a difference between groups, they need statistics. And if they want to tease apart which of several factors might explain a result, they need statistics.
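To make the p-hacking mechanism concrete, here is a minimal sketch of my own (not from the article or from Wansink's work), in Python with numpy and scipy, using made-up numbers: two groups are drawn from the same distribution, twenty unrelated outcomes are tested, and whichever comparison happens to look "significant" is kept. Even though there is no real effect, roughly two thirds of such simulated studies turn up at least one p-value below 0.05 (about 1 - 0.95^20).

    # Hypothetical simulation of p-hacking via multiple comparisons.
    # Both groups come from the SAME distribution, so every
    # "significant" result is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies = 1000      # assumed number of simulated studies
    n_outcomes = 20       # assumed number of outcomes measured per study
    alpha = 0.05

    false_positive_studies = 0
    for _ in range(n_studies):
        # 30 participants per group, 20 unrelated outcome measures each
        group_a = rng.normal(size=(30, n_outcomes))
        group_b = rng.normal(size=(30, n_outcomes))
        # Test every outcome, then keep the best-looking p-value
        p_values = stats.ttest_ind(group_a, group_b).pvalue
        if p_values.min() < alpha:
            false_positive_studies += 1

    # Prints roughly 64%, close to 1 - 0.95**20
    print(f"Studies with a 'significant' effect: "
          f"{false_positive_studies / n_studies:.0%}")

The point of the sketch is simply that testing many things and reporting only what "worked" manufactures significance out of noise, which is why the statistical training and review practices discussed below matter.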
The replication crisis in psychology has been drawing attention to this and other problems in the field. But problems with statistics extend far beyond just psychology, and the conversation about open science hasn’t reached everyone yet. Nicholas Brown, one of the researchers scrutinizing Wansink’s research output, told Ars that “people who work in fields that are kind of on the periphery of social psychology, like sports psychology, business studies, consumer psychology… have told me that most of their colleagues aren’t even aware there’s a problem yet.”
I think the hockey stick episode shows that this is a problem with climate research as well.
The point of peer review has always been for fellow scientists to judge whether a paper is of reasonable quality; reviewers aren’t expected to perform an independent analysis of the data.
“Historically, we have not asked peer reviewers to check the statistics,” Brown says. “Perhaps if they were [expected to], they’d be asking for the data set more often.” In fact, without open data—something that’s historically been hit-or-miss—it would be impossible for peer reviewers to validate any numbers.
Peer review is often taken to be a seal of approval on research, but it’s actually more like a small or large quality boost, depending on the reviewers and scientific journal in question. “In general, it still has a good influence on the quality of the literature,” van der Zee said to Ars. But “it’s a wildly human process, and it is extremely capricious,” Heathers points out.
There’s also the question of what’s actually feasible for people. Peer review is unpaid work, Kirschner emphasizes, usually done by researchers on top of their existing heavy workloads, often outside of work hours. That often makes it impossible to devote the time and effort needed to catch dodgy statistics. But Heathers and van der Zee both point to a possible generational difference: with better tools and a new wave of scientists who aren’t being asked to change long-held habits, better peer review could conceivably start to emerge. Although if change is going to happen, it’s going to be slow; as Heathers points out, “academia can be glacial.”
“Peer review” is worse than useless at this point, I think. And it’s often wielded as a cudgel against dissidents of the climate religion.
There is simply no reason for research to be published in journals anymore. Publish in a blog, and let everyone see and comment on your work. The entire world becomes the review committee.
Then you need to review the reviewers to identify those who know their stuff and aren’t just blowing smoke. You could make a social network out of it.