I would think that the concept of peer review just took a big credibility hit, too, or should, now that we know how it really works.
26 thoughts on ““Peer-Reviewed” “Science””
Suggest a better system than peer review.
Open source.
Open source.
Yup. The blogosphere is already making short work of these charlatans, now that we’ve had a peek behind the curtain.
Other than how it is overseen, what substantive differences are there between peer review (as a generic concept) and what is happening right now in the blogosphere? IOW aren’t you really picking at the current implementation of peer review in a particular discipline, rather than the concept?
“Peer-Reviewed”
Isn’t that shorthand for “you scratch my back and I’ll scratch yours”?
It’s not “peer review” per se that’s the problem. It’s gate-keeping journal elites that are the problem. Luckily publishing direct to the Internet is the solution to all gate-keeping problems (that are not legislatively enacted).
Anyone who wants to see where this is going should read up on Open Data. Put the data in the open, and let 1,000 analysts bloom.
Other than how it is overseen, what substantive differences are there between peer review (as a generic concept) and what is happening right now in the blogosphere?
How it is overseen is a pretty substantive difference. It’s called “transparency,” something that has been missing from the AGW debate until the information was freed…
One of the reasons that I didn’t become a scientist (although I have the training for it) is that I think the peer review system stinks.
It’s a positive feedback mechanism that depends on the good intentions of all involved in order to work, and on everyone ignoring the surrounding reward structures. For those not in the know, positive feedback loops are usually very bad. In other words, the current peer review system is a mechanism that forces power plays until one group manages to impose its views on all the others. Succeeding at this is the optimal winning strategy for the “game”, regardless of the veracity of the views being imposed.
There is a reason why the expression “Science progresses one death at a time” exists.
I do have some ideas on how to fix this, and the solution is partially technical, and mostly one of structure. I won’t go into my actual solution, but I have some thoughts on the matter which might interest others:
1. Repeatability. If nobody can replicate your results, then it’s not science. This means each “paper” released has an associated online version control system that contains all versions of the paper itself, all supporting processed data, all supporting raw data, and all supporting code.
2. Verifiability. Any supporting information should be included with an eye towards building a case. There should be a version control system explicitly for raw data, or perhaps an entirely separate “paper” and review system for raw data. It’s much easier to verify evidence if you can see the entire timeline of its addition. Repeating an experiment can be hard or expensive, so verifiability needs to take up some of the slack.
3. A falsifiable theory. Any “paper” should explicitly include a falsifiable theory, even if it is merely a cite to an existing paper.
4. A reward structure that does not reward correctness in the theory, but does reward correctness in methodology. Science is not about being “right” in your theory. It is about carefully collecting data to falsify theories, and falsifying theories. Falsifying existing theories should be rewarded, not submitting theories.
5. Finally, no group of individuals should ever be able to prevent the publishing of a “paper”. They should only be able to publicly refute it with their own data, or a refutation that cites data already in the system, and provides a methodology for producing a replicable refutation.
6. The system should explicitly reward the presentation of new verifiable data, and the falsifying of existing theories and data. Nothing else should provide rewards that exceed those.
Points 4 and 5 above are related to the idea that a theory is only as good as the single most trenchant counterexample presented by other scientists with verifiable and repeatable data and methods. One good refutation is all you need.
I think the days of a journal being a method for the dissemination of information are done. We have an Internet now. However, we badly need a people-proof review system that concentrates and organises data, refutations of theories, theories, and their provenance.
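The plan above is concrete enough to sketch in code. Here is a minimal, hypothetical model of what one “paper” record in such a system might look like, with replicability (point 1) enforced as a publishability check; all class and field names are invented for illustration, not taken from any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    kind: str        # "paper", "raw_data", "processed_data", or "code"
    revision: str    # version-control revision id of this artifact

@dataclass
class Submission:
    title: str
    falsifiable_claim: str                           # point 3: explicit falsifiable theory
    artifacts: list = field(default_factory=list)    # point 1: versioned supporting material
    refutations: list = field(default_factory=list)  # point 5: refute publicly, never block

    def replicable(self):
        """Point 1: all four artifact kinds must be present and versioned."""
        kinds = {a.kind for a in self.artifacts}
        return {"paper", "raw_data", "processed_data", "code"} <= kinds

# A submission with every supporting artifact attached passes the check;
# one missing its code or raw data would not.
sub = Submission(
    title="Tree-ring temperature reconstruction",
    falsifiable_claim="Ring width tracks summer temperature at site X",
    artifacts=[Artifact(k, "rev1") for k in
               ("paper", "raw_data", "processed_data", "code")],
)
print(sub.replicable())  # True
```

The point of the sketch is that “no code, no data, no publication” becomes a mechanical gate rather than a reviewer’s judgment call, while refutations accumulate on the record instead of being filtered out before it exists.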
My understanding is that the people in question weren’t really working on Climate Models as I understand them, i.e. GCMs, or General Circulation Models. Those are the things that predict it will get much warmer in the next 50 years. I also believe those are massive pieces of simulation code, and would perhaps benefit from the 1,000-eyes principle.
I believe that the CRU people were working on statistical “models” for piecing together temperature data from various places and sources to produce things like the “Hockey Stick” graph. This speaks to “historical climate” and to claims of “the latter part of the 20th century being the warmest on record.” Some comments floating around are that the Fortran code for this is a kludge-o-rama that wouldn’t pass muster as production code in any other environment (but I guess they don’t work at Microsoft — cue rim shot). What seems to be amazing about these stitched-up computer codes is that they so reliably generate… Hockey Sticks.
These recent revelations then don’t speak to the “Computer Models” everyone is worrying about with respect to forecasting Bad Things in the near future, but they speak to the business of reconstructing temperature records to demonstrate that warming is already happening, that the Earth has a fever as it were.
Regardless of what Chris or others will say, I think these recent revelations effectively shoot-to-heck all of the historical-temperatures-and-Hockey-Stick work as being done by people who don’t “keep their stick on the ice” as it were.
The historical temperature reconstructions have always been suspect — the late John Daly being especially critical of urban heat island effects in thermometer data. If one is going to make claims from such temperature data, one has to be beyond reproach, in the manner of Caesar’s wife as they say, not attempting to “hide the decline” by fudging the data. It seems that an entire chocolate factory has been uncovered.
No amount of “blame the right wing and oil companies” is bringing credibility back to the cottage industry of historical temperatures and proxies for temperature. There is a lot more to the “Global Warming Industry” than these temperature records, but I can’t see how this one leg of the AGW platform can be restored to credibility.
No, open source is right. It’s very different from peer review. After a peer-reviewed article appears in a journal, people still write in letters about it, just like any other magazine where people disagree with the articles. However, your comment has just as much (or less) chance of being published as a letter to the editor of the NYT. And the original article never references it, and it’s usually lost as part of the record (if it ever gets in there to begin with).
Imagine if all of those were accepted and became a conversation? Right now, the process of getting a comment accepted and published is, um, byzantine, I’ll say. There’s a very excellent post up showing a *real life* example that is, sadly, not anything out of the ordinary.
http://www.scribd.com/doc/18773744/How-to-Publish-a-Scientific-Comment-in-1-2-3-Easy-Steps
Open source would be amazing. And the fact you have to pay crazy amounts of money to subscribe to the journals to read the results of tax-supported research…. well. Yeah.
Do hackers count as “peers?”
So much of what we depend upon to be “science” may turn out to be nothing more than the old politics-as-usual tricksters at work. This week the data that formed the support of the hockey-stick version of climate change ran into trouble. It seems the “peer reviewers” missed some important problems with the data.
I’m a normal guy, but it seems “peer review” SHOULD be an equivalent of “open source.”
Reading the timeline of JUST how this info came out, basically WAS peer review. Someone, who knew just where to post, just what they got, leaving it up to others to decide just what it was that was there.
Whoever this was, in my opinion, HAD to be a “whistleblower” rather than a “hacker.” That’s a LOT of data, all very specific, apparently set up to offer context as well as gotchas, depending on what you are looking for.
But You know, I’m an idiot, only been dealing with endless computer files since the early 90’s as an amateur (which makes me more of an amateur compared to people who have taken e-mail for granted since the early 80’s on a professional level) and spent 10 years fixing electronics. But according to the lizard, I’m borderline retarded for thinking that there might be a there there.
Oh, and here’s the difference between “peer review” and “open source.”
In peer review you get to be told “I feel you man, but this needs to be discussed.”
Open source consists of a known luddite going “ARE YOU SHITTING ME!?!!?”
Peer review is supposed to filter out masses of garbage and catch stupid errors, condensing a mass of reports into a manageable selection of better-quality material. That material lacking a proper accounting of data and methodology passed review, while critical reports were systematically declined, is a clear failure. But the fact remains that no one has time to read the masses of material produced on a given topic, and some kind of quality filter is needed.
Two ideas come to mind:
– A wider selection of peer-review publications, perhaps enabled and made more accessible by the ’Net. Unless they share a bias, a good paper rejected by one has a fair chance of publication in another.
– An online review and rating system, where ratings are somehow weighted by qualifications. Maybe ratings on the comments accompanying ratings, Slashdot-style.
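The second idea reduces to a weighted average. A minimal sketch, in which the scores, weights, and the notion of a “credential weight” are all invented for illustration — how qualifications would actually be scored is the hard, unsolved part:

```python
# Qualification-weighted rating: each reviewer's score counts in
# proportion to an (assumed) credential weight, Slashdot-moderation style.
def weighted_rating(reviews):
    """reviews: list of (score, weight) pairs; weight >= 0.
    Returns the weight-averaged score, or None if no weight at all."""
    total = sum(w for _, w in reviews)
    if total == 0:
        return None
    return sum(s * w for s, w in reviews) / total

reviews = [
    (5, 3.0),  # domain expert, high weight
    (4, 1.0),  # practitioner, medium weight
    (1, 0.2),  # anonymous drive-by, low weight
]
print(round(weighted_rating(reviews), 2))  # 4.57
```

Note how the drive-by 1-star barely moves the result: the low-weight score is diluted rather than censored, which is the whole appeal of weighting over gatekeeping.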
Rand, I agree that oversight is a huge difference. Just wondering if that’s all that needs to be fixed. My hunch is that some here are confusing peer review (the initial gatekeeper function to check that the work is up to standards, a.k.a. a moderator) with the process of deciding whether it is right or wrong, which comes after the research is published. I don’t think peer review is normally intended as the point where final rightness or wrongness is adjudicated.
As far as open source goes, I assume most are familiar with arxiv.org. And in some fields, notably astronomy, publicly funded data is made available in raw form. Go see the NASA MAST archive, or the Gemini science archive. Not all fields are as f-ed as climate appears to be.
I think Ray’s plan too much ignores human nature. Scientists want crap named after them. Period. However you set up the system, that will always be the goal.
Didn’t Richard Feynman often say that a scientist’s first obligation is to prove himself wrong?
Bruce,
Yes he did and here is the relevant quote from “Cargo Cult Science”
“But this long history of learning how to not fool ourselves — of having utter scientific integrity — is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.
The first principle is that you must not fool yourself — and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of his work were. “Well”, I said, “there aren’t any”. He said, “Yes, but then we won’t get support for more research of this kind”. I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing — and if they don’t support you under those circumstances, then that’s their decision.
One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish BOTH kinds of results.
I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish at all. That’s not giving scientific advice.”
Everyone here needs to read the Wegman Report, which was prepared for the House Energy and Commerce Committee by an ad hoc committee of statisticians chaired by Edward Wegman, on the Steve McIntyre/Ross McKitrick controversy with Mike Mann’s original Hockey Stick paper.
The first part is where the statistical community eviscerated Mann’s statistics (incorrect mathematics + good outcome = bad science), and the second part is where the committee outlined the personal and professional connections within the “peer” community that compromised the entire concept of peer review.
Go back and read that report now, in the context of the leaked emails and documents, and you will understand the importance of the leak.
When I was in graduate school (in the humanities, alas), I worked for a short while for a professor in my university’s medical school who edited two journals having to do with physiology and endocrinology. When articles would come in, I would have to consult databases of publications in the field, find relevant articles, and then send initial letters of inquiry to some of their authors asking if they were interested in serving as reviewers for the articles. On a few occasions, though, when I was compiling my lists of potential reviewers, the professor said, “Oh, don’t send it to anyone at THAT university.” At the time, I took it to mean that those universities were filled with bad or incompetent scientists, and perhaps that’s what the professor meant in those cases. But in light of Climategate, I have to wonder how much of this bias is already built into the peer review system, so that some kinds of articles in some fields (such as climatology) NEVER receive the sort of thorough and demanding review that they should, or, conversely, that articles questioning the premises of supposedly leading research in the field never manage to see the light of day, because the reviewers have an interest in silencing such questions.
I don’t think peer review is really the problem here. It’s certainly true that the AGW scoundrels manipulated the peer review process and falsely elevated it to a kind of proxy for scientific proof, but then they’re scoundrels, so what else can we expect from their ilk? (It’s regrettable that people sometimes behave in this way, but that’s life.)
The real problem here is the failure of the institutions involved. I’ve been reading the Climate Audit blog for a couple of years now (so none of this comes as any surprise to me). One of the most consistent themes there has been the refusal of climate scientists and their organizations to release data.
A typical case would go something like this: Mr. McIntyre (the proprietor of Climate Audit) would read a climate science paper and decide to try to duplicate its results. He would contact the author to request the data. The author would refuse. Then he would contact the controlling organization — it might be the university where the author works, a group like the IPCC that was using the paper, or the journal that published it — and request the data, citing the organization’s own rules about disclosure of data. And he would be stiffed. Sometimes he would attempt an FOI request. And he would be stiffed.
The reasons given were varied, sometimes bureaucratic, sometimes legalistic, sometimes in-your-face, but the bottom line is that for well over a decade now, the purveyors of AGW have been getting away with making scientific claims without exposing their primary evidence, and without giving other scientists (outside the close-knit AGW peer clique) the opportunity to examine and refute their conclusions.
I don’t wonder why these individuals would attempt something as wrong and as self-serving as this. (Call me a cynic.) I wonder how they could get away with it. And for so long. Absolutely, we should hold these scientists to account for their perfidious actions, but to me the more significant failure is that of the institutions we trust(ed) to police the scientific profession.
I wonder how they could get away with it.
Follow the money.
@Douglas: “Open source consists of a known luddite going “ARE YOU SHITTING ME!?!!?”
The luddite saying “The Emperor has no clothes” is a lot more helpful than the “We’re all friends here” attitude the e-mails are showing.
@Douglas: I take back my previous comment. I didn’t read your 9:20 in context with your 9:17. Sorry!
Remember cold fusion? There was a case where peer review did what it was supposed to do. Here, not so much.
Yeah, but even those cold fusion guys were open with their data and methods so others could try to replicate their results. Actually, that wasn’t peer review; that was people repeating experiments after the announcement.
Did they even have a peer reviewed paper? I remember they held a press conference, and the two that held the conference didn’t even tell the 3rd researcher who worked with them, because that guy was more skeptical and thought they weren’t ready/didn’t know what was going on.