A good survey from The Economist on why we can’t blindly accept the “authority” of “science” or scientists:
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
It’s a mess.
“Publish or perish” – when you’re under pressure to produce something, anything, the end result will often be poor.
There’s always been pressure. What has changed is the quality of character.
It’s much worse now. Publication count is actually taken seriously as a productivity metric, both for funding purposes and for career advancement. In the past it was seen only as a value-add, and the quality of the work itself mattered more.
It is fine to have quantitative metrics in evaluations, but not when the thing you are measuring is pointless to measure.
As recently as five years ago, a science blogger came down hard on a comment I made on an article in his blog. I had been laying out some of the paths by which more and more parts of science were being bought by the State, to advance the interests of State hierarchies. He stated that top-notch people in a field are interested only in doing the science. He admitted there were “plenty of second-raters”, but he said “almost all those go to industry”.
He would not admit that the academic progressive monoculture affected the way science was done. Still less did he admit that all those helpful administrative personnel assigned to get science grant proposals written and accepted were helping select which work would get done, by hinting strongly at what would be easiest to get accepted and thus bring in money for “administrative overhead”. Academic science has participated, to a lesser degree, in the general bloat of academia over the last 65 years. Being protective of it is therefore natural for academic scientists.
That is why we must take their statements on this with a large tumbler of salt. I wonder if he’ll blog about the Economist article.
He admitted there were “plenty of second-raters”, but he said “almost all those go to industry”.
That’s a typical academic attitude. People who “go to industry” have fallen from heaven and become “second-rate” by definition.
You often find the opposite reaction, however, from people in some parts of industry: everyone who comes from academia must be a slacker who couldn’t get a proper job. Both perspectives are wrong.
Especially in computer science, where you have people like Edwin Catmull, John Warnock, and Stephen Wolfram, who have both impeccable academic credentials and remarkable leadership and business acumen.
‘He admitted there were “plenty of second-raters”.’ There sure are. I was one of them in graduate school at Berkeley – physics and applied math. By the time I got my PhD, I was sure I didn’t want to continue in my chosen fields. I was very, very good at math – but it had become obvious that I was killing myself spending years doing things that some of the people around me could do in ten minutes if they wanted to. A few really awesome mathematicians were doing important stuff; the rest of us were basically scavengers, making a livelihood filling in gaps that they had not bothered with.
I remember someone at my alma mater mentioning to me that he had known some of the great physicists (Feynman, Gell-Mann) and that they mostly didn’t read journals. They looked them over, and anything that looked interesting they would sit down and work out themselves.
I got a job in a field where my skills were actually needed.