He was one of the greats of early blogging, and a brilliant man in many fields. I have to confess that I feel partially responsible (though I’m sure I was far from alone) for chasing him away from blogging with an ill-thought-out email. I think I later apologized, but if I didn’t, Steven, if you can read this, please accept my deepest apologies.
The climate debate has an epistemological problem that proponents of policies to deal with it want to pretend doesn’t exist. I wrote about the flawed precautionary principle years ago.
What is the import of Lorenz? Literally ALL of our collective data on historic “global atmospheric temperature” are known to be inaccurate to at least +/- 0.1 degrees C. No matter what initial value the dedicated people at NCAR/UCAR enter into the CESM for global atmospheric temperature, it will differ from reality (from actuality – the number that would be correct if it were possible to produce such a number) by many, many orders of magnitude more than the one one-trillionth of a degree difference used to initialize these 30 runs in the CESM Large Ensemble.

Does this really matter? In my opinion, it does not. It is easy to see that the tiniest of differences, even in just one single initial value, produce 50-year projections that are as different from one another as is possible (see endnote 1). I do not know how many initial-condition values have to be entered to initialize the CESM – but certainly it is more than one. How much more different would the projections be if each of the initial values were altered, even just slightly?
This has always been pretty obvious to me. What does it mean? That we cannot model the climate into the future with any confidence whatsoever.
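The quoted point is easy to see for yourself with the toy system Lorenz himself studied. Here’s a minimal sketch (not from the quoted piece; the Lorenz 1963 equations stand in for a real climate model): two runs start one part in a trillion apart in a single variable, and their separation is printed as they evolve.

```python
# Sensitive dependence on initial conditions, illustrated with the
# Lorenz 1963 system (a toy stand-in for a climate model).

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz 1963 system one step with 4th-order Runge-Kutta."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Two initial states differing by one part in a trillion, in x only.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0 + 1e-12, 1.0, 1.0])

for step in range(1, 5001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        print(f"step {step:5d}  separation = {np.linalg.norm(a - b):.3e}")

# The separation grows roughly exponentially until it saturates at the
# size of the attractor itself, at which point the two trajectories are
# effectively unrelated despite near-identical starting points.
```

That is the same qualitative behavior the quoted passage describes in the CESM Large Ensemble, just on a vastly larger scale: perturbations far below measurement error eventually yield projections as different from one another as the system allows.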
…through formal software verification. This seems like sort of a big deal. Particularly in the era of the Internet of Things and self-driving cars. Of course, the weakest link in security will remain the flawed unit between the seat and the keyboard.