Judith Curry, on a new paper concerning how to escape from it:
Naïvely, we might hope that by making incremental improvements to the “realism” of a model (more accurate representations, greater details of processes, finer spatial or temporal resolution, etc.) we would also see incremental improvement in the outputs. Regarding the realism of short-term trajectories, this may well be true. It is not expected to be true in terms of probability forecasts. The nonlinear compound effects of any given small tweak to the model structure are so great that calibration becomes a very computationally-intensive task and the marginal performance benefits of additional subroutines or processes may be zero or even negative. In plainer terms, adding detail to the model can make it less accurate, less useful. [Emphasis added]
Computer models can be useful in some circumstances, but they are not science.
I remember an episode of “Nova” from maybe the late 1980s in which a computer simulation produced a graph with a certain characteristic, and a second run of the simulation with the data carried to four decimal places instead of three (or perhaps it was the other way around; it’s been at least 30 years since I saw it) produced a graph that was very similar at first but wildly different by the end. It was the first time I’d ever heard of Chaos Theory, which I (mostly) reject. If there were anything to Chaos Theory, any time I flipped a coin I’d have a statistical chance of having a chicken come down instead of a quarter.
That’s… not how Chaos Theory works. At all. Your first example is pretty much the perfect description – a chaotic system depends so sensitively on its initial state that the slightest perturbation at time T=0 produces increasingly divergent results as time goes on. Magnetic pendulum systems can show this on short timescales, and are fun to watch. Go look at a YouTube video or two on the topic.
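To make the sensitive-dependence point concrete, here is a minimal Python sketch using the logistic map, a standard textbook chaotic system (a stand-in of my own choosing, not the magnetic pendulum or the Nova simulation): two trajectories that start a millionth apart track each other for a while and then end up nowhere near each other.

```python
# Toy illustration of sensitive dependence on initial conditions using the
# logistic map x -> 4x(1-x), which is chaotic. Two starting values differing
# by one part in a million agree briefly, then diverge completely.
x_a = 0.400000   # trajectory A
x_b = 0.400001   # trajectory B, perturbed by 0.000001

for step in range(1, 41):
    x_a = 4.0 * x_a * (1.0 - x_a)
    x_b = 4.0 * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}:  A = {x_a:.6f}   B = {x_b:.6f}   |A - B| = {abs(x_a - x_b):.6f}")
```

A magnetic pendulum (or the weather model in the Nova episode) shows the same behavior, just in continuous time rather than in discrete steps.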
I managed to find the original Nova show on YouTube. (Blessed art thou, Internet!) They spent an inordinate amount of time trashing Sir Isaac Newton and his view of a clockwork universe without mentioning Einstein and how he took Newton to the next level of how it all worked.
Anyway, my point about the chicken is that such a possibility exists only in a computer simulation. Program, say, 50 trillion coin flips, with each flip reaching an apex within 1% of one meter. Then run the simulation again with the coin reaching 1.00001% of one meter. By the end of the second simulation, the results will be so wildly divergent from the first that a chicken may as well come down. In reality, assuming the universe didn’t end before I was done flipping, I’d have a nearly 50/50 split, including (almost certainly, in 50 trillion flips) the few times I’d miss the coin on its way down or it would land on its edge. But no chickens.
(Oh, I forgot the link.)
https://youtu.be/fUsePzlOmxw
That only works for things that converge (for example, using the law of large numbers in your example). For most real world simulations, such as weather or computational fluid dynamics, the errors grow instead and the prediction fails outside of narrow constraints.
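A rough sketch of that contrast (my own toy Python, not anyone’s weather or climate code): the running fraction of heads in a long series of simulated coin flips settles toward 0.5, while an error that merely doubles at each step of a simulation blows up within a couple dozen steps.

```python
import random

# Converging case: the running fraction of heads in simulated coin flips
# tightens around 0.5 as the number of flips grows (law of large numbers).
random.seed(42)
heads = 0
for n in range(1, 1_000_001):
    heads += random.random() < 0.5
    if n in (100, 10_000, 1_000_000):
        print(f"{n:>9,} flips: fraction heads = {heads / n:.4f}")

# Diverging case: an initial error of one part per million that merely
# doubles at each step swamps the answer after about 20 steps.
error = 1e-6
for step in range(1, 21):
    error *= 2
print(f"after 20 doublings the error is about {error:.2f}")
```

Each extra flip pulls the average back toward 0.5; each extra step of the diverging case pushes the answer further away.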
Chaos theory just says that errors grow exponentially or faster. For example, to predict the positions of a 10-body orbital system an hour out with a certain precision, you must know the position of each body to within x. For two hours, you need 0.5x, and for three hours you need 0.25x. To predict something ten times longer, you may need something like a thousand times the precision in the initial measurement.
So, for example, going from predicting 1 week’s weather to predicting 100 years’ weather may require so much precision that it would violate Heisenberg’s uncertainty principle.
The way around this is to change the model. But then the model cannot be verified for 100 years, because the model is not valid on those time scales until proven accurate. (Like, to be trusted on 100-year time scales you need 1,000 years of data, and it can’t be past data, due to survivorship bias.)
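As a back-of-envelope check on the scaling above (assuming, purely for illustration, that the tolerable initial error halves for each extra hour of forecast):

```python
# If a 1-hour forecast needs the initial positions known to within x, and each
# extra hour halves the tolerable error, the requirement is x / 2**(hours - 1).
for hours in (1, 2, 3, 10):
    needed = 1.0 / 2 ** (hours - 1)   # in units of x
    print(f"{hours:2d}-hour forecast: need initial positions to within {needed:g} x")
# Going ten times longer demands 2**9 = 512 times the precision -- roughly the
# "thousand times" figure above -- and real error growth can be even faster.
```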
“Call me when your models predict even the current state from past data.”
That’s easy because they can add fudge factors and make up state variables, functions, and constants. Basically, you take a graph of past temperatures and do a rough curve-fit. Then you write hundreds of thousands of lines of code that waste 2^50 CPU cycles on irrelevant computations while plotting the curve-fit’s equation, even though the programmers aren’t astute enough to realize that’s what their code is doing, because it’s done iteratively based on all their state variables, functions, and constants.
Just approaching the problem that way, and then tuning the various parameters to give the expected answers, guarantees getting the expected answers. But the error bars aren’t any smaller on the latest IPCC reports than they were for what Arrhenius scribbled on the back of an envelope when he suggested that British coal mines were going to cause global warming.
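For what it’s worth, here is a minimal Python sketch of the “rough curve-fit plus parameter tuning” pattern described above; the temperature series is invented on the spot and the fit is an ordinary polynomial, so none of this is anyone’s actual climate code:

```python
import numpy as np

# Invented stand-in for a past temperature record: a gentle trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
temps = 0.01 * (years - 1900) + 0.2 * rng.standard_normal(years.size)

# A rough curve-fit to the past record.
coeffs = np.polyfit(years, temps, deg=2)
fitted = np.polyval(coeffs, years)

# Tuning against the same record guarantees matching the record...
rms = np.sqrt(np.mean((fitted - temps) ** 2))
print(f"in-sample RMS error: {rms:.3f} deg C")

# ...but by itself says nothing about skill on years the fit never saw.
print(f"extrapolated 2120 value: {np.polyval(coeffs, 2120):.2f} deg C")
```

Agreement with the record you tuned against is exactly the “expected answer” described above, not an independent test.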
But take heart. Today’s UK Telegraph warns us that increasing Arctic ice is going to block CO2 outgassing from the oceans and plunge us into another ice age, the worst in 2.5 million years.
For your amusement, Michael Mann has a new piece up at Newsweek (which sold for a dollar): Climate Change Is Burning Down California
Poor land management practices are burning down California. If it were climate change, the mitigation practices in Scandinavia would have had little to no effect.