Judith Curry, on a new paper concerning how to escape from it:
Naïvely, we might hope that by making incremental improvements to the “realism” of a model (more accurate representations, greater details of processes, finer spatial or temporal resolution, etc.) we would also see incremental improvement in the outputs. Regarding the realism of short-term trajectories, this may well be true. It is not expected to be true in terms of probability forecasts. The nonlinear compound effects of any given small tweak to the model structure are so great that calibration becomes a very computationally-intensive task and the marginal performance benefits of additional subroutines or processes may be zero or even negative. In plainer terms, adding detail to the model can make it less accurate, less useful. [Emphasis added]
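The claim in that last sentence has a familiar small-scale analogue in ordinary statistics: past some point, extra model flexibility fits noise rather than signal. The sketch below is only that analogue, not anything from the paper or from climate modelling, and the data, noise level, and polynomial degrees are all invented for illustration. It fits polynomials of increasing degree to noisy samples of a smooth curve and scores each against the underlying truth; beyond a certain degree, the more "detailed" fits typically do worse out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)


def truth(x):
    # The underlying signal the models are trying to capture.
    return np.sin(2 * np.pi * x)


# Twenty noisy observations of the signal, plus a dense grid for scoring.
x_train = np.linspace(0.0, 1.0, 20)
y_train = truth(x_train) + rng.normal(scale=0.3, size=x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_test = truth(x_test)

# Fit progressively more "detailed" models (higher-degree polynomials)
# and measure their error against the truth on unseen points.
for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: out-of-sample RMSE = {rmse:.3f}")
```

The exact numbers depend on the noise seed, but the pattern of diminishing and then negative returns from added complexity is the point of the exercise.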
Computer models can be useful in some circumstances, but they are not science.