“Despite understanding the basic processes underlying the physics of the climate system, it is clear that the state-of-the-art climate models are not ‘good enough’, if we desire high resolution predictions with high temporal and spatial resolutions over coming decades. Thus far we seem to have only built sufficient confidence in the broad scale response of temperature and precipitation.”
so you take the models and lower the spatial resolution and temporal resolution.
High resolution predictions are needed for weather forecasting; climate forecasting only needs to tell me
what the average rainfall for the Midwest for a month will be and what the sigma variation will be.
Weather is a chaotic system. If you do low resolution modelling, you risk your predictions being completely off the mark. Then there is the other issue with such chaotic systems: errors accumulate when you try to extrapolate long term trends, drifting into completely off-the-wall values which have nothing to do with reality.
There is a reason why I don’t put a lot of trust in long term weather forecasting. Forecasts are fairly accurate 2-3 days out, but the longer the lead time, the more wrong the prediction seems to be.
And? Climate models can’t do that.
“so you take the models and lower the spatial resolution and temporal resolution.”
What do you mean by that? Regarding the models, what is meant by lowering the spatial and temporal resolutions?
What are the effects of lowering those resolutions?
by lowering spatial resolution, typically you are looking at increasing the grid volumes.
so rather than computing, say, 100 m cubes, you start computing 10 km x 10 m x 1 km volumes
that reduces the computer work by 10,000, then instead of computing on a minute or hour scale
you try computing on a day-to-month scale, which means averaging a lot of inputs,
but you are not asking the model to try and predict the weather for the week; you are asking the model
to give you a range of precipitation for, say, the American plains west of the Rockies, or
monthly average daytime and nighttime temps with some variation for Arizona, Utah, and New Mexico
over the next 5 years.
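To put rough numbers on that kind of coarsening, here is a small sketch; the domain, cell sizes, and timesteps below are placeholders rather than anyone's actual model grid, so the exact factor depends entirely on what you pick.

```python
# Back-of-the-envelope sketch of how coarsening the grid and the timestep
# cuts the computation. Domain size, cell sizes, and timesteps are
# illustrative placeholders, not taken from any particular model.

def cell_count(domain, cell):
    """Number of cells needed to tile an (x, y, z) domain, all in metres."""
    return (domain[0] / cell[0]) * (domain[1] / cell[1]) * (domain[2] / cell[2])

domain = (1_000_000.0, 1_000_000.0, 10_000.0)  # 1000 km x 1000 km x 10 km slab

fine   = cell_count(domain, (100.0, 100.0, 100.0))          # 100 m cubes (weather-scale)
coarse = cell_count(domain, (10_000.0, 10_000.0, 1_000.0))  # 10 km x 10 km x 1 km (climate-scale)

steps_fine   = 365 * 24 * 60   # one-minute timesteps over a year
steps_coarse = 365             # daily timesteps over a year

print(f"cells: {fine:,.0f} fine vs {coarse:,.0f} coarse ({fine / coarse:,.0f}x fewer)")
print(f"work ratio (cells x steps): {fine * steps_fine / (coarse * steps_coarse):,.0f}x")
```

The point is just that the savings multiply: coarsen the cells and the timestep together and you buy back several orders of magnitude of computer work.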
In weather it’s really important to try and get spatial and temporal precision, because if a farmer is
trying to figure out whether to get the cattle out of the north 40 or put down fertilizer, the rain forecast for the next 48 hours is important. However, in pricing commodities or estimating agricultural production, you can probably get by with broad scales and low temporal precision. It probably works out well enough to figure out whether
the summer is going to be hot and dry.
USDA does this sort of thing all the time. It’s how they decided to move the planting bands north one band.
Rand goes on howling about climate models, but the agriculture industry and markets have to use the best tools they have to predict what’s going to happen and try and price the uncertainty.
Stop howling about things you don’t understand, you moron.
Sure, such models are well known and widely employed. But they have to be validated against more extensive models and/or, ultimately, reality itself.
The 18 year halt in temperature rise invalidates the models which have projected CO2 as the dominant control knob affecting our climate. Clearly, the models are too simple, or outright wrong.
“so rather than computing, say, 100 m cubes, you start computing 10 km x 10 m x 1 km volumes
that reduces the computer work by 10,000, then instead of computing on a minute or hour scale
you try computing on a day-to-month scale, which means averaging a lot of inputs, ”
Just as I thought… you haven’t the faintest idea of what you are talking about.
It means you can get a thousand times as many grids completely wrong. 🙂
snicker
🙂
the agriculture industry and markets have to use the best tools they have to predict what’s going to happen and try and price the uncertainty.
Are you talking about the Farmer’s Almanac?
My thoughts as well, and the Farmer’s Almanac only predicts a little more than a year in advance. A more properly calibrated temporal resolution, if you will.
so rather than computing, say, 100 m cubes, you start computing 10 km x 10 m x 1 km volumes
that reduces the computer work by 10,000, then instead of computing on a minute or hour scale
you try computing on a day-to-month scale, which means averaging a lot of inputs,
Computing power is not the problem here. The accuracy of the models is. By making this change, you create a bunch of even less accurate models that have even less relevance to today’s climate than the current models do.
Taking it to the logical conclusion, you could just not compute anything at all and save bunches of CPU cycles.
you could compute it as one cube, make a stab at a radiation balance
and estimate warming from greenhouse gases.
It won Arrhenius the Nobel Prize doing just that.
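For what it's worth, that one-box radiation balance really is just a back-of-the-envelope calculation. Here is a minimal sketch of it, using textbook constants, the common logarithmic fit for CO2 forcing, and a rough no-feedback sensitivity; the numbers are illustrative, not the output of any climate model.

```python
import math

# Minimal zero-dimensional ("one cube") radiation-balance sketch.
# Constants are standard textbook values; the forcing fit and the
# sensitivity parameter are common approximations, not GCM output.

SIGMA  = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S0     = 1361.0     # solar constant, W/m^2
ALBEDO = 0.30       # planetary albedo

# Effective radiating temperature: absorbed solar = emitted longwave
absorbed = S0 * (1.0 - ALBEDO) / 4.0
t_eff = (absorbed / SIGMA) ** 0.25
print(f"absorbed solar: {absorbed:.1f} W/m^2, effective temperature: {t_eff:.1f} K")

def co2_forcing(c_new, c_ref=280.0):
    """Radiative forcing from a CO2 change, using the common logarithmic fit."""
    return 5.35 * math.log(c_new / c_ref)   # W/m^2

# Convert forcing to warming with a rough no-feedback (Planck) sensitivity;
# feedbacks on top of this are what the whole argument is about.
LAMBDA_NOFEEDBACK = 0.3   # K per W/m^2

for co2 in (280.0, 400.0, 560.0):
    d_f = co2_forcing(co2)
    print(f"CO2 {co2:.0f} ppm: forcing {d_f:+.2f} W/m^2, "
          f"no-feedback warming {LAMBDA_NOFEEDBACK * d_f:+.2f} K")
```

It gets you roughly a degree C of no-feedback warming per doubling; everything past that is feedbacks, which is where the argument actually lives.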
you could compute it as one cube, make a stab at a radiation balance and estimate warming from greenhouse gases.
Yes, “you” could waste “your” time doing that, if “you” were a moron. Like you.
Arrhenius won the 1903 Nobel Prize in Chemistry for his work in advancing the theory of acids and bases. His climate theories (which amount to the theory of equilibrium climate sensitivity to atmospheric CO2) have yet to be validated and appear not to have much predictive power.
Arrhenius guessed the climate sensitivity to be between 1 and 5 degrees C for the first doubling. With our current networked supercomputers and infinitely better physics models, the IPCC AR5 report narrowed that guess to 1.2 to 4.7 degrees C.
I’m sure you’ll claim that is progress, but if medical science advanced at that rate we’d be debating whether you can treat liver failure with eight leeches instead of nine.
you could compute it as one cube, make a stab at a radiation balance
and estimate warming from greenhouse gases.
It won Arrhenius the Nobel Prize doing just that.
What was the point of posting that supposed to be? Do you think there’s enough accuracy and reliability in a “one cube” model to justify changing the world’s economy over it?
As a beta nerd, it is fun to argue with alpha nerds but you have to pick your battles carefully.
My understanding is that the real problem with the models is in getting accurate (within .1 degree) pre-20th century temperatures to use in them. Without that data, the models are pretty useless for predicting anything.
Indeed. That’s first-year science. If your thermometer only has one degree increments, then your best possible accuracy (i.e. no other possible sources of error) is plus or minus 1/2 degree. Making predictions of higher accuracy than your available data is invalid.
And errors are cumulative. If you have two weather stations, each with +/- 1/2 degree error, then the smoothed average of the two stations has a cumulative error bar of 1 degree. With four stations, your error bar is up to +/- two degrees.
Using thousands of weather stations to derive a model, and then claiming a model accuracy of more than ten times the accuracy of a single weather station (by making a prediction of 0.2 degree C warming over a decade) is so wrong I can’t find a metaphor.
Even if the models were run with high enough resolution, the preponderance of evidence says they get a few important assumptions wrong.
Speaking of climate models, I’ve been thinking of recreating a plug of atmosphere in a centrifuge. The mass of the Earth’s atmosphere is about 10,000 kg per square meter (giving you a surface pressure of 101,325 Pascals). You could put that mass in a pressure vessel but you’d lose the dynamics of having it open-topped, with adiabatic heating, changes in volume, decreasing temperature with altitude, and other effects. But you could stick it in a centrifuge at high G’s and retain many of those properties, along with full optical depth. There would be pressure broadening of the spectral lines, though.
Anyway, taking a plug of atmosphere to 60 km (197,000 feet), you have the height of the gas column at 1 G. Taking this as a reference point, you can squeeze it by increasing the acceleration, keeping the product of column height (m) and acceleration (m/s^2) at about 588,000 m^2/s^2. The acceleration in a centrifuge is omega squared times radius, and the hoop stress of a thin spinning cylinder is density (kg/m^3) * omega squared * radius squared, or rho * acceleration * radius. So you need a material with a specific strength (strength over density) greater than roughly 600 kJ/kg, and carbon-fiber epoxy composite comes in at about 780 kJ/kg.
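A quick sanity check of that arithmetic, treating the carbon-fiber figure as a rough quoted value rather than vetted material data:

```python
# Sanity check of the specific-strength requirement quoted above. The hoop
# specific stress of a thin spinning ring is omega^2 * r^2 = accel * radius,
# and the requirement is that this exceed (column height at 1 G) * g0.
G0 = 9.81                  # m/s^2
COLUMN_HEIGHT = 60_000.0   # m, height of the gas column at 1 G

required = COLUMN_HEIGHT * G0   # J/kg, i.e. roughly 589 kJ/kg
cf_epoxy = 780_000.0            # J/kg, carbon-fiber epoxy figure quoted above

print(f"required specific strength : {required / 1e3:.0f} kJ/kg")
print(f"carbon-fiber epoxy (quoted): {cf_epoxy / 1e3:.0f} kJ/kg")
```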
I’ve written a short program that walks through the sections of the spinning atmosphere to give pressure and temperature curves (the local adiabatic lapse rate is G/cp, where cp is 1003.5 J/(kg·K) for dry air), and it looks workable. You could run at 5,000 G’s and a 10 meter radius, for example, and the air pressure will fall off to near nothing before reaching the axis.
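For anyone who wants to poke at it, here is a sketch along the lines of that short program (not the original): an Euler march inward from the rim, applying hydrostatic balance and a dry adiabatic lapse rate of (local acceleration)/cp, assuming dry air and an ideal gas.

```python
# Rough sketch of the spinning-column profile described above. Assumes dry
# air, ideal gas, and a dry adiabat throughout; no radiation or moisture.

R_AIR = 287.05   # J/(kg K), gas constant for dry air
CP    = 1003.5   # J/(kg K), specific heat of dry air at constant pressure
G0    = 9.81     # m/s^2

def column_profile(accel_g=5000.0, radius=10.0, dr=0.001,
                   p_rim=101_325.0, t_rim=288.0):
    """Pressure and temperature versus height above the rim of the centrifuge."""
    omega2 = accel_g * G0 / radius   # omega^2 chosen so the rim sees accel_g G's
    p, t, r = p_rim, t_rim, radius
    profile = []
    while r > dr and p > 1.0 and t > 1.0:
        a = omega2 * r               # local "gravity" falls off linearly toward the axis
        rho = p / (R_AIR * t)        # ideal-gas density
        p -= rho * a * dr            # hydrostatic balance: dp = -rho * a * dz
        t -= (a / CP) * dr           # dry adiabatic lapse rate = a / cp
        r -= dr
        profile.append((radius - r, p, t))
    return profile

for h, p, t in column_profile()[::1000]:   # print roughly every metre of height
    print(f"h = {h:5.2f} m   p = {p / 1000:8.2f} kPa   T = {t:6.1f} K")
```

With these made-up surface conditions it shows the same behaviour described above: the pressure falls to almost nothing within a handful of metres of the rim, well before the axis.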
At that point you’d have a piece of lab equipment that can emulate a plug of the Earth’s atmosphere, so you can introduce water (and mini oceans), sunlight, CO2, pollen, volcanic sulfur compounds, and accurately measure the effects from the surface to the top of atmosphere.
You could also run it with a standard surface pressure, losing a lot of your optical depth but avoiding the pressure broadening of the absorption spectra.
As they say, in science, all it should take to overturn a prevailing theory is one good experiment.