Like the folks writing anti-SDI pieces in Scientific American, who could be counted on to consistently err in the direction of exaggerating SDI’s cost (IIRC one article miscalculated the mass of a piece of orbital hardware by 3 orders of magnitude).
Speaking as a professional software developer, you almost never look for a bug in a program that’s giving you the answers you expect.
Which, sad to say, is even more common in people who aren’t professionals (accountants building spreadsheets, economists building financial models, etc.)
So if your entire reason for writing the program is to “prove” something, you can expect to “prove” it.
There’s way more to this than meets the eye. For one thing, the “average temperature” of the earth is calculated as the arithmetic mean of the geographic midpoint temperatures of 8,000 equal-area subdivisions of the earth’s surface (each roughly the area of West Virginia). Not a single one of those areas actually has a temperature measuring station at its geographic midpoint. The temperatures are “interpolated” from the nearest actual stations, which number about 2,700 worldwide (all of them on land, which is 30% of the earth’s surface). So we are adding up 2,700 temperatures (at most: sometimes “outliers” are discarded), dividing them by 2,700, and subtracting from that the 30-year average of the temperature calculated by the same means. And even though most of the actual measurements are accurate to no better than +/- 0.5 C, the resulting “anomaly” is reported to the second decimal place.
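For what it’s worth, here is a deliberately scaled-down toy of the pipeline just described (random stand-in stations and cell midpoints, crude nearest-station interpolation, a made-up baseline); it is not the actual GISS/NOAA procedure, only the shape of the arithmetic:

```python
# Toy version of the described pipeline: interpolate cell-midpoint temperatures from the
# nearest station, average the cells, subtract a baseline. All numbers are random
# stand-ins and the counts are scaled down; this is NOT the real GISS/NOAA algorithm.
import math
import random

random.seed(1)
stations = [(random.uniform(-90, 90), random.uniform(-180, 180),
             random.uniform(-10, 30)) for _ in range(100)]        # (lat, lon, temp C)
cell_midpoints = [(random.uniform(-90, 90), random.uniform(-180, 180))
                  for _ in range(500)]

def interpolated_temp(lat, lon):
    # crude nearest-station "interpolation"; real products use fancier distance weighting
    return min(stations,
               key=lambda s: math.hypot(s[0] - lat,
                                        math.cos(math.radians(lat)) * (s[1] - lon)))[2]

global_mean = sum(interpolated_temp(lat, lon)
                  for lat, lon in cell_midpoints) / len(cell_midpoints)
baseline = 10.0   # stand-in for the 30-year average computed the same way
print(f"anomaly: {global_mean - baseline:+.2f} C")
```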
As an engineer who is familiar with measurement, and who cares about the quality of data he uses, I can say the following. Even without the interpolation, averaging the 2,700 temperature readings can be reported to no more accuracy than +/- 0.5 C. Averaging measurements of different things does not improve the accuracy of the measurement. Only the average of multiple measurements of the very same thing can be taken as more accurate, and then only if the differences from measurement to measurement are random.
More important, however, is the fact that temperature by itself is no indication of warmth. The enthalpy (energy content) of air at the earth’s surface is also a function of humidity. Just as an example, at 22.5 C and 40% relative humidity, the energy content is about 40 J per gram of dry air. At 22.5 C and 90% RH, the energy content is about 62 J/gm. That’s 55% more energy content at the same temperature. 100% humid air at 22.5 C contains 8.9 times the energy of 0% humid air at the same temperature.
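Here is a minimal psychrometric sketch that reproduces those first two figures, using standard textbook approximations (Tetens saturation pressure, moist-air enthalpy per gram of dry air referenced to 0 C); the function names are mine, not from any particular library:

```python
# Moist-air enthalpy in J per gram of dry air (reference state: dry air at 0 C).
# Standard psychrometric approximations; results land near the 40 and 62 quoted above.
import math

def saturation_pressure_kpa(t_c):
    # Tetens approximation for saturation vapor pressure over water, in kPa
    return 0.61078 * math.exp(17.27 * t_c / (t_c + 237.3))

def moist_air_enthalpy(t_c, rh, p_kpa=101.325):
    p_vap = rh * saturation_pressure_kpa(t_c)        # partial pressure of water vapor
    w = 0.622 * p_vap / (p_kpa - p_vap)              # humidity ratio, g water / g dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)   # sensible + latent contributions

print(round(moist_air_enthalpy(22.5, 0.40)))   # ~40 J/g
print(round(moist_air_enthalpy(22.5, 0.90)))   # ~62 J/g
```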
I would say that the effect of humidity is first order, and that until we have an energy anomaly based on simultaneous wet- and dry-bulb temperatures, we can say nothing whatsoever as to whether the earth is warming or not.
Unlike the alt-Climate crowd, I don’t think you should be imprisoned if you don’t agree…
Unlike the alt-Climate crowd, I don’t think you should be imprisoned if you don’t agree…
Well, then, you are obviously a Denier, and must be not imprisoned, but BURNED AT THE STAKE. WITCH!!!1!!1!
MfK, thank you for pointing that out, which any freshman college physics student will know.
I once looked up how the satellite temperatures were done and apparently the sensor is only good to +/- 1 deg C. So as you correctly point out, that is as good as it gets.
I would doubt that the surface temperature record is any better. I collected a small part of it for 3 years, about 45 years ago.
As an engineer who is familiar with measurement, and who cares about the quality of data he uses, I can say the following. Even without the interpolation, averaging the 2,700 temperature readings can be reported to no more accuracy than +/- 0.5 C. Averaging measurements of different things does not improve the accuracy of the measurement. Only the average of multiple measurements of the very same thing can be taken as more accurate, and then only if the differences from measurement to measurement are random.
The idea behind using multiple measurements to reduce precision error is that the measurements are correlated in a relevant way while the error is not. The measurements don’t need to be of the very same thing (in other words, perfectly correlated) for this to work effectively. And global mean temperature is precisely the sort of aggregate variable that would be associated with the correlated parts of temperature measurements at thousands of locations worldwide.
So uniformly over the course of a year (to negate the annual systematic biases due to the changing of the seasons), you’re speaking of thousands of sites and somewhere around a thousand temperature observations per site (say 2 to 4 readings per day). Second-decimal-place precision in the face of several million observations (each at the +/- 0.5 C level) isn’t unrealistic.
Basically, you would have reduced the random error by a factor of one over the square root of the number of samples taken, meaning the variation due to random error is now 0.5 C / sqrt(several million), which puts it well below the second decimal place.
The real problem at that point is systematic biases that have nothing to do with the random error.
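A toy illustration of that sqrt(N) argument, with purely synthetic numbers (nothing to do with the actual station data): each reading measures a different “true” value but carries 0.5 C of random noise, and the noise contribution to the mean shrinks like 0.5/sqrt(N):

```python
# Synthetic demonstration: the random-error contribution to a mean of N readings
# shrinks roughly as sigma / sqrt(N), even though the readings are of different things.
import math
import random

random.seed(0)
n = 1_000_000
sigma = 0.5                                                     # per-reading error, deg C
true_values = [random.uniform(-10.0, 30.0) for _ in range(n)]   # many different "things"
readings = [v + random.gauss(0.0, sigma) for v in true_values]

noise_in_mean = sum(readings) / n - sum(true_values) / n
print(f"noise error in the mean: {noise_in_mean:+.5f} C "
      f"(theory ~ {sigma / math.sqrt(n):.5f} C)")
```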
But, the problem, as MfK points out, is that temperature is an intensive variable, and an average of temperatures is thus not a rigorously defined physical variable.
E.g., suppose you have a flask of water at 20 degC, and another with an equal mass of ethanol at 30 degC. Which one is in a higher energy state? The answer: the flask of water, since its specific heat capacity is 1.74X that of ethanol.
If you mix the two together, neglecting the heat capacity of the flasks themselves, what is the final temperature? 25 degC? No, it is 23.7 degC.
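The arithmetic behind that 23.7, as a quick check (equal masses assumed, textbook specific heats; nothing more than a heat-capacity-weighted average):

```python
# Final temperature of mixing equal masses, weighted by specific heat capacity (J/g-K).
cp_water, cp_ethanol = 4.186, 2.44          # roughly a 1.7x ratio, close to the 1.74x quoted
t_water, t_ethanol = 20.0, 30.0             # deg C

t_final = (cp_water * t_water + cp_ethanol * t_ethanol) / (cp_water + cp_ethanol)
print(f"{t_final:.1f} degC")                # ~23.7, not the naive 25.0
```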
The surface of the Earth is composed of wet and dry regions, with very different heat capacities. The heat inventory of the Earth will generally track the average temperature roughly in proportion, but when you are working with anomalies on the order of tenths of a degree, it matters a great deal to do a precise calculation.
But, the problem, as MfK points out, is that temperature is an intensive variable, and an average of temperatures is thus not a rigorously defined physical variable.
I disagree. First, we can make point measurements of temperature (this follows from the definition of intensive variables) and then average those over the surface area of the Earth (fairly standard multi-variable calculus). So if the sampling of temperature were mathematically rigorous, you’d be set even though temperature is intensive.
The only possible lack of rigor is in the measurement of said temperatures, which can be biased by a variety of things, and in the attempts to adjust the resulting temperature data: filtering out the effects of albedo changes from human land use, the non-uniform spatial distribution of temperature sampling, or changes in temperature measurements and locations.
However, at the small temperature variations being studied, that lack of rigor does allow for a lot of error, wishful thinking, and deception.
I think you are missing the point. What does average temperature physically represent?
I am not a temperature denier — it is indeed getting warmer. I am, however, a (luke-partial pressure instead of lukewarm?) CO2 denier.
We know, to the extent that anything can be certain, that the CO2 level in the atmosphere is increasing. We also know this represents net emission of CO2 into the atmosphere (an excess of sources over sinks) because there is nothing in the atmosphere that dissociates or otherwise destroys CO2.
Assuming no natural increase in CO2 emissions, we know that only half of the human CO2 emissions end up in the atmosphere; the other half ends up in sinks. Of the half going into sinks, half of that is being absorbed into the inorganic ocean-water carbonates and the remaining half is being absorbed by photosynthetic plant life. We know of this split because of highly accurate chemical-analytical techniques for measuring the atmospheric oxygen concentration changes resulting from burning fossil fuels. The amount of CO2 going into the oceans is in response to the non-linear chemical-equilibrium Revelle Buffer mechanism, along with some assumptions regarding mixing of surface with deep ocean waters. The assumptions explaining the net ocean absorption in response to the increase in atmospheric CO2 are consistent with carbon isotope measurements.
We also know that plant life absorbs much more CO2 than is emitted by humans, but much of what plants absorb gets emitted back into the atmosphere when the leaves and dead tree trunks fall and rot. There is evidence of a strong sensitivity of the rate of rotting, and hence of CO2 re-emission, to temperature — see the Wood-for-Trees site http://woodfortrees.org/. This evidence is in the form of year-to-year variation in the net emission (the same thing as the increase in atmospheric CO2) that is of roughly the same scale as the human emission, which is judged to be more constant, since human economic and industrial activity doesn’t fluctuate nearly that much, recessions notwithstanding.
Were there no temperature-driven (i.e. thermally stimulated) natural CO2 emissions, the Standard Model would be quite accurate: half of human CO2 goes into the atmosphere and half goes into sinks; of the sink half, half goes into the ocean and the other half is the net increase of carbon in soils put there by way of plant life; and nearly all of the increase in atmospheric CO2 is the fault of us humans.
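Spelled out as bookkeeping, with a made-up round number standing in for the human emission (the fractions are the point, not the figure):

```python
# The "Standard Model" partition described above, with a hypothetical round emission figure.
human_emissions = 10.0                      # GtC/yr -- illustrative round number only
airborne   = 0.5  * human_emissions         # half stays in the atmosphere
ocean_sink = 0.25 * human_emissions         # half of the remainder into ocean carbonates
soil_sink  = 0.25 * human_emissions         # the rest is the net gain of carbon in soils
print(airborne, ocean_sink, soil_sink)      # 5.0 2.5 2.5
```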
With the Wood-for-Trees data, something must account for the fluctuations, strongly suggesting a natural source of thermally stimulated CO2. That means that the warmer it gets, the more CO2 is driven out of rotting dead plants. Are we all going to die from a Runaway Greenhouse? It has been getting warmer, so if there is dangerous positive feedback of such thermally driven CO2 emission, it has to be balanced by an offsetting negative feedback in the form of more vigorous plant growth in response to the increase in atmospheric CO2 — this has to be the case to explain why the CO2 levels haven’t already run away.
Pieter Tans, who is the National Oceanic and Atmospheric Administration (NOAA) Carbon Cycle guru/maven, knows about the Wood for Trees data, and he “explains it away” by arguing that the effect is from rotting leaves in the tropics, which don’t accumulate because they all rot away in a couple of years’ time; hence the natural thermally driven CO2 emission “only operates over short time scales” and the Standard Model, in which humans are at fault, is safely maintained.
This evening I have devised a counterargument to this. Most of the leaves that fall from the trees rot and return the CO2 captured by plants back into the air. But from the Carbon Balance, we know that there is a small net sink of carbon into the soil that is a full quarter of the human emissions in the Standard Model.
In the absence of any physical model to be proposed by Dr. Tans, let’s suppose that any leaf that falls rots away at a rate characterized by a decaying exponential with time constant tau. Until a leaf completely rots away, let’s assume that a much smaller part of its carbon gets sequestered into the soil at a much smaller rate, so that the fraction of the carbon sequestered that way is proportional to how long the leaf sits there before it goes poof! back into the air. The model is probably more complicated in terms of leaf piles and the dank, yucky stuff at the bottom of leaf piles seeping into the ground, but you get the idea.
So one leaf decays according to leaf_unit*exp(-t/tau). Its total exposure to the ground is the integral from 0 to infinity of leaf_unit*exp(-t/tau) dt, which evaluates to leaf_unit*tau. In other words, the carbon sequestered is some small fraction of leaf_unit*tau, and so it is proportional to the time constant tau. Suppose tau (for small changes) varies with temperature. This means that the average carbon sequestered tracks the average of tau over the years, and it doesn’t matter that tau is only 2-3 years, as Tans claims.
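A quick numerical check of that integral: the “ground exposure” of a decaying leaf unit really does come out to tau, so sequestration proportional to exposure is proportional to tau:

```python
# Numerically integrate exp(-t/tau) from 0 to (effectively) infinity; the result is ~tau.
import math

def ground_exposure(tau, dt=0.001, t_max=100.0):
    total, t = 0.0, 0.0
    while t < t_max:
        total += math.exp(-t / tau) * dt
        t += dt
    return total

for tau in (2.0, 3.0):                           # "a couple of years", per Tans
    print(tau, round(ground_exposure(tau), 3))   # exposure scales with tau
```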
Hence, unless someone comes up with a better leaf decay model, the thermally stimulated emission occurs over all time scales, not just the “couple years” claimed by Pieter Tans. It is getting warmer, and this warmth is shortening tau and reducing the rate at which the soils sequester carbon, which means, according to my calculations, that the thermally driven contribution to the increase in atmospheric CO2 everyone worries about is equal to the human contribution to that increase.
If the thermally stimulated CO2 emission in response to warming is that strong, and if our measures of atmospheric CO2 are to be trusted, it means that plant uptake in response to increased CO2 has to be stronger than thought, providing a vigorous negative feedback.
Bart disagrees with me on some points, saying I am not taking into account upwelling from the deep ocean. But whether you believe me or Bart, the Standard Model that blames humans for all of the increase in atmospheric CO2 is off by a factor of 2 or maybe more. So yes, it has been warming, and that warming has contributed to at least half of the increase in CO2 observed already; and if warming is driving that much CO2, plant uptake has to be vigorous to account for the levels of CO2 already seen.
As to the remarks that I am not to be taken seriously, I am serious about my observations regarding people of different races and incomes living together, and I am serious about my study of the Carbon Cycle.
As to my engaging in hyperbole, or my defending President Trump for using bad words: I defend everyone using bad words, including someone who uttered (in electronic print) bad words as I watched in dismay, and who has been in a pile of trouble ever since, even if the Wheels of Justice clank around to come up with a favorable resolution, which I fervently hope they do, Rand.
The thing is, the rate of change of atmospheric CO2 matches the temperature over the long term:
http://woodfortrees.org/plot/esrl-co2/derivative/mean:24/plot/hadcrut4sh/offset:0.45/scale:0.22/from:1958
It matches both the long-term trend and the short-term variation, explaining virtually the entire series in the era of reliable CO2 measurements, which started in about 1958.
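For anyone who wants to reproduce the comparison in that link, here is the shape of the transform. The two short series below are made-up placeholders; the real inputs would be the monthly ESRL CO2 and HadCRUT4-SH series, and the link’s exact smoothing is a derivative plus a 24-sample mean rather than the simple 12-month difference used here:

```python
# Rate of change of CO2 (ppm/yr via 12-month difference, standing in for the link's
# derivative + 24-sample mean) compared against an offset-and-scaled temperature anomaly,
# matching the link's offset:0.45 / scale:0.22. Placeholder data only.
co2  = [315.0 + 0.1 * i for i in range(36)]          # ppm, made-up monthly values
temp = [0.10, 0.00, 0.20, -0.10, 0.30, 0.15] * 6     # deg C anomaly, made-up monthly values

dco2_per_year = [co2[i] - co2[i - 12] for i in range(12, len(co2))]   # ppm per year
scaled_temp   = [(t + 0.45) * 0.22 for t in temp]                     # offset, then scale

print(dco2_per_year[:3])
print(scaled_temp[:3])
```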
Human inputs need not apply. They are obviously being dealt with in a ratio of much greater than 1:1.
I explained my conjecture for what is driving it here:
http://edberry.com/blog/ed-berry/why-our-co2-emissions-do-not-increase-atmosphere-co2/#comment-10993
The key part is, it is a flow problem. In rather a bit of irony, the mechanism to which I refer is analogous to the GHE itself.
The GHE says that, if the outward radiation of energy from the surface is impeded, the energy must pool at the surface until the obstacle can be surmounted by other means, those other means being an increase in temperature at the surface (though, in a convective environment, there are actually other means to surmount the obstacle).
Just so, an impedance to downwelling transport of CO2 due to rising surface temperature must result in pooling of CO2 at the surface interface until the obstacle can be surmounted by other means, those means being the very long term equilibration of temperatures all along the downwelling and upwelling ocean current paths.
Maybe I’ve got the correct mechanism, and maybe I don’t. But, what is irrefutable is that the data show a temperature to rate of change relationship, and that relationship contradicts significant dependence on human combustion of fossil fuels.
It is very likely that we are on the cusp of the downcycle of the ~65 year cyclic phenomenon observable in temperature records. You can readily see this cycle in the detrended data here:
http://woodfortrees.org/plot/hadcrut4gl/from:1900/detrend:0.75
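Roughly what the detrend:0.75 operation in that link does, shown on a synthetic series (a made-up linear rise plus a made-up ~65-year sine; nothing here is real data):

```python
# Subtract a straight line with a total rise of 0.75 C over the span, which is roughly
# what "detrend:0.75" does, leaving the multidecadal wiggle visible. Synthetic data only.
import math

n = 120                                                        # "years" of annual values
trend  = [0.75 * i / (n - 1) for i in range(n)]                # linear part, 0.75 C total
cycle  = [0.15 * math.sin(2 * math.pi * i / 65.0) for i in range(n)]
series = [t + c for t, c in zip(trend, cycle)]

detrended = [s - t for s, t in zip(series, trend)]             # remove the linear part
print(f"residual swing: {max(detrended) - min(detrended):.2f} C")   # the cyclic part remains
```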
There was an anomalous blip in the pattern over the past few years due to the monster El Nino, but I do expect reversion to the pattern in the near future, and it is said there is a monster La Nina brewing.
If that reversion occurs, and temperatures drop, then the rate of change of atmospheric CO2 will also drop. CO2 emissions are not likely to drop much, if at all, so it should be a good test of the relationship, one that may force it to be taken seriously by those who currently have closed their minds to any possibility that human activities are not responsible.
Glenn writes, “Speaking as a professional software developer…” Oh, that’s rich. When I was in school, the computer science types were the guys who wanted to major in engineering but couldn’t handle the math. Please…
Glenn writes, “Which, sad to say, is even more common in people who aren’t professionals (accountants building spreadsheets, economists building financial models, etc.)….”
So you think you’re more ‘professional’ than these other people how? Geez…
Do you have a substantive critique of Glenn’s comment?
Naturally not.
Or it would have been provided, one assumes.
(I especially like how he talks about not having enough math, but Glenn’s entire critique was about software as such and debugging, which does not require even algebra to evaluate.
“You don’t look for bugs in code that gives you the answers you want” is a truth about human nature [unless one tries very hard to fight it] that requires not one whit of mathematical knowledge to notice and point out.)
Correct on all counts.
Couldn’t handle the Physics, perhaps? Computer Science courses have a lot of Math classes. In fact, even more than someone with an actual Physics degree usually has. Then again, I was trained as a Computer Engineer instead, so I had both those and the Physics classes. I was made to learn Kinematics, Electromagnetism, Thermodynamics, Relativity, and Early Quantum Mechanics in the Physics classes. In the Maths classes I had lovely lessons in Abstract Algebra (oh yum) and Logic (inc. Proof Theory and Higher-Order Logic), besides all the prerequisite Maths classes for the things we need to use in Physics (i.e. Calculus and Statistics). As an Undergrad.
It felt mostly like a waste of time back then and it actually was. Really.
My comment at the top level is, as usual, “when all your errors are in one direction, they’re not even accurate errors”.
Real, natural errors averaged over time should be in both directions, sometimes too cold, sometimes too hot; errors only in one direction indicate the underlying “correction” or modeling is tainting the outcome noticeably.
There are contexts in which it is best to force your errors in one direction or another, to prevent certain failure states.
But basic science to establish a factual view of the world is very much not the place to force errors to one side like that.
In fact, if we assume more AGW than I expect (I expect not-zero, but relatively low magnitude, currently – “not a crisis”), forcing the errors is a negative in terms of outcome, because it’s making the actual warming look sketchy.
Honesty and rigorous science produce trust in the results. Putting a finger on the scale makes people distrust you even when you’re right, overall.
The modelers usually come back with a claim that the “adjustments” have actually lowered the long term trend. CliSci worships trend lines. They think there is something magical about drawing a line through data that minimizes the mean square distance between the points.
But, whether the trend changed or not is immaterial. What they have done is eliminate features that do not correlate with the CO2 data, and hence have altered the data to agree more closely with their premise that CO2 is driving temperatures.
https://pbs.twimg.com/media/CvcaBlAWgAESL4n.jpg
The questions relating to AGW are:
1) Are temperatures rising?
2) Are we driving the change?
3) Are rising temperatures good or bad?
4) If we are, and it is bad, then what can we do about it?
We argue so much about items 1 & 2 that we really don’t get much into 3 & 4. All kinds of scare tactics are used to answer #3 as “bad”, but they don’t hold up well to scrutiny. But, it is not the central battleground, and the claim of everything bad gets relatively little pushback.
Question #4 gets hardly any pushback at all. It is taken as a given that solar and wind power are good for the environment. In fact, they are horrendous. Solar power requires manufacture using caustic chemicals and produces monumental waste. The takeover of vast swathes of land required to generate significant power destroys habitat, and creates a heat island effect on steroids.
Wind power has most of the drawbacks of solar power, plus it slices and dices ecologically crucial raptors and carrion fowl, as well as insect-controlling bats.
The “cure” is worse than the purported disease. The whole thing is a scientific fiasco and an unfolding ecological disaster, driven by a combined front of neurotic and cynical players.
Good post, Bart. I’m quoting Rand poorly, but he’s said something like if climate change was really about climate change, we’d be seriously pursuing nuclear power. But it isn’t, so we’re not.
Bart:
“The “cure” is worse than the purported disease. The whole thing is a scientific fiasco and an unfolding ecological disaster, driven by a combined front of neurotic and cynical players.”
Don’t get me started on lead free solder. I was hearing “nuke Brussels” from my wife when we were forced to use it for one product.