Eat that, GW believers!
One problem with the models is that the water vapor feedback isn't modeled on first principles, it's too complicated and nobody really has a good grasp on the cloud formation factor. So they use a parametric model which hasn't been properly validated.
As for models predicting a strong positive feedback to rising temperatures, I know enough about feedback loops to know that Earth's temperature doesn't behave like that. Add in other elements of the predictions falsified by direct observation, and umpteen details not well accounted for, and I don't trust the models.
As a software engineer, I know better than to trust any computer model that hasn't been validated over the parameters of interest. I know too many ways they can go wrong.
hanelyp wrote: One problem with the models is that the water vapor feedback isn't modeled on first principles, it's too complicated and nobody really has a good grasp on the cloud formation factor. So they use a parametric model which hasn't been properly validated.

The validation is increasingly on the mark. Did you read the link I posted?
Science is what we have learned about how not to fool ourselves about the way the world is.
Josh Cryer wrote: The validation is increasingly on the mark, did you read the link I posted?

Which of course is why they had to hide the decline and why 10 years of flat temperatures were not predicted.
So yes. A validation has occurred. Not necessarily in favor of the models.
On top of that, Lorenz showed why modeling climate is hopeless. He did that some 40 years ago. But like perpetual motion, hope springs eternal, only to be confounded by change.
Of course you can, on longer scales, (possibly) discover the strange attractors in the system. However, there is no possible way of determining when the jump from one attractor to another will occur.

The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems.
An early pioneer of the theory was Edward Lorenz whose interest in chaos came about accidentally through his work on weather prediction in 1961.[21] Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again and to save time he started the simulation in the middle of its course. He was able to do this by entering a printout of the data corresponding to conditions in the middle of his simulation which he had calculated last time.
To his surprise the weather that the machine began to predict was completely different from the weather calculated before. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 was printed as 0.506. This difference is tiny and the consensus at the time would have been that it should have had practically no effect. However Lorenz had discovered that small changes in initial conditions produced large changes in the long-term outcome.[22] Lorenz's discovery, which gave its name to Lorenz attractors, proved that meteorology could not reasonably predict weather beyond a weekly period (at most).
http://en.wikipedia.org/wiki/Chaos_theory
Climate is an indeterminacy problem. It is ruled by chance.
Engineering is the art of making what you want from what you can get at a profit.
Skipjack wrote: You are a bit late Msimon. I posted that already a page earlier ;) Still I am totally with you there. CO2 is plant food. More plant food equals more trees... More trees equals less plant food... Don't we all just love trees?

I stole it from you.
Engineering is the art of making what you want from what you can get at a profit.
MSimon wrote: Josh,

Josh Cryer wrote: I live in Colorado, the Aspens were dying from the beetle outbreak. Hopefully this helps them.

Dude. The bit I bolded is a flat-out lie. The grid cells are 100 km across. You cannot do a detailed physical model of anything climate or weather on that scale.
The lukewarmists do not doubt the CO2 effect; it is the magnitude. Is it amplified by water vapor by 1.5 to 4 (they sure have that one nailed down) or by 0.4 to 0.6?
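For what those amplification numbers mean in feedback terms: with a linear feedback factor f, the closed-loop gain is 1/(1 - f). A quick sketch (illustrative numbers only, not taken from any particular paper):

```python
# Closed-loop amplification for a linear feedback: a direct
# forcing dT0 becomes dT = dT0 / (1 - f).  Note how the gain is
# very sensitive to f as it approaches 1, and how a net-negative
# feedback (f < 0) gives a gain below 1 -- the two camps above are
# effectively arguing about the sign and size of f.

def amplification(f):
    """Closed-loop gain for feedback factor f (requires f < 1)."""
    return 1.0 / (1.0 - f)

for f in (-1.0, 0.33, 0.5, 0.75):
    print("f = %+.2f -> gain %.2f" % (f, amplification(f)))
```

A gain of 4 corresponds to f = 0.75, a gain of 1.5 to f = 0.33, and a gain of 0.5 to f = -1; small uncertainty in f near 1 produces large uncertainty in the answer.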
Me? I like trees:
http://www.sciencedaily.com/releases/20 ... 092445.htm
I wasn't aware the water vapor feedback was so "uncertain": http://geotest.tamu.edu/userfiles/216/dessler09.pdf
The thing about David Archer's lectures is that he points out that all climate models are unique, using basic physical representations of nature, basic low level math. The reason there are so many models is so that the scientists can compare results. What is remarkable is that though the models are different (since no model can be itself perfect), they still have similar output. Again using basic physical constants of nature.
Clouds are phenomena of hundreds of meters and minute time scales or less. The only way around that is to assign some computable parameter, i.e. temp is x, temp was y, humidity was q, is r, etc., and then compute the cloud cover in the grid cell based on some empirical formula.
It may be workable. It is not physical.
And judging by results it is not workable.
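A toy sketch of the kind of empirical parameterization being described, loosely modeled on a Sundqvist-style relative-humidity scheme. The critical humidity rh_crit is a tunable knob, not a physical constant, which is exactly the complaint:

```python
# Sketch of a diagnostic cloud scheme: cloud fraction in a grid
# cell is not computed from cloud physics but from an empirical
# formula in relative humidity.  rh_crit is a tunable parameter.

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnose cloud fraction (0..1) from grid-cell relative humidity."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - ((1.0 - rh) / (1.0 - rh_crit)) ** 0.5

# The same humidity field gives different cloud cover depending on
# how the free parameter is tuned:
for rh in (0.7, 0.85, 0.95):
    print(rh, cloud_fraction(rh, rh_crit=0.8), cloud_fraction(rh, rh_crit=0.6))
```

The formula is workable as an engineering device, but nothing in it is derived from the physics of the clouds inside the cell; the tuning carries the answer.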
The same is done with solar UV.
And predicting volcanic eruptions 50 years in advance? Line up for your Nobel dude. You earned it.
And the Svensmark work is not yet in the models. OK. The effect may be small. But in a chaotic system NOTHING IS NEGLIGIBLE. Which then gets you predicting GCR flux. And solar magnetic fields. Fifty years in advance. You just earned another Nobel. You are good dude. VERY GOOD.
Did I mention that a very large meteor impact might disturb your calculations?
And how about all that water raining on us from space? I never heard about that in the models. Because - you know - any simplification invalidates the models for long term prediction.
If the models are looking good it is for one or more reasons:
1. Chance
2. Backcasting (models tuned to history)
If it is primarily #2 and the parameters are tuned wrong then the chance of future truth is small. The farther into the future the smaller.
Because every cycle's output of the program is the input to the next cycle. As Lorenz showed, the problem does not anneal.
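The backcasting worry can be illustrated with a deliberately crude sketch: give a "model" enough free parameters and it will reproduce history exactly while its forecasts run off the rails. Here the model is just an interpolating polynomial through invented "historical" numbers:

```python
# Sketch of the backcasting trap: a model with enough free
# parameters can be tuned to reproduce history perfectly, yet be
# useless for forecasting.  The "model" is the polynomial through
# every historical point (Lagrange interpolation).

def lagrange_fit(xs, ys):
    """Return the interpolating polynomial through (xs, ys)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Invented "history": a gentle linear trend with small wiggles.
years = [0, 1, 2, 3, 4, 5, 6, 7]
temps = [10.0, 10.2, 10.1, 10.4, 10.3, 10.6, 10.5, 10.8]

model = lagrange_fit(years, temps)

# Perfect backcast ...
print("backcast error at year 3:", abs(model(3) - 10.4))
# ... absurd forecast, once outside the tuned range:
print("forecast for year 12:", model(12))
```

A perfect fit to the past says nothing by itself; the farther the extrapolation, the worse a history-tuned model gets.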
And Josh,
From your link:
http://geotest.tamu.edu/userfiles/216/dessler09.pdf
Given these considerations, there are good reasons to expect global climate models to accurately simulate the water vapor feedback

Expectations are not validations. Except in Climate Science. Hide the decline.
For the same reason we do not take computer simulations of plasmas as gospel, neither should we take simulations of weather as gospel.
The problems are fundamentally too complex.
The programs can give us hints. But telling us what the temperature will be in 100 years to better than 1 part in 300 (Kelvin scale) is not in the realm of possibility. Especially when you start with data that is +/- 1 C or worse. Or measure ocean temperatures with buckets by a sailor on watch.
I do think we can fix the problems of Climate Science with an:
AUDIT BY SKEPTICS
http://www.climateaudit.org/pdf/others/ ... Report.pdf
The Models have NEVER been validated.
1. Source Code
2. Source Data
3. Results of runs.
Suppose they run the model(s) 10,000 times and only once get something reasonably close to the past. Is that robust?
Then you do small changes in the data and see how that affects the results.
So I'm with you dude.
VALIDATE THE MODELS
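The perturbation test being suggested can be sketched against any black-box model. Here the model is a stand-in toy (a chaotic logistic map), jittered within an assumed input uncertainty:

```python
# Sketch of the perturbation test: rerun a model with its inputs
# jittered within their stated measurement error and measure the
# spread of the outputs.  toy_model is a hypothetical stand-in
# (a chaotic logistic map); any simulation could replace it.

import random

def toy_model(x0, steps=50):
    """Stand-in model: iterate the logistic map (r = 3.9) from x0 in (0, 1)."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

random.seed(0)
baseline_input = 0.5
outputs = []
for _ in range(200):
    jitter = random.uniform(-0.01, 0.01)   # small assumed input uncertainty
    outputs.append(toy_model(baseline_input + jitter))

spread = max(outputs) - min(outputs)
print("output spread from +/-0.01 input jitter:", spread)
```

A large spread means the answer is dominated by input error rather than physics, which is exactly what this kind of audit is meant to expose.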
kcdodd wrote: My point is that pre ~1945 the CO2 was going as t^2 (constant acceleration), and since then it's going as t^9 (t^7 acceleration).

In other words, on the relevant time scales a linear approximation works as well.
Further: we have to go deeper into the measurements and reporting to find out just how deep the rot is. How do we know the CO2 data hasn't been selected or corrupted?
Josh Cryer wrote: The validation is increasingly on the mark, did you read the link I posted?

I have tried but I remain unconvinced.
My understanding of the article is that you can reliably model water vapor feedback based only on relative increases of humidity, which is what all models converge to in the end. Then greenhouse effects and direct thermal radiation effects are considered. No reference to albedo changes or other possible effects.

Then it reasserts that models require strong positive feedback because of (14), which is a study from 1999; at that time, not even the PDO was considered by models. Other references are either old (which somewhat contradicts "recent advances") or authored by Dessler.
One way or another, we will know the whole story of AGW in 5-10 years. "Natural variability" theory suggests 20 years of cooling based on PDO cycles. If that becomes true, the whole theory of strong positive feedback will be quickly forgotten...
BTW, the most trivial "global climate" model can be demonstrated with a very simple graph:
http://www.climate-movie.com/wordpress/ ... lide53.jpg
We will see soon if that works better or worse than model projections...
The paper given in this link seems to imply that they can successfully predict climate observations (or do they just make the fudge factors fit?)

Despite these advances, observational evidence is crucial to determine whether models really capture the important aspects of the water vapor feedback. Such evidence is now available from satellite observations of the response of atmospheric humidity (and its impacts on planetary radiation) to a number of climate variations. Observations during the seasonal cycle, the El Niño cycle, the sudden cooling after the 1991 eruption of Mount Pinatubo, and the gradual warming over recent decades all show atmospheric humidity changing in ways consistent with those predicted by global climate models, implying a strong and positive water vapor feedback (9–13).

http://geotest.tamu.edu/userfiles/216/dessler09.pdf
MSimon, "hide the decline" comes from the oft-quoted 1998 Nature paper describing the decline in the Briffa proxy tree ring data. What happened was that there was an obvious divergence post-1960 which did not meet the measured results.
Now, the proxy data before that was very accurate, indeed, you can correlate this with other data sets and it is extremely accurate. And this is if you go by temperature measurements with good old mercury level devices.
So the scientists have a choice, 1) say that the proxy data going forward post-1960 is accurate and that *all temperature data and all other proxies and all other data is inaccurate*, which destroys all previous proxy data, mind you, or 2) say that the decline is due to an unknown anomaly that has yet to be discovered.
Which is the scientific thing to do? You report on the data, that's all. Conclusions are incidental. Would you really have them throw out *all other data* over this one divergence or would you rather they actually do science?
I have no idea what you mean by 10 year flat temperatures, given that the last decade has had most of the hottest years on record, going by every temperature data set that exists. And this is in spite of the attempts to discredit NOAA temperature reading stations because they are placed in "crappy locations" (hint, even the best stations are representative of the long term trend).
Models can't magically model every molecule in the atmosphere, so you have to build them to cover whole areas. When you do the black body formula as a first year college student, to derive the temperature of the Earth (even wikipedia has the equation, that any one of you could understand), it is a simplified generalization. What is important is that different scientists produced different models based on known physical constants and derived similar results. It doesn't mean the models are perfect, or absolute, or magically representative of reality, it means the scientists are on the track to more fully understanding the nature of the system. They will never, ever be able to have absolute control over the predictions, that's nature for you.
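The first-year blackbody estimate mentioned above takes only a few lines; the constants are standard textbook values:

```python
# Zeroth-order Earth temperature: balance absorbed sunlight
# against blackbody emission.  This is the simplified
# generalization being described, not any model's actual code.

SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's orbit
ALBEDO = 0.3              # fraction of sunlight reflected
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed: S * (1 - a) * pi r^2;  emitted: sigma * T^4 * 4 pi r^2.
# Setting them equal and cancelling the geometry:
T_effective = (SOLAR_CONSTANT * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

print("effective temperature: %.0f K" % T_effective)
```

This gives roughly 255 K; the gap between that and the observed ~288 K surface average is the greenhouse effect the generalization leaves out.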
Note that you have ranted at me from several different angles that I simply am not prepared to "refute" as this is what we call a "straw man." I worded what I wrote carefully because it is clear that a rational discussion about the data is very difficult with people who buy in to every single cockeyed theory that exists. I said "increasingly validated" for a reason. Nothing in science is "absolutely, correctly, validated." Indeed, you might say, "We have observed starlight bend according to relativity, Einstein was validated." It'd be a true statement for that point in time. However, it would be more correct to say "It appears that Einstein's relativity is valid given these observations."
How models become validated is not by "releasing source code" that wouldn't even run on anyone's computers because they use massively parallel computers to do the computations (leading to more bemoaning that the programs are not user friendly or understandable). It is by scientists around the world building their own unique models and comparing them to one another. You cannot validate relativity by looking at the pictures of starlight being bent before and after a solar eclipse (indeed, the scientist who made the images could have fabricated them! This is precisely why it is difficult to frick up the scientific process), you must devise your own experiment which has similar results. Indeed, the more diverse the experiment, the more valid the data becomes, which is precisely why proxy data is relevant to the CO2 increases. It's one thing to just measure temperature using thermometers, of which reliable data only goes back a century, it's another thing entirely to devise proxies that can go back further and attempt to assess global climate over centuries beyond. Seriously, the level of frick-up necessary to "inflate the likelihood" of AGW is mind-bogglingly insane.
Note, all models are accompanied by a nice little paper that explains every aspect of the methodology utilized. If you wanted to, and you had the ability, you could code up an identical model, and derive similar results. The paper is where the facts are, not in computer code that is going to be fallible and even error prone (making sure a piece of data is "correct" is a nearly impossible task). Yeah, scary, isn't it. I just admitted that the models could have errors. Whoa. Fortunately science doesn't depend on perfection, it depends upon reproducibility, and empirical observation. Nowhere do the models ignore these two facts. Source code can be acquired if you are a legitimate researcher and not someone who is going to get it, release it to the wild, and cry over the bugs that are invariably there (rather than actually fix them). I am sure it will happen soon enough due to the calls for transparency, and frankly it should have happened sooner, at least when ClimatePrediction.net came online (I remember calls for source code back then, too, we got the source code for protein folding, we should have it for climate modeling). With open code the bugs will be fixed in short order, assuming the programmers know how to read a research paper and fix whatever errors are in the model that don't fit with the paper(s) itself (themselves). It is shocking that the so-called skeptics haven't actually attempted to build their own models using known data, since it would quite readily, if they actually believed their own tripe, prove them correct and really shake up the community as a whole.
Luzr, the paper talks specifically about water vapor, it doesn't concern itself with anything else. The only truly conclusive thing you can garner is that water vapor has a positive feedback of at least 1. This is in stark contrast to the lowballed numbers MSimon was spouting.
jmc, having read the model papers (indeed, MSimon links a review of them, presumably thinking it debunks them due to some sort of bizarre misreading), none of them fudge the data. The models are raw math, and indeed, the 2001 IPCC model is still predictive to this day, quite accurate given how simplified it was. (Hell, Hansen's original model was freaking ridiculously accurate given that warming simply wasn't distinct from noise until the 90s, something Hansen predicted would happen and which I believe is one of the most amazing predictions by a scientist ever. His model was way simple, it was crazy that it even worked, imo. We're talking about 1980 computers, no empirical evidence except for IR forcing of CO2, and a range of 1-2C off. Crazy.)
Don't buy this crap that our models can't predict global climate. They can't predict if it's going to rain or not in 42 days, but they darn sure can predict whether or not temperatures are going to rise. It's a very narrow scope that they're going after.
Higher-order insolubility
The math and physics of the models is laughable, like making wind-tunnel models of jet planes out of TinkerToys.
http://arxiv.org/pdf/0707.1161v4
In case of partial differential equations more than the equations themselves the boundary conditions determine the solutions. There are so many different transfer phenomena, radiative transfer, heat transfer, momentum transfer, mass transfer, energy transfer, etc. and many types of interfaces, static or moving, between solids, fluids, gases, plasmas, etc. for which there does not exist an applicable theory, such that one even cannot write down the boundary conditions [176, 177].
In the "approximated" discretized equations artificial unphysical boundary conditions are introduced, in order to prevent running the system into unphysical states. Such a "calculation", which yields an arbitrary result, is no calculation in the sense of physics, and hence, in the sense of science. There is no reason to believe that global climatologists do not know these fundamental scientific facts. Nevertheless, in their summaries for policymakers, global climatologists claim that they can compute the influence of carbon dioxide on the climates [of planets].

Help Keep the Planet Green! Maximize your CO2 and CH4 Output!
Global Warming = More Life. Global Cooling = More Death.
Phil Plait, from BadAstronomy.com, about the Global Warming emails:
http://blogs.discovermagazine.com/badas ... -followup/
The comments in that post have been interesting. Most of them — and there are a lot — completely missed the point I was making, which isn’t terribly surprising. I called this whole thing a non-event because it’s manufactured drama. It is not the smoking gun, it doesn’t discredit climatological research showing the Earth is warming, and it doesn’t show that scientists are some sort of priesthood guarding their domain. As Real Climate points out, it’s not what’s in the files that’s interesting, it’s what’s not in them: nothing about huge conspiracies, nothing about this all being faked. If this is such damning evidence, where’s that evidence?
What these files do show is scientists trying to deal with data, software, and science, all the while also trying to figure out what to do with attacks on their work that are largely ideologically driven. I don’t think they handled that all that well, and that doesn’t surprise me. They’re scientists, not wonks. Of course, if you look at the files from the point of view of giant conspiracies it seems very racy, and clearly a lot of the commenters on my original post feel that way. But to reiterate, this does not call into question the reality of global warming in general.
To further show that, look at some of the things being said. Many people — and some who should really know better — are saying Phil Jones, the head of the group whose files were hacked, has been "fired". That’s simply not true. He has stepped aside, temporarily, while the situation is being investigated. The news reports on this were very clear. So why would someone say he was fired? I submit it’s because they are trying to spin this situation up into more than it is.
Again, as I thought I made clear in the earlier post, the methods being used by the scientists in question don’t look to me like they were faking data. In software it’s common to test out different methods, see what works, and what doesn’t. A piece of software I wrote for working with Hubble data went through hundreds of iterations and edits before going live (and was updated quite a bit after that as well). Software used to analyze data is a little like science itself: it changes as you learn more and find better ways to do things. If you found an early version of my code you might wonder if I was faking the data too! The examples of code in the hacked files may have been early versions, or had some estimations (called, not always accurately, fudge factors) used in place of real numbers… the thing is, we don’t know. Drawing conclusions of widespread scientific fraud from what we’ve seen is ridiculous.
As far as the scientists’ attitudes go, much hay has been made of that as well. But I wonder. Imagine you’ve dedicated your life to some scientific pursuit. You do it because you love it, because you want to make the world a better place, and because you can see the physics beneath the surface, weaving the tapestry of reality, guiding the ebb and flow of forces both subtle and gross. Then you find that people start attacking you with flimsy evidence, politically motivated vitriol, and even elected officials say that what you are doing is a "hoax". How do you react?
The circling of wagons and questions of what to do and how to deal with the situation don’t surprise me at all. And again, without the context of those emails we don’t know what the real story is. You can claim scientific fraud and obstructionism all you want, but you don’t know, and I don’t either. I actually agree that this should be investigated, but I hope they look at all the evidence, and don’t quote mine and cherry pick as so many people have done.
People say I’m biased, which may be a fair cop. I am biased: to reality. If we had real evidence that global warming was not occurring, then I’d pay attention. I’ve looked at the so-called "other side", and found their claims lacking. Science is all about finding supporting and falsifying evidence. When enough data piles up that shows previous thinking is wrong, then scientists change their mind. Look up "dark energy" if you have doubts about that. In this, I am in agreement with the American Meteorological Society, Nature magazine, and Scientific American.
Science is necessarily conservative. Once something is established as being an accepted model/theory/law, then it becomes the standard paradigm until it is shown to be flawed in a significant way. You may not like it, but in modern climatology, global warming is accepted as the standard. It’s not up to me or anyone to prove it right at this point, it’s up to scientists to show it’s wrong. To do that you’ll need a lot of really good evidence, and from what I have seen and read that evidence is not there. Maybe it’s fair to say not yet there, but in reality it may not be there at all.
This has become so politicized it’s hard to know what’s right and what’s wrong. I personally would be thrilled to find out the Earth isn’t warming up. I’d like my daughter to grow up on a planet that isn’t on the fast track to environmental disaster. But I have no stake in the claim scientifically either way; I don’t cling to AGW because of political bent or any ideology. I think global warming is real because of the overwhelming evidence pointing that way.
I’ll note that some people are still upset by my use of the term deniers. Again, to be clear: a skeptic is someone who uses evidence and logic to reach a conclusion. A denialist is someone who will say or do anything to deny an issue. I stand by my definition. There are actual global warming skeptics out there — and I would not only support their efforts but praise them — but what I see on the web and in the comments overwhelmingly is denial, not skepticism.
Joshua Rosenau at Thoughts from Kansas has a lengthy post on these hacked files, which is well worth reading. He is more adamant about the icky nature of the data theft than I am — I do see where it’s wrong, but also understand that motivation is an issue, as I point out in my original post (after all, what one person calls a thieving hacker another would call a whistleblower) — but we largely agree on everything else.
Also, as predicted the comments in my original post accuse me of all sorts of horrid things, which I take in stride. I maintain that the vast majority of what I have seen claimed by the global warming deniers is simply taken out of context. Programmers and scientists complaining about software and data? Quelle horreur! Wow, we never do that.
Pbbbbt.
In conclusion: I called this a non-event because it has no real impact on global warming science or our understanding of it. Of course it has a huge impact, politically. But that’s because the ideologues out there have seized on this and made as much noise as they can, so in that sense it is an issue — an issue of how political the science has become, how easy it is to disrupt the process, and the effect this has had on the scientists themselves. This issue won’t go away any time soon, but we need to focus on the signal, not the noise.
kcdodd wrote:
My point is that pre ~1945 the CO2 was going as t^2 (constant acceleration), and since then it's going as t^9 (t^7 acceleration).

MSimon wrote:
In other words, on the relevant time scales a linear approximation works as well.

You can fit a linear approximation to any curve at all. If you use few enough data points it might even be a good fit...
From about 1936 to 1949 CO2 levels were stable, which Skipjack links to coal-fired steam engines being replaced with more efficient engines powered by oil and gasoline.
Other than that, the increasing acceleration in the rest of the graph and the similarity between the graphs of CO2 level and log CO2 level are what you would expect from exponential growth.
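The two claims above — that a straight line fits any curve over a short enough stretch, and that the series grows roughly exponentially — can be illustrated with a minimal, self-contained sketch. The numbers are synthetic (the 280 baseline and the growth rate are made-up illustration values, not fitted to any real CO2 record):

```python
import math

def linear_fit(ts, ys):
    """Ordinary least-squares fit y ~ a*t + b; returns (a, b)."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    a = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    return a, my - a * mt

def max_rel_err(ts, ys):
    """Worst relative residual of the best straight line through (ts, ys)."""
    a, b = linear_fit(ts, ys)
    return max(abs((a * t + b - y) / y) for t, y in zip(ts, ys))

# Hypothetical exponential "CO2-like" series: 280 * e^(0.005 t), t in years.
ts = list(range(100))
ys = [280.0 * math.exp(0.005 * t) for t in ts]

short = max_rel_err(ts[:10], ys[:10])  # first 10 points only
full = max_rel_err(ts, ys)             # the whole century
print(f"10-point window: {short:.1e}, full series: {full:.1e}")
# The short-window error is roughly two orders of magnitude smaller:
# with few enough points, the line "fits", exactly as noted above.
```

Relatedly, refitting against `math.log` of the values would make even the full-series linear fit exact, since the log of a pure exponential is a straight line — that is the cleanest signature of exponential growth.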
MSimon wrote:
Further: we have to go deeper into the measurements and reporting to find out just how deep the rot is. How do we know the CO2 data hasn't been selected or corrupted?

Talk to IntLibber...
Ars artis est celare artem. ("The art of art is to conceal the art.")
I found this paragraph rather interesting.
Science is necessarily conservative. Once something is established as being an accepted model/theory/law, then it becomes the standard paradigm until it is shown to be flawed in a significant way. You may not like it, but in modern climatology, global warming is accepted as the standard. It’s not up to me or anyone to prove it right at this point, it’s up to scientists to show it’s wrong. To do that you’ll need a lot of really good evidence, and from what I have seen and read that evidence is not there. Maybe it’s fair to say not yet there, but in reality it may not be there at all.

Considering the Earth's climate is just about the most complex thing anyone has ever tried to model, if you make the statement "global warming is real" then it is up to you to prove that it's right.
On the other hand, if you say global warming is a real possibility and a significant risk, then it's up to someone else to prove it wrong.
It's a question of risk versus danger.
Having said that, I understand why the climatologists have become so dogmatic; even a 20% chance of catastrophic climate change would be reason enough to reduce CO2 emissions and replace fossil fuels with other energy sources. I think the climatologists have acquired enough evidence to show there's a plausible possibility of catastrophic climate change being brought on by global warming, and that the risk is large enough to justify a reduction in CO2 emissions as a precautionary measure.
The problem is that by the time climate science advances to the point where it can answer the question for sure and completely model all the positive and negative feedbacks, it could well be too late. And every time they talk about risk, politicians refuse to budge an inch without proof, so eventually the climate scientists change their language to sound more certain even when the science isn't, compromising their scientific integrity for the sake of bringing about political action that might avoid disaster.
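The precautionary argument above is, at bottom, expected-value arithmetic. A toy sketch with entirely made-up numbers (the probability and both costs are illustrative placeholders, not estimates of anything real):

```python
# If a catastrophe has probability p and cost C, paying a mitigation
# cost M is the cheaper bet whenever M < p * C, even when p is far
# below certainty. All numbers here are hypothetical.
p = 0.20            # assumed 20% chance of a catastrophic outcome
C = 100.0           # assumed cost of the catastrophe (arbitrary units)
M = 15.0            # assumed cost of mitigation (same units)

expected_loss = p * C
print(expected_loss)        # 20.0
print(expected_loss > M)    # True: mitigation beats the expected loss
```

The whole political dispute described above can then be read as a disagreement about p and C, not about the arithmetic.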