Solar and GHG effects on the vertical temperature profile of the atmosphere
I don't accept homogenization as a valid method.
Confounding variables: microclimate. I.e., the temperature may be going down at site X while going up at Y and Z, and that may be natural.
Why "correct" X with Y and Z?
If you don't like X because of microclimate issues, drop it from the record. Explain your work.
Engineering is the art of making what you want from what you can get at a profit.
MSimon wrote:I don't accept homogenization as a valid method.
As I said, if I were to reproduce the homogenization methods, you would not believe they were "worthy."
But they fit the satellite curve.
And there is no "cherrypicking" of warm stations, as claimed by D'Aleo; the link I gave you showed that.
Science is what we have learned about how not to fool ourselves about the way the world is.
Josh Cryer wrote:As I said, if I were to reproduce the homogenization methods, you would not believe they were "worthy." But they fit the satellite curve. And there is no "cherrypicking" of warm stations, as claimed by D'Aleo; the link I gave you showed that.
But maybe the satellite record has an error (I do admit I trust it more - but you know - I'm a sceptic). How will we find that without a correct verification of the ground record? And homogenization is not the correct way.
You see, in science correct method is as important as correct data.
What you don't seem to get, Josh, is that I'm a stickler for getting the details right. I'm an engineer. Sloppy work will not cut it for me.
You are aware of the charge-of-the-electron work, where the original experimenter (Millikan) got it wrong. Further experiments deviated from his value more and more until they got the right number. Why? Well, the first guy was a science god. How could he get it wrong?
So getting the same results as the satellite proves nothing.
Engineering is the art of making what you want from what you can get at a profit.
MSimon wrote:What you don't seem to get, Josh, is that I'm a stickler for getting the details right. I'm an engineer. Sloppy work will not cut it for me.
I find that really hard to believe. Homogenization *is* being a stickler for details: looking deeply at station records to figure out exactly what happened in the history of the station, and adjusting for any biases introduced by historical changes to it. The scientists actually got lucky that the meteorologists took the time to say "well, we changed the time of observation to this period," or "we moved the station from next to a hot tar-covered parking lot to a field 100 meters away."
Indeed, when stations have big anomalies, they are thrown out, *because* the details are missing and must be understood before they can be included.
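For concreteness, here is a minimal sketch (in Python) of the kind of documented-change adjustment being described: a metadata entry records a station move, and the step it introduced is estimated and removed. The series, the move date, and the naive segment-mean offset are all invented; real homogenization, such as NOAA's pairwise comparisons, is far more careful about separating a true climate shift from an artifact of the move.

```python
# Minimal sketch (not NOAA's actual algorithm): adjust a station series
# for one documented change, e.g. a station move with a known date.
# All numbers here are made up for illustration.
import numpy as np

annual_means = np.array([14.2, 14.1, 14.3, 14.0, 15.1, 15.2, 15.0, 15.3])  # degC
move_at = 4  # index of the first year after the documented station move

# Estimate the artificial step as the jump between segment means.
step = annual_means[move_at:].mean() - annual_means[:move_at].mean()

# Remove the step from the post-move segment to splice the record back together.
adjusted = annual_means.copy()
adjusted[move_at:] -= step

print(f"estimated step: {step:.2f} degC")
print("adjusted series:", np.round(adjusted, 2))
```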
Science is what we have learned about how not to fool ourselves about the way the world is.
Josh Cryer wrote:I find that really hard to believe. Homogenization *is* being a stickler for details: looking deeply at station records to figure out exactly what happened in the history of the station, and adjusting for any biases introduced by historical changes to it.
Homogenization smears the data.
And fixing things with statistics is not as good as fixing them with measurement.
And throwing out big anomalies may be wrong. The anomalies may be correct. The only way to tell is measurement. Maybe the outlier is information, not error. It happens. And discoveries are made.
Details. Are. Important.
I see this all over climate science. Sloppy work is accepted.
Engineering is the art of making what you want from what you can get at a profit.
MSimon wrote:Homogenization smears the data.
An analysis of the data says this is absolutely not true.
MSimon wrote:And fixing things with statistics is not as good as fixing them with measurement.
Then you will have to wait for CLARREO. Obama is going to open up new drilling territories. Did you hear the SOTU? No real environmental stuff in there, just a jab at denialists (I thought it was amusing, but it amounted to little more than a footnote on renewable energy).
MSimon wrote:And throwing out big anomalies may be wrong. The anomalies may be correct.
If it cannot be explained then it should be included? And you say you are an engineer who is a stickler for details? Seriously?
MSimon wrote:The only way to tell is measurement. Maybe the outlier is information, not error. It happens. And discoveries are made.
Maybe the outlier is, but the data itself, just the raw measurement, cannot explain the outlier. You need external information, like a station move, or a heat wave, or a nuclear explosion, or a nearby liquid-nitrogen tank leaking.
MSimon wrote:Details. Are. Important.
You say details are important, but in the sentence right before that you say they should include an outlier that carries less information?
MSimon wrote:I see this all over climate science. Sloppy work is accepted.
Heh, sloppy work. You're the one advocating sloppy work.
Science is what we have learned about how not to fool ourselves about the way the world is.
Dude. If you take good sites - class 1 - and use class 2, 3, 4, and 5 sites to homogenize them, you have ruined your data.
http://wattsupwiththat.com/2010/01/27/r ... aggerated/
Political issues aside, the appearance of the Menne et al 2010 paper does not stop the surfacestations project nor the work I'm doing with the Pielke research group to produce a peer-reviewed paper of our own. It does illustrate, though, that some people have been in a rush to get results. Texas State Climatologist John Nielsen-Gammon suggested way back at 33% of the network surveyed that we had a statistically large enough sample to produce an analysis. I begged to differ then, at 43%, and yes, even at 70% when I wrote my booklet "Is the US Surface Temperature Record Reliable?", which contained no temperature analysis, only a census of stations by rating.
The problem is known as the “low hanging fruit problem”. You see this project was done on an ad hoc basis, with no specific roadmap on which stations to acquire. This was necessitated by the social networking (blogging) Dr. Pielke and I employed early in the project to get volunteers. What we ended up getting was a lumpy and poorly spatially distributed dataset because early volunteers would get the stations closest to them, often near or within cities.
The urban stations were well represented in the early dataset, but the rural ones, where we believed the best siting existed, were poorly represented. So naturally, any sort of study early on even with a “significant sample size” would be biased towards urban stations. We also had a distribution problem within CONUS, with much of the great plains and upper midwest not being well represented.
This is why I've been continuing to collect what some might consider an unusually large sample size, now at 87%. We've learned that there are so few well-sited stations that the ones meeting the CRN1/CRN2 criteria (or NOAA's 100-foot rule for COOPs) are just 10% of the whole network. See our current census:

When you have such a small percentage of well sited stations, it is obviously important to get a large sample size, which is exactly what I've done. Preliminary temperature analysis done by the Pielke group of the data at 87% surveyed looks quite a bit different now than when at 43%.
It has been said by NCDC in Menne et al "On the reliability of the U.S. surface temperature record" (in press) and in the June 2009 "Talking Points: related to 'Is the U.S. Surface Temperature Record Reliable?'" that station siting errors do not matter. However, I believe the way NCDC conducted the analysis gives a false impression because of the homogenization process used. As many readers know, the FILNET algorithm blends a lot of the data together to infill missing data. This means temperature data from both well sited and poorly sited stations gets combined to infill missing data. The theory is that it all averages out, but when you see that 90% of the USHCN network doesn't meet even the old NOAA 100-foot rule for COOPs, you realize this may not be the case.
The numbers will show what they show. Warming, cooling, no change. But the measurements must be done right. And so must everything else that affects the numbers.
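For concreteness, a minimal sketch of the neighbor-blending idea the quoted passage attributes to FILNET: infill a gap at one station from nearby stations with inverse-distance weights. This is not the actual FILNET algorithm; the stations, distances, and anomaly values are invented. It only illustrates the mechanism at issue: whatever sits at the neighbors, well sited or not, flows into the filled values.

```python
# Rough sketch of neighbor-based infilling (not the real FILNET code).
# All stations, distances, and readings are invented for illustration.
import numpy as np

# Monthly anomalies (degC) for a target station with a gap (np.nan),
# and for three neighbors at the given distances (km).
target = np.array([0.3, 0.1, np.nan, 0.4])
neighbors = np.array([
    [0.2, 0.0, 0.5, 0.3],   # neighbor A, 20 km away
    [0.4, 0.2, 0.9, 0.5],   # neighbor B, 60 km away
    [0.1, -0.1, 0.4, 0.2],  # neighbor C, 90 km away
])
dist_km = np.array([20.0, 60.0, 90.0])

# Inverse-distance weights: nearer neighbors count for more.
w = 1.0 / dist_km
w /= w.sum()

# Fill each gap with the weighted neighbor average for that month.
filled = target.copy()
gaps = np.isnan(filled)
filled[gaps] = (w @ neighbors)[gaps]
print("infilled series:", np.round(filled, 2))
```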
Engineering is the art of making what you want from what you can get at a profit.
From the above link:
In homogenization the data is weighted against the nearby neighbors within a radius. And so a station might start out as a "1" data-wise, but might end up getting polluted with the data of nearby stations and end up as a new value, say weighted at "2.5". Even single stations can affect many other stations in the GISS and NOAA data homogenization methods carried out on US surface temperature data here and here.
Say I'm measuring the length of an object. I use an interferometer, a steel tape rule, a plastic dime-store ruler, a wooden ruler of high quality, and a really cheap yardstick from the lumber yard. And I homogenize them. What exactly have I got in terms of a measurement?
The finest quality climate measuring system money can buy.
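To put rough numbers on that analogy (all the precision figures below are made up): the uncertainty of an unweighted average of n independent measurements is sqrt(sum of sigma_i^2)/n, so the blend is dominated by the worst instruments in the mix.

```python
# Back-of-envelope version of the ruler analogy: average five length
# measurements taken with instruments of wildly different precision and
# compare the result's uncertainty with the best instrument alone.
# The sigma values are invented for illustration.
import numpy as np

# 1-sigma errors, in mm: interferometer, steel tape, plastic ruler,
# good wooden ruler, cheap yardstick.
sigmas = np.array([1e-6, 0.5, 2.0, 1.0, 5.0])

# Uncertainty of the unweighted mean of independent measurements.
sigma_mean = np.sqrt(np.sum(sigmas**2)) / len(sigmas)
print(f"unweighted average: +/- {sigma_mean:.3f} mm")   # ~1.1 mm
print(f"interferometer alone: +/- {sigmas[0]:.6f} mm")  # ~0.000001 mm
# The blend is roughly a million times less precise than the best
# instrument on its own, which is the point of the analogy.
```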
Engineering is the art of making what you want from what you can get at a profit.
A thousand times, yes
Josh Cryer wrote:If it cannot be explained then it should be included?
When talking about anomalies in measured data, I would answer yes a thousand times over. You don't improve the data quality by throwing out anomalies without any other explanation. You account for them with statistics, by including the anomaly in the average and adjusting the error bars accordingly. MSimon is dead right: without some other explanation we don't know what a local anomaly means or which data are valid, and we should not throw out data based on fit alone.
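A minimal sketch of that approach, with invented readings: keep the unexplained point in the average and let the quoted uncertainty widen, rather than silently dropping it.

```python
# Keep the outlier in the average and adjust the error bars accordingly,
# instead of silently dropping it. Readings are invented for illustration.
import numpy as np

readings = np.array([14.8, 15.1, 14.9, 15.0, 18.6])  # last value is the outlier

def mean_with_sem(x):
    """Mean and standard error of the mean."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

m_all, sem_all = mean_with_sem(readings)
m_cut, sem_cut = mean_with_sem(readings[:-1])
print(f"with outlier:    {m_all:.2f} +/- {sem_all:.2f}")
print(f"outlier dropped: {m_cut:.2f} +/- {sem_cut:.2f}")
# Keeping the unexplained point shifts the mean slightly and honestly
# inflates the quoted uncertainty instead of hiding the disagreement.
```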
MSimon, that is an outright lie, and you aren't wise enough to detect that it is a lie. The rural stations homogenize the urban ones, not the other way around. Good stations undergo only TOBs (time of observation bias) homogenization by and large. And even then those adjustments are extremely small.
bcglorf, science is empirical; you cannot include something that has no explanation. Anomalies must be accounted for. USHCN does have spatial recognition methods to detect anomalies that are not accounted for, and that has allowed more stations to be included. I don't have enough knowledge of the math involved to say whether or not the spatial recognition techniques are sound, though.
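For illustration only, one simple form such a spatial check could take is comparing a station's anomaly against the median of its neighbors. USHCN's actual method is more involved; the rule, threshold, and values below are all invented.

```python
# Toy spatial-consistency check (not USHCN's actual method): flag a station
# whose anomaly departs too far from its neighbors. Values are invented.
import numpy as np

station_anom = 3.1  # this month's anomaly at the station under test (degC)
neighbor_anoms = np.array([0.4, 0.6, 0.3, 0.7, 0.5])  # nearby stations

# Flag the station if it departs from the neighbor median by more than
# k times the robust spread (median absolute deviation) of the neighbors.
k = 3.0
median = np.median(neighbor_anoms)
spread = np.median(np.abs(neighbor_anoms - median))
flagged = abs(station_anom - median) > k * max(spread, 0.1)
print("flagged as spatial anomaly:", flagged)
```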
Science is what we have learned about how not to fool ourselves about the way the world is.
Josh Cryer wrote:MSimon, that is an outright lie, and you aren't wise enough to detect that it is a lie. The rural stations homogenize the urban ones, not the other way around. Good stations undergo only TOBs (time of observation bias) homogenization by and large. And even then those adjustments are extremely small.
Airport stations on runways are rural? Well OK then.
====
Anomalies made Einstein famous. To keep them out of the record because they are unexplained reduces the chances someone will look for an explanation.
====
Watts is working on a paper to explain what is wrong with the network. He comes to the conclusion that only 2% of the stations used for the record are really class 1. Let me see: 2% of 1220 ≈ 24 stations.
And the US network is the best in the world.
====
I've been reading a lot here and there about the FILNET program. Perhaps you can tell me how it works.
Engineering is the art of making what you want from what you can get at a profit.