A Climate Of Bad Code
-
- Posts: 815
- Joined: Thu Nov 13, 2008 4:03 pm
- Location: UK
KitemanSA wrote: alexjrgreen, Your post sounds like it is trying to contradict what MSimon said, but in fact says what he said. Did I miss something?
The coking process is technically distillation rather than burning, and the water gas reaction, although it involves an oxidation of carbon, is not burning in the usual sense, producing little or no CO2.
Two thirds of the end products go on to produce CO2 when burnt but the other third do not, so this is not equivalent to burning coal.
Ars artis est celare artem.
MSimon wrote: Really? Taking tens of trillions out of the world economy based on this phony baloney won't kill anyone?
Dude, you rudely snipped my quote. I said, "If it fails, it can be fixed and re-run without anyone dying."
What about all the medicines that won't get developed? The life-saving technology that will not be developed? Not to mention higher energy costs.
But that is the beauty of these schemes. The perpetrators' fingerprints will not be on the gun.
Although the results are critical, the actual real-time execution of this code is not. If it fails, it can be fixed and re-run without anyone dying.
What I meant was that an overflow error would be caught at run time, in most cases, and the code would be re-run after fixing the error.
Basically, I was just saying that overflow errors are not generally handled in code unless necessary. Without context establishing that it is necessary, just saying they don't handle overflows doesn't mean much.
If there is context that says that they should, it doesn't make what I said invalid.
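For illustration, here is a minimal sketch in C (since the thread later discusses C-style overflow) of the kind of pre-check that code can apply when overflow is actually possible; the function name `safe_add` is invented for this example:

```c
#include <limits.h>

/* Pre-check a signed addition before performing it: returns 1 and
 * stores the sum if it fits in an int, returns 0 if it would overflow.
 * Checking *before* the operation matters in C, where signed overflow
 * is undefined behaviour rather than a detectable wrap. */
int safe_add(int a, int b, int *out) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return 0;  /* would overflow */
    *out = a + b;
    return 1;
}
```

The point of the pre-check style is exactly the one made above: it costs a line or two, so whether it is "necessary" is mostly a question of whether the inputs are already guaranteed in range.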
Anyways...
-
- Posts: 892
- Joined: Thu Mar 12, 2009 3:51 pm
- Contact:
Most likely someone has a function or method (or whatever it's called in that language) in a library somewhere, and ten seconds to type a line of code can make sure it works right the first time. This sounds like something one of my computer science teachers would say isn't worth the bother of excluding, just like a dozen other coding issues.
Now, if the code doesn't pre-exist, it might be worth trying to save a few minutes of coding by excluding it, but then you have to make sure your inputs are scrubbed. This would come up often enough, though, that I'd imagine the code pre-exists, and so it's ten seconds to call the function, and it saves you time when you're looking for trouble later.
Someone indicated it could actually be a problem if the inputs aren't watched carefully. Any idea of whether it actually DID cause trouble?
Evil is evil, no matter how small
Re: A Climate Of Bad Code
Luzr wrote: I doubt anybody would use integers to perform actual computations.
That depends on what you're calculating. If your model needs constant precision +/-X over the operating range of a variable, integers are the numeric type for the job. For a game physics engine I was working on for a while, I wanted the same position precision near the edge of the simulated space as near the center, and so went with integers.
On the other hand, if you want a larger dynamic range and numeric uncertainty proportionate to the magnitude of the number is acceptable, floating point does the job. And with standard floating point methods overflow is well handled.
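As a sketch of the fixed-point idea (the type name and scale factor are invented for illustration): representing positions as scaled integers gives the same absolute precision everywhere in the space, unlike floats, whose absolute precision degrades as magnitude grows.

```c
#include <stdint.h>

/* Fixed-point position: 1 unit = 1/1024 of a metre.  Absolute
 * precision is a constant 1/1024 m across the whole simulated space,
 * near the centre and near the edges alike. */
typedef int32_t fixed_t;
#define FIX_SCALE 1024

static fixed_t to_fixed(double metres) { return (fixed_t)(metres * FIX_SCALE); }
static double  to_metres(fixed_t f)    { return (double)f / FIX_SCALE; }
```

The trade-off is the one described above: a 32-bit fixed_t at this scale covers only about +/-2 million metres, whereas a float trades uniform precision for vastly more dynamic range.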
Re: A Climate Of Bad Code
Luzr wrote: I doubt anybody would use integers to perform actual computations.
hanelyp wrote: That depends on what you're calculating. If your model needs constant precision +/-X over the operating range of a variable, integers are the numeric type for the job. For a game physics engine I was working on for a while I wanted the same position precision near the edge of the simulated space as near the center, and so went with integers.
That must have been a very special case, and frankly, I do not see the advantage: as long as you keep the exponent constant, a float behaves more or less like an integer, precision-wise.
AFAIK, modern game engines use floats predominantly, physics included.
Depends.
Is your goal +/- 2 inches (for a length) or +/- 2%?
The problem with standard floats is that you only get a fixed number of bits. It can be a LOT. Depending on the calculation it may not be enough.
The digital math for extending digital arithmetic to any number of bits is not hard. OTOH going from an 80-bit (significand) float to a 160-bit float is tougher.
In any case using an integer where there is a chance of overflow without checking for overflow is a really bad idea.
Which is why open code is so important for this branch of science, which is supposed to inform us enough to make a worldwide bet in the hundreds of trillions of dollars.
It would be really bad to make this kind of bet based on a coding or arithmetic error.
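The "any number of bits" point can be sketched concretely: fixed-width integer arithmetic extends by chaining words with an explicit carry. This is an illustrative C fragment (a 128-bit add built from 64-bit halves), not code from any climate model:

```c
#include <stdint.h>

/* Add two 128-bit values, each held as a (high, low) pair of 64-bit
 * words.  The carry out of the low word is detected by the wrap:
 * unsigned overflow is well defined in C, so (al + bl) < al exactly
 * when the low-word addition carried. */
static void add128(uint64_t ah, uint64_t al,
                   uint64_t bh, uint64_t bl,
                   uint64_t *rh, uint64_t *rl) {
    *rl = al + bl;
    uint64_t carry = (*rl < al);
    *rh = ah + bh + carry;
}
```

The same chain-with-carry scheme extends to any width; widening a float's significand is harder because the exponent handling, rounding, and normalisation all have to be redone as well.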
Engineering is the art of making what you want from what you can get at a profit.
KitemanSA wrote: alexjrgreen, Your post sounds like it is trying to contradict what MSimon said, but in fact says what he said. Did I miss something?
alexjrgreen wrote: The coking process is technically distillation rather than burning, and the water gas reaction, although it involves an oxidation of carbon, is not burning in the usual sense, producing little or no CO2.
Ok, a distinction without a difference. The point was that the coal-derived fuel provided by the process was NOT carbon neutral, so it had no bearing on the CO2 dip during WWII. All the carbon wound up as CO2 eventually.
Two thirds of the end products go on to produce CO2 when burnt but the other third do not, so this is not equivalent to burning coal.
Two thirds of the end products go on to produce CO2 when burnt but the other third do not, so this is not equivalent to burning coal.
Color me skeptical. How much of coal's carbon ends up in the atmosphere? How much of the oils, waxes and alcohols end up burned? I'm going to hazard a guess this is at best a 10% difference in the end.
Folks,
The point is that it doesn't matter if 1/3 of the carbon in a ton of coal goes into other products; it just means that 1.5 times as much coal needed to be put into the process to generate the same amount of fuel, and the same amount of CO2. This process in no way addresses the CO2 blip of the mid-40s.
MSimon wrote: Depends.
Is your goal +/- 2 inches (for a length) or +/- 2%?
The problem with standard floats is that you only get a fixed number of bits. It can be a LOT. Depending on the calculation it may not be enough.
True. As I said in one of my previous posts, the real issue with floating point numbers is precision and rounding (not overflows).
MSimon wrote: The digital math for extending digital arithmetic to any number of bits is not hard.
Depends. In any case, it is very, very costly in terms of runtime performance.
MSimon wrote: OTOH going from an 80-bit (significand) float to a 160-bit float is tougher.
Actually, once you have arbitrary-precision integers, adding float support is not that hard: just add an exponent and normalisation.
MSimon wrote: In any case using an integer where there is a chance of overflow without checking for overflow is a really bad idea.
Actually, I would stop at "where there is a chance of overflow". I have never seen code that checks for overflow after it has happened. (OTOH, quite a lot of code intentionally uses overflow for algorithmic goals.)
MSimon wrote: It would be really bad to make this kind of bet based on a coding or arithmetic error.
Heh, right.
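One common case of deliberately exploited overflow, as a sketch (the constants are the well-known Numerical Recipes LCG parameters): unsigned arithmetic in C wraps modulo 2^32, and generators like this depend on exactly that behaviour.

```c
#include <stdint.h>

/* A linear congruential generator that relies on well-defined
 * unsigned wraparound: the multiply and add are implicitly reduced
 * modulo 2^32, which is exactly the modulus the algorithm wants. */
static uint32_t lcg_next(uint32_t state) {
    return state * 1664525u + 1013904223u;
}
```

Hashes, checksums, and ring-buffer indices use the same trick, which is one reason blanket "overflow is always a bug" checking is not practical.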
-
- Posts: 526
- Joined: Sun Aug 31, 2008 7:19 am
Let me explain. Computers represent numbers in binary. Any signed representation (i.e. one that handles plus and minus) will use some formatting trick to differentiate the two. The problem is, if a positive number gets incremented to be one bit too big... it may suddenly become a negative number. Regardless of what does happen, any calculation using the value after an overflow might as well be a random number generator. The results are totally, utterly worthless. There is not a chance in hell that the output will be meaningful.
The likelihood of this occurring in FORTRAN by a competent programmer is vanishingly small. We're talking about FORTRAN environments with huge data limits and with built-in overflow checking that will warn the programmer if this ever happens. The rest of this guy's posts are not really worth addressing, though I would have him look at the various climate models that are open source:
GISS ModelE: http://www.giss.nasa.gov/tools/modelE/
(The older Hansen GISS model is here: http://edgcm.columbia.edu/ )
NCAR CCSM: http://www.ccsm.ucar.edu/
University Hamburg models: http://www.mi.uni-hamburg.de/Projekte.209.0.html?&L=3
NEMO: http://www.nemo-ocean.eu/
GFDL: http://www.gfdl.noaa.gov/fms
MITGCM: http://mitgcm.org/
Note the first two models (not the Hansen toy) were used in IPCC, the others are increasingly toy-like. In any case, there's a reason FORTRAN is still used, especially with analytical systems. The problems associated with other languages, such as overflow with C, are not something one has to worry about. It is a very simple language.
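The quoted "one bit too big" failure can be shown concretely. An illustrative C snippet; the wrap shown is the two's-complement behaviour essentially all current hardware exhibits, though converting an out-of-range value to a signed type is formally implementation-defined in pre-C23 C:

```c
#include <stdint.h>

/* Incrementing the largest 16-bit signed value and narrowing back to
 * int16_t flips the sign bit: 32767 + 1 wraps to -32768 on
 * two's-complement machines.  The arithmetic itself happens in int
 * (no overflow there); the narrowing cast performs the wrap. */
static int16_t wrap_increment(int16_t x) {
    return (int16_t)(x + 1);
}
```

This is the "positive number suddenly becomes negative" case: the top bit, which the representation uses as the sign, gets set by the carry.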
Science is what we have learned about how not to fool ourselves about the way the world is.
The likelihood of this occurring in FORTRAN by a competent programmer is vanishingly small.
So true.
Have you read the ClimateGate e-mails by the guy trying to bring the code up to an acceptable level?
The deal is the code was not done by people who knew or upheld professional standards.
My guess is that the person(s) writing the code thought they were dealing with numbers, not understanding that they were actually dealing with bits, and all the ramifications of that.
Again. It was a rumor (public) from what I consider a fairly reliable source. We shall see what comes of it. If true I expect to see a data dump in a month or two. If true I hope it forces the release of the programs so they can be gone over with a fine tooth comb.
In addition - if true - it will underscore bad management and poor standards in the Climate Science community further eroding confidence from an already low base.
i.e. they failed to live up to the incredibly low standards they set for themselves.
My guess on why they don't want to release code and data - embarrassment and corruption in equal measure.
Engineering is the art of making what you want from what you can get at a profit.
I'm not familiar with FORTRAN. How does it handle integer overflow? i.e. adding two positive numbers gives a negative number.
I do a lot of work on small machines that don't have FPUs. The way I handle the problem is to make sure the accumulator can't overflow with any number in range (and I do either range checking or limiting to make sure, unless the numbers provided are guaranteed in range).
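A minimal sketch of the wide-accumulator approach described above (names invented for illustration): size the accumulator so that the worst-case sum of in-range inputs cannot reach its limits, and per-addition overflow checks become unnecessary.

```c
#include <stdint.h>
#include <stddef.h>

/* Sum 16-bit samples in a 32-bit accumulator.  Each sample is below
 * 2^15 in magnitude, so up to 2^16 samples cannot overflow the
 * 32-bit accumulator -- no per-addition overflow check needed. */
static int32_t sum_samples(const int16_t *s, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; ++i)
        acc += s[i];
    return acc;
}
```

On FPU-less small machines this is the standard pattern: prove the range bound once, by arithmetic, instead of paying for a runtime check on every operation.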
Engineering is the art of making what you want from what you can get at a profit.