What's the big (64-bit) deal, anyway?
Posted: Wed Jan 16, 2008 12:35 am
drmike has made some interesting pictures in the "Virtual Polywell" thread:
viewtopic.php?t=203&start=0
This got me thinking about something that Bussard mentioned in his video; it's on page 7 of the PDF transcript:

"The device is almost electrically neutral. The departure from neutrality to create a 100 KV well is only one part in a million, when you have a density of 10^12 cm^3. The departure from neutrality is so small that we found current computer codes and computers available to us were incapable of analyzing it because of the numeric noise in the calculations by a factor of a thousand."

When he was first looking into this, it would have been in 1992-1994; IEEE-754 was only finalized in 1985:
http://en.wikipedia.org/wiki/IEEE_float ... t_standard
with general adoption happening fairly rapidly thereafter. Assuming he was referring to the double (64-bit) type, that means his usable dynamic range was around 4.5E12 (order of a thousand = 10^3, 52 significand bits = 2^52 =~ 4.5E15, so dividing the two out you end up with 4.5E12).
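To make that concrete, here's a minimal sketch (mine, not anything from Bussard's codes) of the cancellation problem, using the numbers from the quote: an ion density of 10^12 cm^-3 and a one-part-in-a-million electron deficit. Everything else in it is just for illustration.

[code]
/* Minimal sketch of the cancellation problem, using the densities from the
 * Bussard quote (1e12 cm^-3, one part in a million imbalance).  Everything
 * here is illustrative, not a fragment of any real Polywell code. */
#include <stdio.h>

int main(void)
{
    const double n_ion      = 1.0e12;               /* ions per cm^3         */
    const double imbalance  = 1.0e-6;               /* one part in a million */
    const double n_electron = n_ion * (1.0 - imbalance);
    const double exact      = n_ion * imbalance;    /* net density we want   */

    /* The net charge density comes out of subtracting two nearly equal
     * numbers, which is where the significand bits get eaten. */
    double net_double = n_ion - n_electron;
    float  net_float  = (float)n_ion - (float)n_electron;

    printf("exact : %.6e cm^-3\n", exact);
    printf("double: %.6e cm^-3 (rel. error %.1e)\n",
           net_double, (net_double - exact) / exact);
    printf("float : %.6e cm^-3 (rel. error %.1e)\n",
           net_float, ((double)net_float - exact) / exact);
    return 0;
}
[/code]

In double precision the subtraction still recovers the net density to roughly ten digits; in single precision you're left with only a digit or two, which is more or less the "numeric noise" complaint.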
It seems to me there are three main problems with doing a credible simulation of a polywell device:
1) Number of particles involved, assuming you were doing particle-by-particle simulation.
2) Loss of numerical precision because of the dynamic range: the one-part-in-a-million departure from neutrality comes out of subtracting two nearly equal densities, so it gets swamped by round-off noise. This was essentially the problem Dr. B. was complaining of.
3) Memory size. Even if you use volume estimates or some other proxy (please be gentle -- it's been years since I took numerical analysis, and years further back since I took physics), it seems like getting good values would require a ton of memory (see the back-of-envelope sketch after this list). Problem (2) is compounded by problem (1).
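For a sense of scale on problems (1) and (3), here's a back-of-envelope sketch. The 10^12 cm^-3 density is from the quote; the 10 cm core radius, the 56 bytes of state per particle, and the 10^9 macro-particle budget are numbers I made up purely for illustration.

[code]
/* Back-of-envelope estimate of particle count and memory.  Density is from
 * the Bussard quote; the 10 cm core radius and the macro-particle count are
 * assumptions made up for illustration only. */
#include <stdio.h>

int main(void)
{
    const double pi          = 3.141592653589793;
    const double density_cm3 = 1.0e12;     /* from the quote (cm^-3)        */
    const double radius_cm   = 10.0;       /* assumed core radius           */
    const double volume_cm3  = 4.0 / 3.0 * pi * radius_cm * radius_cm * radius_cm;
    const double real_count  = density_cm3 * volume_cm3;

    /* Per-particle state: 3 position + 3 velocity doubles plus a statistical
     * weight = 7 * 8 = 56 bytes. */
    const double bytes_each  = 56.0;
    const double macro_count = 1.0e9;      /* assumed macro-particle budget */

    printf("real particles in the core  : %.2e\n", real_count);
    printf("particle-by-particle memory : %.1e bytes (~%.0f PB)\n",
           real_count * bytes_each, real_count * bytes_each / 1e15);
    printf("1e9 macro-particle memory   : %.0f GB\n",
           macro_count * bytes_each / 1e9);
    return 0;
}
[/code]

Even with those fairly modest assumptions, tracking every real particle is hopeless (hundreds of petabytes), which is why you end up with macro-particles or field/fluid proxies -- and even the macro-particle run wants tens of gigabytes.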
Some of these problems are solvable by recent -- very recent -- developments in the hardware world.
First, from a cursory review of gcc's command-line switches on Intel hardware, extended-precision floating point (in C parlance, long double) is standard on Pentium-class machines: it's the 80-bit x87 format, which gcc stores padded to 96 bits on 32-bit targets and to 128 bits on the newer 64-bit machines. (True 128-bit quad precision is still software-emulated, so the wider storage doesn't buy extra significand bits, but 80 bits already gets you about 18-19 decimal digits versus 15-16 for a double.)
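If you want to see what a particular gcc/target combination actually gives you, float.h will tell you; here's a quick sketch (the output varies by platform, which is rather the point):

[code]
/* Quick check of what the C floating types give you on a particular gcc
 * target.  On x86 the long double is normally the 80-bit x87 format; the
 * reported size (12 or 16 bytes) is just storage padding. */
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("float      : %2u bytes, %2d significand bits, %2d decimal digits\n",
           (unsigned)sizeof(float), FLT_MANT_DIG, FLT_DIG);
    printf("double     : %2u bytes, %2d significand bits, %2d decimal digits\n",
           (unsigned)sizeof(double), DBL_MANT_DIG, DBL_DIG);
    printf("long double: %2u bytes, %2d significand bits, %2d decimal digits\n",
           (unsigned)sizeof(long double), LDBL_MANT_DIG, LDBL_DIG);
    return 0;
}
[/code]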
Second, the x86 architecture has grown to 64 bits, so address space is no longer the limit; if you can get a big enough box, you ought to be fine. In my experience, the real problem is finding a machine with enough DIMM slots to reach however much memory you need, understanding that the denser the DIMM, the more you'll pay.
Third, multi-core machines are coming down in price and becoming de rigueur for certain applications. (In my day job we never specify anything with fewer than two dual-core CPUs.)
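As a rough sketch of how those cores get used: the per-particle push in this kind of simulation is embarrassingly parallel, so even a single OpenMP pragma spreads it across whatever cores the box has. The "field" below is a dummy placeholder, not any real Polywell field model, and the particle count and time step are arbitrary.

[code]
/* Sketch of spreading a particle push over multiple cores with OpenMP.
 * The "field" here is a dummy placeholder, not a Polywell model; compile
 * with something like: gcc -O2 -fopenmp push.c */
#include <stdio.h>

#define N 1000000                      /* macro-particles, for illustration */

static double dummy_field(double x)    /* stand-in for the real field solve */
{
    return -1.0e-3 * x;
}

static double x[N], v[N];

int main(void)
{
    const double dt = 1.0e-9;          /* illustrative time step (s) */
    int i;

    for (i = 0; i < N; ++i) {
        x[i] = (double)i / N;
        v[i] = 0.0;
    }

    /* Each particle update is independent of the others, so this loop
     * scales with the number of cores. */
    #pragma omp parallel for
    for (i = 0; i < N; ++i) {
        v[i] += dummy_field(x[i]) * dt;
        x[i] += v[i] * dt;
    }

    printf("x[N-1] = %g, v[N-1] = %g\n", x[N - 1], v[N - 1]);
    return 0;
}
[/code]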
Assuming you were spec'ing out a machine to do this kind of analysis, how big would you want it to be? How many CPUs, how much RAM?