What's the big (64-bit) deal, anyway?

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

dnavas
Posts: 84
Joined: Sun Nov 11, 2007 3:59 am

Post by dnavas »

MSimon wrote:It might be useful to do multiplies on the accelerator and adds/subtracts on the main processor.
Unless they are MADDs, as those tend to run in a single cycle on GPUs....
64-bit isn't that far away for GPUs; the issue won't be computational intensity, it'll be memory. 1 GB of video RAM is terribly rare. GDDR5 should have tons of bandwidth, but you won't want to do scattered/random memory access with it, and it isn't clear to me when it will actually make itself known in the GPU world.

[My understanding of NVIDIA's schedule as it stands now is that 64-bit arrives in the first half of 2008, while the second half sees the release of some bigger guns.]

And then there's Larrabee....

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

And there are always FPGAs if we have the $$$ to go custom.

In fact, if it looks like this kind of machine is a go, I can see purpose-built silicon to do simulations. That should get the speed up to a useful level.

If Congress turns on the $$$$$$$$ I would do an FPGA machine first as proof of concept and then go to custom silicon for a 10X speedup.
Engineering is the art of making what you want from what you can get at a profit.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

AMD announced a GPU with 64-bit math in November, to be released in Q1 2008. It, too, will ship with a C streams toolkit. The press release says it'll have 2 GB of onboard RAM and will run at 500 GFLOPS, a number that would equal the NVIDIA Tesla's... that 64-bit processing is looking awfully nice.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

MSimon wrote:If Congress turns on the $$$$$$$$ I would do an FPGA machine first as proof of concept and then go to custom silicon for a 10X speedup.
I don't see why you would need to. If you have a decent budget, you can get all kinds of supercomputer hours.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

scareduck wrote:
MSimon wrote:If Congress turns on the $$$$$$$$ I would do an FPGA machine first as proof of concept and then go to custom silicon for a 10X speedup.
I don't see why you would need to. If you have a decent budget, you can get all kinds of supercomputer hours.
The problem is getting machine time allocation.
Engineering is the art of making what you want from what you can get at a profit.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

dnavas wrote:
MSimon wrote:It might be useful to do multiplies on the accelerator and adds/subtracts on the main processor.
Unless they are MADDs, as those tend to run in a single cycle on GPUs....
You mean Fused multiply-add?
MSimon wrote:If you decide to go the FPGA route let me know. I have some experience along those lines.
I would think building an FPGA to do double- or quad-precision floating point would be a rather specialized, demanding task. Are there libraries to do the basics (MDAS, say) available?
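For reference, the single-cycle MADD mentioned above matters for accuracy as well as speed: a fused multiply-add rounds once, where a separate multiply and add round twice. A minimal Python sketch of the difference (using Fraction to stand in for the exact intermediate product that a hardware FMA keeps; the operands are contrived to force the unfused multiply to round):

```python
from fractions import Fraction

# Operands chosen so a*b needs 54 bits of mantissa -- one more
# than a double can hold -- so the unfused multiply must round.
a = float(2**27 + 1)
b = float(2**27 - 1)
c = float(-(2**54))

unfused = a * b + c  # a*b rounds up to 2**54 first, so the sum is 0.0

# A fused multiply-add would keep the exact product 2**54 - 1 before
# a single final rounding; Fraction arithmetic reproduces that here.
fused = float(Fraction(a) * Fraction(b) + Fraction(c))  # -1.0

print(unfused, fused)  # 0.0 -1.0
```

The unfused path loses the entire answer to one intermediate rounding, which is exactly why FMA hardware is attractive for precision-sensitive inner loops.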

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

scareduck wrote:
dnavas wrote:
MSimon wrote:It might be useful to do multiplies on the accelerator and adds/subtracts on the main processor.
Unless they are MADDs, as those tend to run in a single cycle on GPUs....
You mean Fused multiply-add?
MSimon wrote:If you decide to go the FPGA route let me know. I have some experience along those lines.
I would think building an FPGA to do double- or quad-precision floating point would be a rather specialized, demanding task. Are there libraries to do the basics (MDAS, say) available?
The only special stuff is the arithmetic ALU. The rest is just registers, memory, a stack, and an external RAM interface.
Engineering is the art of making what you want from what you can get at a profit.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

I just heard back from AMD... the FireStream 9170 ship date has slipped to April.

jmc
Posts: 427
Joined: Fri Aug 31, 2007 9:16 am
Location: Ireland

Re: What's the big (64-bit) deal, anyway?

Post by jmc »

scareduck wrote:
The device is almost electrically neutral. The departure from neutrality to create a 100 kV well is only one part in a million, when you have a density of 10^12 cm^-3. The departure from neutrality is so small that we found current computer codes and computers available to us were incapable of analyzing it because of the numeric noise in the calculations by a factor of a thousand.
You don't need to depart from neutrality at all to create a 100 kV well; all you have to do is displace negative charge inward with respect to positive charge (i.e., a spherical capacitor).
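jmc's spherical-capacitor picture can be made concrete: the charge that must be displaced to hold a 100 kV well between two radii is a tiny fraction of the total charge present at these densities. A back-of-envelope sketch (the radii are assumed purely for illustration; the exact fraction depends on geometry):

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
e    = 1.602e-19   # elementary charge, C

V = 1.0e5          # target well depth, volts
a = 0.05           # inner radius, m (assumed)
b = 0.15           # outer radius, m (assumed)
n = 1.0e18         # density, m^-3 (i.e. 10^12 cm^-3)

# Spherical capacitor: V = Q/(4*pi*eps0) * (1/a - 1/b)
Q = V * 4.0 * math.pi * eps0 / (1.0 / a - 1.0 / b)

displaced = Q / e                             # electrons displaced
total = n * (4.0 / 3.0) * math.pi * b**3      # electrons in the volume

print(displaced / total)  # ~4e-4 for these radii: a very small fraction
```

Whatever the precise geometry, the displaced fraction is orders of magnitude below unity, which is the root of the numerical-precision problem discussed in the quoted passage.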

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Re: What's the big (64-bit) deal, anyway?

Post by MSimon »

jmc wrote:
scareduck wrote:
The device is almost electrically neutral. The departure from neutrality to create a 100 kV well is only one part in a million, when you have a density of 10^12 cm^-3. The departure from neutrality is so small that we found current computer codes and computers available to us were incapable of analyzing it because of the numeric noise in the calculations by a factor of a thousand.
You don't need to depart from neutrality at all to create a 100 kV well; all you have to do is displace negative charge inward with respect to positive charge (i.e., a spherical capacitor).
A local departure from neutrality will require precision similar to a global departure.
Engineering is the art of making what you want from what you can get at a profit.
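The precision problem MSimon and the quoted passage describe is easy to reproduce: a one-part-per-million charge imbalance sits near the resolution limit of single-precision floats, so forming the net charge as a difference of two large, nearly equal densities drowns the signal in rounding noise. A minimal numpy sketch (densities illustrative, in cm^-3):

```python
import numpy as np

n_ion = 1.0e12       # ion density, cm^-3 (illustrative)
n_el  = 1.000001e12  # electrons: one part per million more

# The net charge comes from subtracting two nearly equal large numbers.
diff64 = float(np.float64(n_el) - np.float64(n_ion))
diff32 = float(np.float32(n_el) - np.float32(n_ion))

signal = 1.0e6                     # the physical ppm imbalance
err32  = abs(diff32 - signal) / signal

print(diff64)  # 1000000.0 -- double precision resolves the signal
print(err32)   # ~0.017 -- one float32 subtraction is already ~2% off
```

A couple percent of noise from a single subtraction, compounded over the millions of operations in a simulation, easily swamps a part-per-million signal by the "factor of a thousand" the quote mentions, which is why 64-bit hardware matters here.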

93143
Posts: 1142
Joined: Fri Oct 19, 2007 7:51 pm

Post by 93143 »

Would it be possible to model the distributions assuming equal charge densities, and then model (and scale) the departure from neutrality as a separate variable?

I guess it would be hard to apply in the areas where the near-neutrality assumption breaks down (like near the electron guns)...

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

93143 wrote:Would it be possible to model the distributions assuming equal charge densities, and then model (and scale) the departure from neutrality as a separate variable?
Yes, this is called "perturbation theory". The equation of state relating pressure to number density is the tricky part.
I guess it would be hard to apply in the areas where the near-neutrality assumption breaks down (like near the electron guns)...
You can still do that; you just isolate the space zones into separate problems. This is how people solve the plasma sheath near cathode and anode arcs. Each region near a source or sink is modeled separately from the bulk plasma, and you match density, potential, and current flow at the region boundaries. Usually you have to match derivatives as well so the blending is realistic.
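The matching idea drmike describes can be sketched in one dimension as a toy problem: a quasi-neutral bulk region carries a linear potential (Laplace's equation in 1D), a thin sheath region carries an exponential drop, and the free constants are fixed by requiring the potential and its derivative to be continuous at the boundary. All the numbers and the functional forms here are assumptions for illustration, not a real sheath model:

```python
import numpy as np

phi_p = 100.0  # potential on the bulk side, volts (assumed)
x0    = 1.0    # bulk/sheath boundary position, m (assumed)
lam   = 0.01   # sheath screening length, m (assumed)

# Bulk:   phi_b(x) = phi_p + b*x          (linear, quasi-neutral)
# Sheath: phi_s(x) = C * exp(-(x-x0)/lam) (exponential drop)
# Matching phi and dphi/dx at x0 gives two equations:
#   phi_p + b*x0 = C        (value match)
#   b            = -C/lam   (derivative match)
C = phi_p / (1.0 + x0 / lam)
b = -C / lam

bulk   = lambda x: phi_p + b * x
sheath = lambda x: C * np.exp(-(x - x0) / lam)

# The two regional solutions now blend smoothly at the boundary.
print(bulk(x0) - sheath(x0))  # ~0: values match at x0
```

In a real code each region would be solved numerically with its own physics, but the boundary bookkeeping (match values, match derivatives) is the same as in this sketch.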

spirilis
Posts: 1
Joined: Wed Oct 17, 2007 6:50 pm

Post by spirilis »

Anyone look at Amazon EC2?
http://www.amazon.com/EC2-AWS-Service-P ... 942TSJ2AJA

They have virtual machines "for rent", where you pay by the hour for runtime. The highest class includes 15 GB of RAM on a 64-bit CPU platform. The S3 storage service gives you a central dump spot for data (I/O from the internet to S3 costs money, I/O from the internet to EC2 instances costs money, but I/O from S3 to EC2 costs nothing).

Still need something to collect all the data, and in a precise-enough format, then enough bandwidth to upload everything... but it's an option. Compute-on-demand.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Nope, never saw that before. Thanks - I will be reading up on it!

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

FireStream 9250: Q3 2008 ship date, now.
