thread for segment files and parameters for simulation runs

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

KitemanSA wrote:HAPPYJACK27:
Simple question. Do your simulated moving charged particles create a simulated magnetic field? Yes or no?
yes. i use the biot-savart law for a charged particle, except when calculating the magnetic field between two moving charged particles, i use their velocity _relative to each other_ instead of their absolute velocities. this way it's as if one of them is fixed and the other is moving, so the formula for a particle moving through a fixed field applies. using relative velocity here was my own reasoning; i don't know if it's in any "textbook", but _not_ doing it this way just seems wrong to me.
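
to make that concrete, here's a rough sketch of the idea. this is simplified for illustration only, not the exact code from the repo, and the function name and signature are made up; it's also the plain (non-relativistic) form of the point-charge field:

[code]
// sketch only: magnetic field at particle i due to particle j, using the
// point-charge biot-savart law with the velocity of j taken RELATIVE to i
// rather than either particle's absolute velocity.
__device__ float3 fieldFromParticle(float3 pos_i, float3 vel_i,
                                    float3 pos_j, float3 vel_j, float q_j)
{
    const float MU0_OVER_4PI = 1.0e-7f;                  // T*m/A
    float3 r  = make_float3(pos_i.x - pos_j.x,           // from source j to i
                            pos_i.y - pos_j.y,
                            pos_i.z - pos_j.z);
    float  r2 = r.x*r.x + r.y*r.y + r.z*r.z + 1.0e-12f;  // softened to avoid 0/0
    float  inv_r = rsqrtf(r2);

    // velocity of the source particle as seen from the test particle
    float3 v = make_float3(vel_j.x - vel_i.x,
                           vel_j.y - vel_i.y,
                           vel_j.z - vel_i.z);

    // B = (mu0/4pi) * q_j * (v x r_hat) / r^2
    float k = MU0_OVER_4PI * q_j * inv_r / r2;
    return make_float3(k * (v.y*r.z - v.z*r.y),
                       k * (v.z*r.x - v.x*r.z),
                       k * (v.x*r.y - v.y*r.x));
}
[/code]

the softening constant and the sign convention for the relative velocity are just illustrative choices here; the lorentz force on particle i then uses the same relative velocity against this B.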

rjaypeters
Posts: 869
Joined: Fri Aug 20, 2010 2:04 pm
Location: Summerville SC, USA

Post by rjaypeters »

:D If I get to pick the part, I'd give a minor body part to work for Dr. Nebel & crew. happyjack27 has done the heavy lifting and deserves the credit.
"Aqaba! By Land!" T. E. Lawrence

R. Peters

Jeff Mauldin
Posts: 17
Joined: Thu Feb 21, 2008 8:41 pm

Propagation time for electric and magnetic fields?

Post by Jeff Mauldin »

I have been thinking about this type of simulation, and I am mentally stuck on one thing:

Are you accounting for the propagation of the magnetic fields generated by the moving charges (and the electric fields associated with the moving charges) or are you assuming the propagation speed to be infinite? I was thinking about doing a simulation myself, but I'm a bit stuck on the complexity introduced by the propagation of these fields. Calculating the magnetic field created by a moving charge in a 'static' e-field and b-field (i.e. the static fields created by the grid) seems fairly straightforward, and calculating how an individual particle would move based on the e-field and b-field at the location of the particle also seems fairly straightforward. At least those seem straightforward compared to calculating the cumulative e-field and b-field created both by the grid and by N particles, with the individual contribution to the cumulative field by each particle propagating away from a particle's moving position. I think the stuff I looked at of Indrek's involved things like a single electron in the static field, so the contribution of the single electron to the overall field (and the propagation of that contribution) could be safely neglected.

This complexity goes away if you just assume that the propagation of the e-field and b-field from the particle is instantaneous, but that seems a dangerous assumption. I don't have any great suggestion (other than ignoring the propagation issue and seeing what you get), but I was curious if you were already doing something about that.

icarus
Posts: 819
Joined: Mon Jul 07, 2008 12:48 am

Post by icarus »

I respect that happyjack is prepared to give it a go, but to be frank, I don't think anybody really knows exactly what happyjack is doing besides happyjack. Just warning people that it looks like he is playing pretty fast and loose with some of those assumptions going into that numerical gumbo. Take it for what it is worth, fair warning.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Re: Propagation time for electric and magnetic fields?

Post by happyjack27 »

Jeff Mauldin wrote:I have been thinking about this type of simulation, and I am mentally stuck on one thing:

Are you accounting for the propagation of the magnetic fields generated by the moving charges (and the electric fields associated with the moving charges) or are you assuming the propagation speed to be infinite? I was thinking about doing a simulation myself, but I'm a bit stuck on the complexity introduced by the propagation of these fields. Calculating the magnetic field created by a moving charge in a 'static' e-field and b-field (i.e. the static fields created by the grid) seems fairly straightforward, and calculating how an individual particle would move based on the e-field and b-field at the location of the particle also seems fairly straightforward. At least those seem straightforward compared to calculating the cumulative e-field and b-field created both by the grid and by N particles, with the individual contribution to the cumulative field by each particle propagating away from a particle's moving position. I think the stuff I looked at of Indrek's involved things like a single electron in the static field, so the contribution of the single electron to the overall field (and the propagation of that contribution) could be safely neglected.

This complexity goes away if you just assume that the propagation of the e-field and b-field from the particle is instantaneous, but that seems a dangerous assumption. I don't have any great suggestion (other than ignoring the propagation issue and seeing what you get), but I was curious if you were already doing something about that.
i'm assuming it's instantaneous, though with 3 "adjustments":

a) i'm using the _relativistic_ version of the biot-savart law for a point charge at constant velocity, as stated by wikipedia.

b) i'm using _relative_ particle velocity instead of absolute velocity. by using relative velocity between particles instead of absolute velocity (which is just velocity relative to "the observer" anyway), i'm simulating a "moving" magnetic field by adding the velocity that the magnetic field is moving relative to the particle to the particle's velocity, and then just treating it as a particle moving that much faster/slower through a fixed magnetic field. i.e. i'm using the particle that's the source of the field as the "inertial reference frame".

c) i'm accumulating the forces, then adding them to velocity relativistically by converting to "proper inertia" ("rapidity"), adding the acceleration, then converting back via a lorentz transform. so given a sufficiently small time step, the model is correct even at relativistic velocities. apart from the em fields propagating faster than light, that is.

i don't think assuming instantaneous transfer of force makes the model inaccurate at this scale. the macro em fields really don't change very fast, especially as this nears equilibrium, in which case they're pretty much static, even when you include the plasma currents. and they certainly don't change at relativistic velocities, except very close to particles that are themselves travelling at relativistic velocities. and even then, the contribution is proportional to the charge of a single particle, which is of course very small. besides which, i doubt any particle in the sim is traveling that fast in the first place. and like i said, on the macro level these are just ephemeral disturbances in a much slower-changing em field.
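
to illustrate adjustment (c), here's the kind of update i mean, written with proper velocity (gamma*v) standing in for "proper inertia". this is a simplified sketch, not the exact repo code, and the name and signature are made up:

[code]
// sketch only: relativistic velocity update. convert v to proper velocity
// u = gamma*v, add the accumulated acceleration for this time step, then
// convert back. this keeps |v| below c however large the accumulated force is.
__device__ float3 relativisticVelocityUpdate(float3 v, float3 accel, float dt)
{
    const float C  = 2.99792458e8f;   // speed of light, m/s
    const float C2 = C * C;

    // current lorentz factor
    float v2    = v.x*v.x + v.y*v.y + v.z*v.z;
    float gamma = rsqrtf(1.0f - v2 / C2);

    // proper velocity plus the accumulated acceleration
    float3 u = make_float3(gamma*v.x + accel.x*dt,
                           gamma*v.y + accel.y*dt,
                           gamma*v.z + accel.z*dt);

    // back to coordinate velocity: gamma' = sqrt(1 + |u|^2/c^2), v' = u/gamma'
    float u2        = u.x*u.x + u.y*u.y + u.z*u.z;
    float inv_gamma = rsqrtf(1.0f + u2 / C2);
    return make_float3(u.x*inv_gamma, u.y*inv_gamma, u.z*inv_gamma);
}
[/code]
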
Last edited by happyjack27 on Tue Dec 07, 2010 8:14 pm, edited 2 times in total.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

icarus wrote:I respect that happyjack is prepared to give it a go, but to be frank, I don't think anybody really knows exactly what happyjack is doing besides happyjack.
well yes, but my code is open source and publicly available via svn here. i'd love to have more eyes on it (and hands!) and i'm happy to answer any questions.

quixote
Posts: 130
Joined: Fri Feb 05, 2010 8:44 pm

Post by quixote »

happyjack27 wrote: well yes, but my code is open source and publicly available via svn here. i'd love to have more eyes on it (and hands!) and i'm happy to answer any questions.
I haven't said anything because I haven't done anything useful with it yet, but I am examining it. I plan to take the cuda code and convert it to opencl so that everyone can play along. First, however, I need to downgrade the sdk version from 3.2 to 3.0 since nvidia has removed cuda-emulation from the 3.2 sdk and I don't have an nvidia card in the computer I'm working on.

I've already done the sdk downgrade and the application runs in emulation mode, but for some reason I can only see the grid and the sliders, not the particles. Also, some of the sliders are wonky.

If I have anything more useful to report, I'll let you know.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

Jeff Mauldin wrote:I have been thinking about this type of simulation, and I am mentally stuck on one thing:
the question by the way reminds me of one of my favorite quotes:

"Everything must be made extremely simple, but to do this, one must first master complexity." - Butler Lampson (in the glory days of the Palo Alto Research Center)

for example, you need to fully consider the ramifications of propagating the em field at the speed of light vs. instantaneously in the given context in order to realize that you can simply remove that constraint!
Last edited by happyjack27 on Tue Dec 07, 2010 8:40 pm, edited 1 time in total.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

quixote wrote:
happyjack27 wrote: well yes, but my code is open source and publicly available via svn here. i'd love to have more eyes on it (and hands!) and i'm happy to answer any questions.
I haven't said anything because I haven't done anything useful with it yet, but I am examining it. I plan to take the cuda code and convert it to opencl so that everyone can play along. First, however, I need to downgrade the sdk version from 3.2 to 3.0 since nvidia has removed cuda-emulation from the 3.2 sdk and I don't have an nvidia card in the computer I'm working on.

I've already done the sdk downgrade and the application runs in emulation mode, but for some reason I can only see the grid and the sliders, not the particles. Also, some of the sliders are wonky.

If I have anything more useful to report, I'll let you know.
aye. when it's running, first press "c" to stop it from cycling through pre-defined parameters, then you'll have to adjust the sliders to get them into a reasonable range so that the particles don't just all fly out. also make sure to set the particle filters, particularly z-bottom and z-width, so that you don't filter out all of the particles. also be sure to zoom in (hold down ctrl and the mouse button) close enough to see the particles. (it's in the readme provided by nvidia.) then press "3" to reset the simulation.

quixote
Posts: 130
Joined: Fri Feb 05, 2010 8:44 pm

Post by quixote »

happyjack27 wrote: aye. when it's running, first press "c" to stop it from cycling through pre-defined parameters, then you'll have to adjust the sliders to get them into a reasonable range so that the particles don't just all fly out. also make sure to set the particle filters, particularly z-bottom and z-width, so that you don't filter out all of the particles. also be sure to zoom in (hold down ctrl and the mouse button) close enough to see the particles. (it's in the readme provided by nvidia.) then press "3" to reset the simulation.
Right. I copied the parameters as closely as I could read from http://www.youtube.com/watch?v=dDa81EFRHjU, but no particles appear. I probably fubared something, but I'll let it run for a while to see if anything happens. How do you start yours? nbody -n=65536?

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

quixote wrote:
happyjack27 wrote: aye. when it's running, first press "c" to stop it from cycling through pre-defined parameters, then you'll have to adjust the sliders to get them into a reasonable range so that the particles don't just all fly out. also make sure to set the particle filters, particularly z-bottom and z-width, so that you don't filter out all of the particles. also be sure to zoom in (hold down ctrl and the mouse button) close enough to see the particles. (it's in the readme provided by nvidia.) then press "3" to reset the simulation.
Right. I copied the parameters as closely as I could read from http://www.youtube.com/watch?v=dDa81EFRHjU, but no particles appear. I probably fubared something, but I'll let it run for a while to see if anything happens. How do you start yours? nbody -n=65536?
no parameters. i just run it straight vanilla from the IDE ("nbody"). and i think i might have the particle count hardcoded to twice the number of cores, anyways. oh, and i didn't carry over any of my mods to the cpu-only code (vs. the cuda code) so don't even try that. and i haven't bothered to make sure it works in double precision mode (which would run at about 1/8th the speed on anything but a Tesla and is totally unnecessary anyways, imho.)

what revision number do you have? that video is a 3m polywell. on a 15cm one the amp turns would be way high. you'd want to bring them down to about 10^5.292 (comes out to about 0.82 tesla). the "sample configuration file" is where the magrid and initial particle starting-zone info is. but i see you already figured that out, as the path to that file was hardcoded, so you'd have to change that to get the magrid to show up.
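
as a sanity check on that number (a back-of-the-envelope of my own, assuming the quoted value is the on-axis field at the center of a single loop of radius R ~ 0.15 m, which is my assumption about the geometry):

B = mu0 * NI / (2*R)
  = (4 * pi * 1e-7 T*m/A) * (10^5.292 A-turns) / (2 * 0.15 m)
  ~ 0.82 T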

also i've futzed with the time step between revisions so i don't know what it is with that one. doesn't hurt to start it off as low as possible.

also, i see that zmin and zmax are at 0 in that video. that tells me they're not working yet in that video, because if they were actually set there you wouldn't see any particles (you'd only see particles with z coordinate 0 <= z <= 0).

hope that helps.

quixote
Posts: 130
Joined: Fri Feb 05, 2010 8:44 pm

Post by quixote »

happyjack27 wrote:
quixote wrote:
happyjack27 wrote: aye. when it's running, first press "c" to stop it from cycling through pre-defined parameters, then you'll have to adjust the sliders to get them into a reasonable range so that the particles don't just all fly out. also make sure to set the particle filters, particularly z-bottom and z-width, so that you don't filter out all of the particles. also be sure to zoom in (hold down ctrl and the mouse button) close enough to see the particles. (it's in the readme provided by nvidia.) then press "3" to reset the simulation.
Right. I copied the parameters as closely as I could read from http://www.youtube.com/watch?v=dDa81EFRHjU, but no particles appear. I probably fubared something, but I'll let it run for a while to see if anything happens. How do you start yours? nbody -n=65536?
no parameters. i just run it straight vanilla from the IDE ("nbody"). and i think i might have the particle count hardcoded to twice the number of cores, anyways. oh, and i didn't carry over any of my mods to the cpu-only code (vs. the cuda code) so don't even try that. and i haven't bothered to make sure it works in double precision mode (which would run at about 1/8th the speed on anything but a Tesla and is totally unnecessary anyways, imho.)

what revision number do you have? that video is a 3m polywell. on a 15cm one the amp turns would be way high. you'd want to bring them down to about 10^5.292 (comes out to about 0.82 tesla). the "sample configuration file" is where the magrid and initial particle starting-zone info is. but i see you already figured that out, as the path to that file was hardcoded, so you'd have to change that to get the magrid to show up.

also i've futzed with the time step between revisions so i don't know what it is with that one. doesn't hurt to start it off as low as possible.

also, i see that zmin and zmax are at 0 in that video. that tells me they're not working yet in that video, because if they were actually set there you wouldn't see any particles (you'd only see particles with z coordinate 0 <= z <= 0).

hope that helps.
I'm on the latest, revision 62. I don't want to waste your time remotely debugging at the moment. I made a lot of changes trying to get it to work in emulation mode with the 3.2 sdk, only to learn they completely removed it starting in 3.1. Additionally, I had to strip out all cutilSafeCall references since I couldn't get them to work in emulation mode. Oh, I also repackaged the external dependencies to be contained within the project, and parameterized most of the build configurations.

I am going to start over from scratch now that I know how all the components fit together. Once that's done, I will investigate the code more so I can understand what it's doing and respond more intelligently.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

quixote wrote: I'm on the latest, revision 62. I don't want to waste your time remotely debugging at the moment. I made a lot of changes trying to get it to work in emulation mode with the 3.2 sdk, only to learn they completely removed it starting in 3.1. Additionally, I had to strip out all cutilSafeCall references since I couldn't get them to work in emulation mode. Oh, I also repackaged the external dependencies to be contained within the project, and parameterized most of the build configurations.

I am going to start over from scratch now that I know how all the components fit together. Once that's done, I will investigate the code more so I can understand what it's doing and respond more intelligently.
wow. you've done quite a bit!

i'll commit my latest stable revision right now, then. so you can use that.

i was thinking about 64k particles and realized that would be SLOWWW. i use 14k and get 15fps, and it's an N^2 algorithm. then i recalled that i hard-coded some numbers in the bodysystemcuda_impl.h "setSoftening" function (which, as i imagine you've figured out, i use as my catch-all to set ALL the constant memory values) that assume a particle count of 14336 (2x the core count for my gpu), namely the ion and electron representation ratios, i believe ("iclumping" and "eclumping", respectively). if that assumption is off then the representation ratio will be way off, which means the time step would have to be reduced (or increased) considerably to account for the higher (or lower) plasma pressure.

the new time step is logarithmic (base 10), and it starts out at a picosecond per second, assuming a 15fps frame rate. so if x is where the slider is, it's (10^x)/15 picoseconds per frame. everything else is in SI base units (coulombs, amp turns, meters, etc.), and if it's log10 it says so on the slider.
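
in code terms, the slider-to-time-step conversion is basically this (illustrative only, not the exact repo code):

[code]
#include <math.h>

// sketch: convert the log10 time-step slider value x into simulated seconds
// per frame -- i.e. (10^x)/15 picoseconds per frame at the nominal 15 fps.
float frameTimeStep(float sliderX)
{
    const float FPS        = 15.0f;
    const float PICOSECOND = 1.0e-12f;               // seconds
    return powf(10.0f, sliderX) / FPS * PICOSECOND;  // seconds per frame
}
[/code]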

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

how components fit together, at least insofar as what i've modified from the original source:

most of the modified code is in bodysystem.cu, of course. the per-particle code is in the integrateBodies function, which calls applystaticfields for the static fields, and then bodybodyinteraction for the particle-particle pairs. at the end of the function you'll see it selects which variables to use for the display coordinates.

the rendering is then done in render_particles.cpp, where you'll find i added code to display the magrid.

besides that, the slider functions are in nbody, and sending that info to cuda is in the aforementioned setsoftening function in bodysystemcuda_impl.h.

and, perhaps the most obvious of all, "read configuraton file.cpp" does precisely that. it's called from the setsoftening function, as that info is all stored directly in constant memory on the gpu.

and i think that about covers it.

(oh, and to get the phase space view to work, i had to do some modifications to a number of files to get the display coordinates, now separate from the particle position coordinates, through the rather indirect class hierarchy.)

quixote
Posts: 130
Joined: Fri Feb 05, 2010 8:44 pm

Post by quixote »

Thanks for the explanations. They'll prove useful when I get to do some actual coding (hopefully soon)!

I intend to rebuild my environment tonight, and will describe what changes I've made in a readme just so it's clear (to others and to myself). Once it's complete and working, I am going to try to make it a bit more Linux-friendly so that once we've got something more interesting going on (that is, all parts simulated) I can give it a go on Amazon's new GPU instances.

Perhaps I should work in a branch so we don't knock heads while you're working on the simulation and I'm doing this?
