thread for segments files and parameters for simulation runs

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

quixote
Posts: 130
Joined: Fri Feb 05, 2010 8:44 pm

Post by quixote »

I saw this in the readme.txt while I was moving files around and couldn't resist passing it on. My tired eyes mistakenly saw Johan F. Prins, and I almost ran for the hills lest I be compared unfavorably with a snake's anus for excessive curiosity.
readme.txt wrote: This sample code originally accompanied the GPU Gems 3 article
"Fast N-body Simulation with CUDA", by Lars Nyland, Mark Harris,
and Jan F. Prins. It has been enhanced since then.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

quixote wrote: Perhaps I should work in a branch so we don't knock heads while you're working on the simulation and I do this?
yeah, i just gave you write access to the svn. (assuming you use the same handle on sourceforge.) so go ahead and make a new folder or whatever.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

back to the image coils idea. i recalled some early all-electron runs and some things that were mentioned about them. most notably:

http://www.youtube.com/watch?v=XALqpF25xOU

and:

http://www.youtube.com/watch?v=dm0En2EE ... re=related

in the second video you see a light sphere of electrons surrounded by a darker one. in the first video you clearly see the spinning electrons about the cusps. what if THESE are the "image magnets"? who says the image magnets are necessarily coils? they're supposed to be the inverse of the magnetic field, right, to cancel it out such that in between there is a sphere of zero flux. so the inverse electromagnets of this complex geometry might look very much like those sorts of curved conic current spinners.

and then in that second video, that faint outer sphere of electrons might be the wiffleball formed by the inner sphere, which is in fact the image magnet.

icarus
Posts: 819
Joined: Mon Jul 07, 2008 12:48 am

Post by icarus »

happyjack:
what if THESE are the "image magnets"? who says the image magnets are necessarily coils?
I think you mean the plasma currents that repel the magnetic field ... and yes, the image magnets arranged in the image of the MaGrid are just the first, easiest approximation that will form a known spherical boundary of zero normal flux .... IF a wiffleball forms and repels the magnetic field as hypothesised, there are arbitrarily many current configurations that could produce that field ... only nature or a truly faithful simulation will show the actual plasma current configuration that produces the "wiffleball" effect (if it does in fact exist).

Carter also worked out a solution of many small, discrete plasma current rings lying on the surface of a wiffle 'bag', with the condition that kinetic pressure matched magnetic pressure (beta = 1).

My favourite pick for the plasma current configuration at present is a pattern of six whirling regions (ExB driven), one lying on each face of the wiffleball (bag), directly beneath an actual physical coil and concentric with it .... the boundaries between these regions align with the line cusps and are also regions of complicated shearing flows (turbulent transport), due to the currents all having the same rotational sense when viewed from above.

rcain
Posts: 992
Joined: Mon Apr 14, 2008 2:43 pm
Contact:

Post by rcain »

icarus wrote:.... the boundaries between these regions align with the line cusps and are also regions of complicated shearing flows (turbulent transport), due to the currents all having the same rotational sense when viewed from above.
..and there will be contributions to the net field pattern from the ion distributions as well as the electron distributions - though i'm guessing at a lower net magnitude, they will still measurably affect the ultimate shape of the (magnetic) inflexion surface (WB).

yes?

(though i'm thinking that will automatically be convolved in this sim)

KitemanSA
Posts: 6179
Joined: Sun Sep 28, 2008 3:05 pm
Location: OlyPen WA

Post by KitemanSA »

rcain wrote:..and there will be contributions to the net field pattern from the ion distributions as well as the electron distributions - though i'm guessing at a lower net magnitude, they will still measurably affect the ultimate shape of the (magnetic) inflexion surface (WB).

yes?
Not sure. The target on "this sim" moves so fast I'm not QUITE sure what it is, but since the talk is 14k particles, I'm not sure that there should BE any ions in it at all. IIRC, the number of electrons in a WB6 size unit was about 10E9 (10E6?) before any ions were introduced. That is why I would like the sim to include both the image magnet AND a simulated virtual cathode before adding ANY particles. At that point, it is not clear that any but the most upscattered fuel ions will even reach the wiffleball. The PRODUCT ions will definitely reach it, but that flux would be comparatively TINY, no?

I'd really like to see a "running" unit, wiffle ball and virtual cathode in place, and electrons and ions of equal number whizzing around. Then again, given that the change in collision size with velocity does not seem to be modeled, I'm not sure it will tell us anything useful.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

rcain wrote: (though i'm thinking that will automatically be convolved in this sim)
not sure what you mean by "convolved". if you mean that you wouldn't be able to distinguish the electron contribution to the mag field from the ion contribution, then yes, at the moment that is the case. at the moment the code doesn't even show mag field strength. the closest it shows is "force", which is lorentz force + coulomb force. with some work a mag field view can be added in, and a separate one for ions vs electrons. showing the mag _field_, however, i.e. its _direction_, would be a considerably more involved task. that would be a vector field, and right now there's no means of visualizing a vector field. it all can be done, of course. just that some things take considerably more coding and thus time.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

KitemanSA wrote:Not sure. The target on "this sim" moves so fast I'm not QUITE sure what it is, but since the talk is 14k particles, I'm not sure that there should BE any ions in it at all. IIRC, the number of electrons in a WB6 size unit was about 10E9 (10E6?) before any ions were introduced. That is why I would like the sim to include both the image magnet AND a simulated virtual cathode before adding ANY particles. At that point, it is not clear that any but the most upscattered fuel ions will even reach the wiffleball. The PRODUCT ions will definitely reach it, but that flux would be comparatively TINY, no?

I'd really like to see a "running" unit, wiffle ball and virtual cathode in place, and electrons and ions of equal number whizzing around. Then again, given that the change in collision size with velocity does not seem to be modeled, I'm not sure it will tell us anything useful.
that's 14k point charges, each of which represents many, many ions or electrons respectively. each point charge "sees" all the other point charges as if each one were billions of particles with the same position and velocity, but sees itself as only one particle. this, of course, is an approximation mechanism, but it's the only way to do it, and from my experiments it works pretty well (doesn't change the magnetohydrodynamics) as long as you don't go too high with it.

also, i do have nuclear cross section being calculated from velocity. it's display mode #8. i haven't shown it because at the current parameters nothing's travelling fast enough, so it's just completely flat. (though at the 3 m / 7 tesla scale there was some good cross section even on pb11.)

rcain
Posts: 992
Joined: Mon Apr 14, 2008 2:43 pm
Contact:

Post by rcain »

as i understand it, the ion population is approximately equal to the electron population (you just select them in/out of view with the sim slider) - which is where we want to end up as the start of our steady state regime. (maintained marginally electron-rich, net)

could magfields/Lorentz surfaces be neatly represented by simply swapping 'points' for coloured 'arrows', i wonder? or else project it out as a new phase space?

good to hear the 'cross section' view is actually working - do you trust it? (looking forward to some 'what-happens-if' sessions with that, once you're happy with everything else).

ps: by "convolved" - yes, absolutely as you say - unable to distinguish the electron contribution to the mag field from the ion contribution. thanks.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

rcain wrote: good to hear the 'cross section' view is actually working - do you trust it?
why trust when you can verify:

Code: Select all

T total_inertia = m_kescale * sqrt(dot(velocity,velocity)) * mass[ndx];
...
T keV = total_inertia / (1000.0f * elementary_charge * kescale);
#ifdef DT_FUSION
T dttt = (1.076f - 0.038f*keV);
T cross_section = m_barnscale * (147.24f + 18072.0f/(dttt*dttt+1)) / ( keV*(exp(27.57f*rsqrt(keV))-1) );
#else
T cross_section = m_barnscale * 84000.0f / (keV*exp(126.3f*rsqrt(keV)));
#endif
velocity is a vector in meters per second. m_kescale is the KE visualization scaling parameter (10^(the "KE scale (log10)" slider value)). elementary charge is in coulombs. m_barnscale is the barns visualization scaling parameter. "inertia" is really momentum, of course.

aye, it used to be ke. umm... no, i don't trust it. but i will in a sec.

Code: Select all

T keV = (dot(velocity,velocity) * mass[ndx]) / (1000.0f * elementary_charge * 2.0f);
...
#ifdef DT_FUSION
T dttt = (1.076f - 0.038f*keV);
T cross_section = m_barnscale * (147.24f + 18072.0f/(dttt*dttt+1)) / ( keV*(exp(27.57f*rsqrt(keV))-1) );
#else
T cross_section = m_barnscale * 84000.0f / (keV*exp(126.3f*rsqrt(keV)));
#endif
k, trust it now. ;)
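a quick spot check of the corrected line is easy to do standalone - here as a plain C sketch (the deuteron mass and the helper name are assumptions for illustration, not taken from the sim's actual code):

```c
#include <assert.h>
#include <math.h>

/* kinetic energy in keV from speed and mass, same shape as the
 * corrected kernel line: keV = (v.v * m) / (1000 * e * 2) */
double kinetic_keV(double v, double mass) {
    const double elementary_charge = 1.602176634e-19;  /* coulombs */
    return (v * v * mass) / (1000.0 * elementary_charge * 2.0);
}
```

a deuteron (~3.3436e-27 kg, assumed here) at 1e6 m/s comes out near 10.4 keV, which matches doing KE = ½mv² by hand.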

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

rcain wrote: could magfields/Lorentz surfaces be neatly represented by simply swapping 'points' for coloured 'arrows' i wonder? else project it out as a new phase space?
that would certainly be "neat", but it is much "simpler" to say than do. as for phase space projection, the easiest way would be separate x,y,and z components, but that probably wouldn't be very visually intuitive. so if i were to do that i'd lean towards the considerably more difficult vector approach. (though lines rather than arrows)

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

on the note of cross-section, i'd be much happier if i could get some density metrics to go along with it, so i can show cross section * density squared, but as i explained elsewhere that's not as simple to get as the other metrics (phase space axis choices) i've added.
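for reference, the quantity wanted there is the standard volumetric fusion rate, which is why density enters squared: R = n₁n₂σv, with a factor of ½ for identical reactants. a minimal sketch (all names and numbers here are hypothetical, not the sim's):

```c
#include <assert.h>
#include <math.h>

#define BARN_M2 1.0e-28   /* one barn in m^2 */

/* Volumetric reaction rate (reactions per m^3 per second) for a
 * single species reacting with itself:
 *   R = (1/2) * n^2 * sigma * v
 * The factor 1/2 avoids double-counting each pair of reactants. */
double rate_density(double n, double sigma_barns, double v) {
    return 0.5 * n * n * sigma_barns * BARN_M2 * v;
}
```

e.g. at n = 1e20 m⁻³, σ = 1 barn, v = 1e6 m/s this gives 5e17 reactions/m³/s, so even a flat cross-section view becomes interesting once it's multiplied by a density-squared map.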

rcain
Posts: 992
Joined: Mon Apr 14, 2008 2:43 pm
Contact:

Post by rcain »

happyjack27 wrote:why trust when you can verify:... ...
blimey, i wasn't expecting that. unfortunately i don't think i'm the best qualified for that task. any other volunteers?

though just taking a wild stab, if that formula is based on Duane coefficients/approximation (http://home.earthlink.net/~jimlux/nuc/sigma.htm ), should it not read:

Code: Select all

...
... /(keV(exp(27.57f/rsqrt(keV))-1))
...
etc

rather than

Code: Select all

...
... /(keV*(exp(27.57f*rsqrt(keV))-1) )
...
or have i read it arse-about-tit? (apologies if so).

EDIT: sorry, my bad:

Code: Select all

 rsqrt = 1/sqrt 
, so no problem.

(i'll leave someone else to check the constants - for D(d,n) Deuterium Target, deuteron projectile, i presume).
Last edited by rcain on Thu Dec 09, 2010 6:39 pm, edited 1 time in total.

rcain
Posts: 992
Joined: Mon Apr 14, 2008 2:43 pm
Contact:

Post by rcain »

happyjack27 wrote:
rcain wrote: could magfields/Lorentz surfaces be neatly represented by simply swapping 'points' for coloured 'arrows' i wonder? else project it out as a new phase space?
that would certainly be "neat", but it is much "simpler" to say than do. as for phase space projection, the easiest way would be separate x,y,and z components, but that probably wouldn't be very visually intuitive. so if i were to do that i'd lean towards the considerably more difficult vector approach. (though lines rather than arrows)
fair enough, just thought since you had the Lorentz force calculated already, it might have been easier to render it. a consideration for later perhaps.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

rcain wrote: though just taking a wild stab, if that formula is based on Duane coefficients/approximation (http://home.earthlink.net/~jimlux/nuc/sigma.htm ), should it not read:

Code: Select all

...
... /(keV(exp(27.57f/rsqrt(keV))-1))
...
etc
let me see. the source for the formula is this, page 17.

ah, i see. "rsqrt" stands for reciprocal square root, not just sqrt. so it is correct, in that ".../(keV*(exp(27.57f/sqrt(keV))-1))" is the same thing as ".../(keV*(exp(27.57f*rsqrt(keV))-1))".

at a hardware level, it turns out it actually takes less circuitry, and is a little faster and more accurate, to compute the reciprocal square root directly rather than computing the square root and then dividing one by it. (due to some math mumbo-jumbo algorithm stuff that i don't understand, but i understand at least that floating-point division necessarily requires much more circuit area than multiplication, which is justification in itself to avoid it where possible.)

and these cards are outgrowths of what was originally a 3d graphics card, and in 3d graphics a very common operation is converting a vector to a unit vector, which means dividing each component by the square root of the dot product of the vector with itself. i.e. vr = 1.0/sqrt(dot(v,v)); v.x = v.x*vr; etc. so it turns out reciprocal square root was very common anyway. so they decided to implement rsqrt natively instead of sqrt, and now to get an un-reciprocated square root you actually take the reciprocal of the reciprocal square root. which is pretty slow, of course, so you want to avoid it as much as possible. it turns out that for most vector-related math, implementing rsqrt natively (natively = in hardware) generally results in a faster final product than implementing sqrt natively, in addition to using up less circuit area and thereby allowing more cores on a chip. so all in all it was a great move on nvidia's part. but i digress.

Post Reply