Virtual Polywell

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Here's a picture of the brute force calculation of the potential on the MaGrid. It's not a great picture, but it tells me what I want to know: the potential is high near the grid and falls off everywhere away from it, out to the outer shell.

[Image: brute-force MaGrid potential plot]
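In case anyone wants to see what I mean by "brute force": it's basically a direct Coulomb sum from charge placed on the coils to every lattice point. Here's a stripped-down sketch of the idea (not the actual potential.c - the coil geometry, charge per segment, and step counts are placeholders I made up for illustration):

Code: Select all

/* Sketch: brute-force electrostatic potential from a "MaGrid"
 * approximated as 6 charged rings (one per face of a cube).
 * Illustration only - geometry and constants are placeholders. */
#include <stdio.h>
#include <math.h>

#define NSEG   180        /* segments per ring                     */
#define NGRID  50         /* lattice points per axis               */
#define RCOIL  0.15       /* coil radius (m), placeholder          */
#define HALF   0.20       /* coil center offset (m), placeholder   */
#define QSEG   1.0e-9     /* charge per segment (C), placeholder   */
#define KE     8.9875517873681764e9   /* 1/(4*pi*eps0)             */

int main(void)
{
    /* coil centers on the +/-x, +/-y, +/-z faces, with their normal axes */
    static const double cent[6][3] = {
        { HALF,0,0},{-HALF,0,0},{0, HALF,0},{0,-HALF,0},{0,0, HALF},{0,0,-HALF}};
    static const int axis[6] = {0,0,1,1,2,2};

    for (int i = 0; i < NGRID; i++)
    for (int j = 0; j < NGRID; j++)
    for (int k = 0; k < NGRID; k++) {
        double p[3] = { -0.3 + 0.6*i/(NGRID-1),
                        -0.3 + 0.6*j/(NGRID-1),
                        -0.3 + 0.6*k/(NGRID-1) };
        double phi = 0.0;
        for (int c = 0; c < 6; c++)
            for (int s = 0; s < NSEG; s++) {
                double a = 2.0*M_PI*s/NSEG;
                double q[3] = { cent[c][0], cent[c][1], cent[c][2] };
                /* place the ring in the plane perpendicular to its axis */
                q[(axis[c]+1)%3] += RCOIL*cos(a);
                q[(axis[c]+2)%3] += RCOIL*sin(a);
                double dx = p[0]-q[0], dy = p[1]-q[1], dz = p[2]-q[2];
                double r  = sqrt(dx*dx + dy*dy + dz*dz);
                if (r > 1e-6) phi += KE*QSEG/r;  /* skip the singular point */
            }
        printf("%g %g %g %g\n", p[0], p[1], p[2], phi);
    }
    return 0;
}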

To compute the potential and electric field from an electron distribution, I first need the potential everywhere in the polywell due to the grid alone. I still haven't added a full plasma, and even the first step - computing the potential from an electron fluid - is really challenging. I'd expect the fluid to explode, because there is no positive charge in the center to hold it. So it will be an interesting experiment in modeling.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

I wonder if you could do a rotating charged sphere and see if the magnetic field wiffles, and especially what it does around the cusps.
Engineering is the art of making what you want from what you can get at a profit.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

I'm still trying to do initial conditions!!

Interesting idea, though - once I figure out how to do the simple stuff, it'd be fun to try.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Here is a description of what I mean by "trying to compute initial conditions." It is a really "simple" model of an electron distribution in a Polywell. I assume Maxwellian electrons, which may not be the case, but it's a place to start and a way to get some clues about just how complicated the modeling problems will be.

All you computer and math geeks should feel free to give me pointers on how to solve that last integral. I already have tables of the MaGrid potential (see the plot above), but this seems like a hard problem no matter how I slice it. Fun, though!!

Edit: I can tell I'm in a hurry - there are a lot of things I could say more clearly. For example, the last equation is derived by first integrating over all velocities, but I never say that, I just did it. I will fix things and upload new versions when I get a chance - all comments on what to fix are welcome.
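For the record, the velocity integration I glossed over is the standard Boltzmann-factor result: for Maxwellian electrons at a single temperature, integrating over all velocities leaves a density proportional to exp(e*(phi - phi0)/kT). A sketch of that step, reading a tabulated potential (the file format, temperature, reference potential, and reference density are placeholders, not what my code actually uses):

Code: Select all

/* Sketch: Boltzmann-factor electron density from a tabulated potential.
 * Integrating a Maxwellian f ~ exp(-(m v^2/2 - e*phi)/kT) over all
 * velocities gives n(r) = n0 * exp(e*(phi(r) - phi0)/kT).
 * File format and constants are placeholders. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double kT   = 100.0;    /* electron temperature in eV (placeholder)       */
    const double n0   = 1.0e16;   /* density where phi = PHI0, m^-3 (placeholder)   */
    const double PHI0 = 10000.0;  /* reference potential in volts (placeholder)     */

    FILE *in  = fopen("potential.dat", "r");
    FILE *out = fopen("density.dat", "w");
    if (!in || !out) { perror("open"); return 1; }

    double x, y, z, phi;          /* phi assumed tabulated in volts */
    while (fscanf(in, "%lf %lf %lf %lf", &x, &y, &z, &phi) == 4) {
        /* e*phi/kT is just phi/kT when phi is in volts and kT in eV */
        double n = n0 * exp((phi - PHI0) / kT);
        fprintf(out, "%g %g %g %g\n", x, y, z, n);
    }
    fclose(in);
    fclose(out);
    return 0;
}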

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

I just uploaded a better version, same place:
http://www.eskimo.com/~eresrch/Fusion/e ... bution.pdf

Time to crunch numbers.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

This might be useful:

http://www.iop.org/EJ/article/-alert=34 ... 025002.pdf

Suppression of the filamentation instability by a flow-aligned magnetic field: testing the analytic threshold with PIC simulations

===

http://www.iop.org/EJ/article/-alert=34 ... 025005.pdf

Stabilization of large-scale fluctuations in noise-driven plasmas by magnetic field
Engineering is the art of making what you want from what you can get at a profit.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

I saved copies - I'll check them out when I get a chance!

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

I started my code run for the 3D integral. It took me a while to take care of all the special cases. The code is posted here. Since it will take about 22 hours (+/- 5 hours, by my crude estimate), it is clearly not very efficient. I also don't need quite so much geometric accuracy - reducing the number of radial steps from 400 to 100 should cut the run time by at least a factor of 10.

The kicker is that I need to do this several times to figure out the stable potential distribution at each time step. So while I'm getting a nice conceptual understanding of the problem - there's lots of room for improvement on the practical side of things! Comments on how to improve things greatly appreciated.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

drmike wrote: Comments on how to improve things greatly appreciated.
... especially ones that don't involve spending money. :-)

dch24
Posts: 142
Joined: Sat Oct 27, 2007 10:43 pm

Post by dch24 »

I'm working on a faster version of electron_fluid.c. However, I'm running into an error:

Code: Select all

$ ./ef
Density integral result = 0.274490
absolute error = 0.000000
opening data files and reading them in.
i = 1
gsl: qag.c:261: ERROR: could not integrate function
Default GSL error handler invoked.
Aborted
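When I get back to it, the first thing I'll probably try is turning off GSL's abort-on-error handler and checking the return status myself, roughly like this (a sketch with a dummy integrand standing in for the real one in electron_fluid.c):

Code: Select all

/* Sketch: catch GSL integration failures instead of aborting.
 * The integrand here is a dummy stand-in, not the real one. */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_integration.h>

static double dummy_integrand(double x, void *params)
{
    (void)params;
    return exp(-x * x);           /* placeholder for the real integrand */
}

int main(void)
{
    gsl_set_error_handler_off();  /* don't abort; report the status instead */

    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);
    gsl_function F = { &dummy_integrand, NULL };

    double result, abserr;
    int status = gsl_integration_qag(&F, 0.0, 10.0,
                                     1e-8, 1e-6,   /* maybe loosen these */
                                     1000, GSL_INTEG_GAUSS61,
                                     w, &result, &abserr);
    if (status)
        fprintf(stderr, "qag failed: %s\n", gsl_strerror(status));
    else
        printf("result = %g, abserr = %g\n", result, abserr);

    gsl_integration_workspace_free(w);
    return 0;
}
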
So in the meantime, until I figure that out, I've parallelized potential.c as a starting point. Tell it how many CPUs to use, and it gets a fairly linear speedup from as many CPUs as you've got.

I've taken the quick 'n dirty approach first. I changed line 270: for( i=0; i<MAXSTEPS+1; i++) to take i from the command line, then write a "slice" of potential.dat. This means I can throw make at it, like this:

Code: Select all

$ time make
gcc -o gen_potential100_dat -D MAXSTEPS=100 potential.c -lm -lgsl
0 of 100
1 of 100
...
99 of 100
100 of 100

real    0m30.158s
user    0m28.910s
sys     0m0.648s

Code: Select all

$ time make -j2
gcc -o gen_potential100_dat -D MAXSTEPS=100 potential.c -lm -lgsl
0 of 100
6 of 100
1 of 100
7 of 100
...
98 of 100
99 of 100
100 of 100

real    0m18.278s
user    0m29.358s
sys     0m0.692s
With MAXSTEPS=400 I get a similar improvement.

Code: Select all

$ time make
...
real    28m40.125s
user    28m11.022s
sys     0m7.932s
$ time make -j2
...
real    14m20.629s
user    28m14.206s
sys     0m8.437s
I've verified that the output files exactly match those from the original potential.c. Use -j2 to use 2 CPUs, -j4 for 4 CPUs, etc. Download the files here: http://polywell.nfshost.com/ef_par_v01.zip. Note: I moved MAXSTEPS into the Makefile. It's set to 400, but if you want to reduce it, make sure everything gets recompiled after editing the Makefile.
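For anyone who doesn't want to download the zip just to see the shape of the change, the line-270 edit boils down to something like this (a sketch with made-up names and a stub for the real per-point work - not the actual diff):

Code: Select all

/* Sketch: take the outer-loop index i from the command line and
 * write one "slice" of the output, so make -jN can run slices in
 * parallel.  Names are made up; this is not the actual potential.c. */
#include <stdio.h>
#include <stdlib.h>

#ifndef MAXSTEPS
#define MAXSTEPS 100
#endif

/* stub for the real per-point calculation in potential.c */
static double compute_potential(int i, int j, int k)
{
    return (double)(i + j + k);   /* placeholder value */
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <slice index 0..%d>\n", argv[0], MAXSTEPS);
        return 1;
    }
    int i = atoi(argv[1]);        /* was: for (i = 0; i < MAXSTEPS+1; i++) */

    char name[64];
    snprintf(name, sizeof name, "potential_%03d.dat", i);
    FILE *out = fopen(name, "w");
    if (!out) { perror(name); return 1; }

    for (int j = 0; j <= MAXSTEPS; j++)       /* one i-slice of the lattice */
        for (int k = 0; k <= MAXSTEPS; k++)
            fprintf(out, "%d %d %d %g\n", i, j, k, compute_potential(i, j, k));

    fclose(out);
    printf("%d of %d\n", i, MAXSTEPS);
    return 0;
}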

I still want to do a pthread version of potential.c, so that it's completely self-contained and doesn't need an involved Makefile. I'd rather spend my time making electron_fluid.c faster, though.
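If I do get to it, the pthread skeleton would be roughly this: split the same i loop across a fixed number of threads (just a sketch - the per-slice work is a stub, and each thread would need its own GSL workspace and output file):

Code: Select all

/* Sketch: pthread split of the outer i-loop in potential.c.
 * compute_slice() is a stub standing in for the real per-i work. */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#ifndef MAXSTEPS
#define MAXSTEPS 100
#endif
#define NTHREADS 4

static void compute_slice(int i)
{
    /* placeholder: the real code would fill one slice of potential.dat */
    printf("%d of %d\n", i, MAXSTEPS);
}

static void *worker(void *arg)
{
    long id = (long)arg;
    /* interleave the slices: thread id takes i = id, id+NTHREADS, ... */
    for (int i = (int)id; i <= MAXSTEPS; i += NTHREADS)
        compute_slice(i);
    return NULL;
}

int main(void)
{
    pthread_t th[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        if (pthread_create(&th[t], NULL, worker, (void *)t)) {
            perror("pthread_create");
            return 1;
        }
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(th[t], NULL);
    return 0;
}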

Edit: added the MAXSTEPS=400 times. The Makefile is limited to 16 processors, but that's easily expandable.
Last edited by dch24 on Mon Jan 28, 2008 8:03 am, edited 1 time in total.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

Have you run this through any kind of performance analyzer to see where the code is spending its time, drmike?

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

You are right - Free is good!!

Nope, I don't have multi-core hardware (yet), so the speedups from going parallel aren't available to me. But I sure like what I see there, dch24!!

I think reducing MAXSTEPS (or MAXDIM in some code - I'm not too consistent) down to 100 makes a lot of sense. Especially if you can get system time down to fractions of a second per run. That's impressive!

And no, I don't have any profiling tools. I'm thinking the allocation of the workspace is a waste of time - I should make it global and just allocate it once. It would have to be once per processor for multi-core. But that's pure speculation!
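What I have in mind is roughly this (a sketch only, with a made-up integrand): allocate the GSL workspace once up front and reuse it for every integration, instead of alloc/free inside each call:

Code: Select all

/* Sketch: allocate the GSL integration workspace once and reuse it,
 * instead of allocating and freeing it inside every integration call. */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_integration.h>

#define WORK_LIMIT 1000

static gsl_integration_workspace *g_work;   /* one per process (or per thread) */

static double test_fn(double x, void *p)    /* placeholder integrand */
{
    (void)p;
    return sin(x) * sin(x);
}

/* thin wrapper: every call reuses the same workspace */
static int integrate_once(gsl_function *f, double a, double b,
                          double *result, double *abserr)
{
    return gsl_integration_qag(f, a, b, 1e-8, 1e-6, WORK_LIMIT,
                               GSL_INTEG_GAUSS61, g_work, result, abserr);
}

int main(void)
{
    g_work = gsl_integration_workspace_alloc(WORK_LIMIT);  /* once, up front */

    gsl_function F = { &test_fn, NULL };
    double result, abserr;
    for (int i = 0; i < 5; i++)             /* many integrations, one workspace */
        integrate_once(&F, 0.0, M_PI * (i + 1), &result, &abserr);
    printf("last result = %g\n", result);

    gsl_integration_workspace_free(g_work); /* once, at the end */
    return 0;
}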

The electron_fluid code should parallelize well. It is a 3D integral, so each region of space can be sent to a specific processor. The way I broke it up in the code, the bottom level integrates a ring, the next level turns that into a sphere, and the last integral is radial. So you could break the radial range into zones of roughly equal volume, one per processor. The outer shells would be thin compared to the inner ones, but the amount of work would be about equal.
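Concretely, equal-volume radial zone boundaries are just r_k = R*(k/N)^(1/3), so carving up the outer integral for N processors could look something like this (a sketch - the shell integral is a stub standing in for the ring-and-sphere levels):

Code: Select all

/* Sketch: split the outer (radial) integral into N equal-volume zones,
 * one per processor.  shell_integral() is a stub for the inner
 * ring-and-sphere integration levels. */
#include <stdio.h>
#include <math.h>

#define R_MAX   0.3      /* outer radius of the region (placeholder) */
#define NZONES  4        /* one zone per processor                   */
#define NRSTEPS 100      /* radial steps across the whole radius     */

/* stub: would do the ring integral, then sum rings into a shell */
static double shell_integral(double r)
{
    return 4.0 * M_PI * r * r;   /* placeholder: shell "density" = 1 */
}

int main(void)
{
    /* equal-volume zone boundaries: r_k = R * (k/N)^(1/3) */
    double bound[NZONES + 1];
    for (int k = 0; k <= NZONES; k++)
        bound[k] = R_MAX * cbrt((double)k / NZONES);

    double total = 0.0;
    for (int z = 0; z < NZONES; z++) {          /* each zone -> one CPU */
        double lo = bound[z], hi = bound[z + 1];
        int    n  = (int)(NRSTEPS * (hi - lo) / R_MAX) + 1;
        double dr = (hi - lo) / n, sum = 0.0;
        for (int i = 0; i < n; i++)             /* simple midpoint rule */
            sum += shell_integral(lo + (i + 0.5) * dr) * dr;
        printf("zone %d: r in [%g, %g], partial = %g\n", z, lo, hi, sum);
        total += sum;
    }
    printf("total = %g (exact for this stub: %g)\n",
           total, 4.0 / 3.0 * M_PI * pow(R_MAX, 3));
    return 0;
}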

It will be fun to see how far the first try gets. Improvements of 100x to 1000x would look darn good!

dch24
Posts: 142
Joined: Sat Oct 27, 2007 10:43 pm

Post by dch24 »

I wonder if we could pool some donations toward getting drmike a quad-core machine. OK, I know we're not going to be able to out-compete Santa Fe, but I'd be willing to put up $200.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

dch24:
That's really great - but let's wait and see what other people come up with.
It seems like there are lots of optimizations and better approximations that will get us closer to good answers without spending any money at all.

Plus, it seems like you have access to some nice machines. If we can get the code down to an hour of CPU time while you sleep (or the equivalent at some "nice" low-priority setting in the background), we can get some good answers, or at least estimates, for specific assumptions.

Besides, if Nebel et al. prove that it might work and that it's worth funding, we'll see the big boys get lots of cycles for hacking on this.

On top of that, the longer we can hold out, the better the hardware we can pick from later! That octa-core Apple looks mighty tasty - in 3 months it might have 2 or 4 times as much RAM for the same price.

I think we can learn a lot with what we have in hand. When it comes time for real engineering and I can build one in my basement, the extra help will be exceptionally welcome!

JohnP
Posts: 296
Joined: Mon Jul 09, 2007 3:29 am
Location: Chicago

Post by JohnP »

I've seen some discussion about multi-core machines and thought I'd ask whether everyone's sure the problem is, or can reasonably be, parallelized. I did some work a couple of years ago on a Beowulf-type system that turned out to be a complete dog. Beowulf is not a shared-memory model like multi-core, but even multi-core isn't suitable for every problem. Please excuse this comment if it's obvious to you - I haven't seen your code.
