
A Polywell recession

Posted: Fri Jan 23, 2009 4:40 pm
by TallDave
Imagine it's 2012 and Dr. Nebel and his cohorts have achieved a working p-B11 reactor that is everything we ever hoped for. Licenses are issued, plants spring up all over the world like daisies, and power is being produced at 1/10th of current prices.

What happens next? Well, you might be surprised to learn we'd probably see a severe recession, at least in the power industry, because the massive deflationary pressure will not only drive other technologies out of business but also reduce the total dollars being spent on energy (you probably wouldn't use ten times as much energy as you do now no matter how cheap it was).

There are obviously some fanciful aspects here (that's a very optimistic cost structure, and build-out times would be long and regulated), but this is more or less what happened in the telco industry when price-per-bit-per-mile fell dramatically due to new technologies.

I say this not to discourage anyone from Polywell, but to make the point that recessions aren't all bad. There's very often an element of creative destruction going on. So don't panic!

Posted: Fri Jan 23, 2009 4:58 pm
by kurt9
The money people currently spend on electricity would then be free to be spent on other goods and services. Any recession would be very mild and short.

Posted: Fri Jan 23, 2009 5:46 pm
by TallDave
Probably. But even if it were severe and prolonged (and it might be; power generation is a bigger industry than telco), it's important to realize that overall people would be better off because power would be cheaper.

And that is what happens every day in a smaller way throughout the economy. Someone figures out a cheaper, better way to do something, some people doing things the old way lose their jobs, overall money flow decreases, but overall utility rises.

That's why in real terms the poverty line today is very close to the average income in the 1950s.

Posted: Fri Jan 23, 2009 5:58 pm
by jgarry
IT, driven by Moore's law, has been a disruptive technology. New software does tend to put people out of work, but the resulting productivity gains ultimately drove the economy to produce more jobs. Well, at least in the mid-to-late nineties this happened. Some other effect seems to have offset these gains after about 2001.

Posted: Fri Jan 23, 2009 8:04 pm
by MSimon
jgarry wrote:IT, driven by Moore's law, has been a disruptive technology. New software does tend to put people out of work, but the resulting productivity gains ultimately drove the economy to produce more jobs. Well, at least in the mid-to-late nineties this happened. Some other effect seems to have offset these gains after about 2001.
It is called a secular decline. Profits from using the new technology are nowhere near what they once were because all the easy stuff has been done.

Something that could bring more profits online: cheaper software (our methods currently cost 10X the best known practices). Unfortunately, current methods are too entrenched. Not enough pain to take the risk of change - yet.

Or some new profit opportunity - biotech - ramps up sufficiently to absorb and generate a lot of new capital.

Posted: Fri Jan 23, 2009 8:11 pm
by jgarry
Moore's law has quietly expired, too. But the pace of tech continues apace. Disk drives have become orders of magnitude larger, for instance. No, I think the cause of the dramatic decline in job creation lies elsewhere.

Posted: Fri Jan 23, 2009 8:31 pm
by jgarry
You know, this is probably something that has been discussed here, but why hasn't some wealthy individual thrown in with the Polywell? Assuming the estimate of ~200 million USD is correct, why wouldn't someone like Gates put in the money for the opportunity to become unimaginably wealthy? Why is the government the main source of funding?

Posted: Fri Jan 23, 2009 8:38 pm
by MSimon
jgarry wrote:Moore's law has quietly expired, too. But the pace of tech continues apace. Disk drives have become orders of magnitude larger, for instance. No, I think the cause of the dramatic decline in job creation lies elsewhere.
Moore's law is up against the speed of light. The answer, of course, is smaller, less complex hardware. Pipelines? Branch predictors? Multilevel caches? Clocks?

We know how to get rid of them. Make processors smaller. And put 40 of them on a chip with technology that is way behind the leading edge. Operating power in milliwatts. Automatic shutdown of cores that have nothing to do.

http://www.intellasys.net/templates/tri ... aSheet.pdf

Oh, yeah. Development of software is way too expensive.

BTW, faster hardware and cheaper disks do not improve the profit of automating a given function.

When the computer revolution started, an app might generate a $100 return for a $5 investment - a 20:1 ROI. Now, due to the lower cost of computing, that investment is $2, but the return has fallen to $5 - a 2.5:1 ROI. Lowering computing costs further, to $1, is not going to give us the $95 net we once got. Secular decline. The easy stuff has been done.
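To make that arithmetic concrete, here is a toy C++ sketch of the comparison (the dollar figures are the hypothetical ones above, not data):

```cpp
// Toy illustration of the secular-decline arithmetic.
// The dollar figures are the hypothetical ones from the post, not data.
#include <cstdio>

int main() {
    // Early automation project: $5 of computing bought a $100 return.
    double early_cost = 5.0, early_return = 100.0;
    // Later project: computing is cheaper, but the easy wins are gone.
    double later_cost = 2.0, later_return = 5.0;

    std::printf("early: net $%.0f, ROI %.0f:1\n",
                early_return - early_cost, early_return / early_cost);
    std::printf("later: net $%.0f, ROI %.1f:1\n",
                later_return - later_cost, later_return / later_cost);
    // Even free computing (later_cost -> 0) caps the later net at $5,
    // nowhere near the $95 the early projects produced.
    return 0;
}
```
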
The speed, power and size advantages of the SEAforth 40C18 design are ideally suited for today’s high data throughput requirements in a wide range of consumer electronics, networking, automotive and defence applications. The chip is currently in beta testing at leading OEMs. “Power consumption is an extremely critical factor when designing devices for embedded automotive applications,” said Jeff Ota, Advanced Technology Engineering, BMW Technology Office Palo Alto. “While running tests on edge filtering for automotive imaging applications, we found the SEAforth 40C18 delivers advanced filtering capabilities at a fraction of the power consumed by other products available in the market today.”

Featuring the smallest core size design (0.13 mm²), the SEAforth chip consumes 28 times less power while running 240 times faster than competing architectures. The SEAforth 40C18 breaks the memory bottleneck by creating a RAM and ROM on each core. This enables individual cores to run at the full native speed of the silicon instead of being throttled down to a slower external system clock frequency. The automatic synchronisation feature between cores allows the processors to share the computing load by talking to each other to pass data, status signals and even code blocks. When individual CPUs are not active, they automatically shut down or sleep, consuming just 5.4 µW in leakage current until awakened.
http://www.euroasiasemiconductor.com/ne ... hp?id=9809

Posted: Fri Jan 23, 2009 9:36 pm
by Skipjack
Well, I think that cheap electricity will actually be an enabling technology. It will create new jobs, though some others will go away. One should also see that a transition would never be instantaneous. On the contrary, it would take decades. Cars are usually around for 10 years, some even longer. So even if electric cars became even more attractive, to the point that the average Joe's next car will also be an electric vehicle, some 15 years might pass. That is a long time for people to adjust to the new situation.
Anyway, as an enabling technology it would allow us to do and build things that we could not think of before. Existing things could be produced for less money, since part of the production cost is the energy required during production (e.g. turning bauxite into aluminium takes lots of energy). If anything, it will be beneficial for the economy, even in the mid term.

Posted: Sat Jan 24, 2009 3:24 pm
by jgarry
Our software sucks. Way too brittle.
Productivity didn't drop off in 2001, and Moore's law was still in effect until about 2005.
With greater storage you can do things you couldn't previously. I'm able to host a large db right at my desk, where previously I would have had to set up a server and tolerate network latency. The ability to manipulate larger and larger repositories of data is crucial, and possibly the main driver in productivity growth, because it enables superior management practices.
That, plus there's also the explosion in memory. In the next few years we're going to see solid-state hard drives, which will yield further efficiencies.
I would throw in the flat-screen monitor. My career would likely be over without the flatscreen. Much easier on the eyes, and it takes less counter space. Less counter space is a big deal in productivity, by the way.
I would say software is actually the determining factor, though. Our brains are far superior to machines, and I would speculate that this is because the software has been debugged for eons.

Posted: Sat Jan 24, 2009 3:41 pm
by MSimon
jgarry wrote:With greater storage you can do things you couldn't previously.
And if the code generally produced wasn't so kludgy, the extra resources would be unnecessary.

And it is not a matter of raw productivity. It is the ROI.

Posted: Sat Jan 24, 2009 7:49 pm
by hanelyp
MSimon wrote:Moore's law is up against the speed of light. The answer, of course, is smaller, less complex hardware. Pipelines? Branch predictors? Multilevel caches? Clocks?

We know how to get rid of them. Make processors smaller. And put 40 of them on a chip with technology that is way behind the leading edge. Operating power in milliwatts. Automatic shutdown of cores that have nothing to do.
Deep pipelines and superscalar operation, as used by modern CPUs, produce fantastic speed under ideal conditions, but they require horrid complexity and suffer pipeline stalls if conditions aren't handled just right.
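
To see how sensitive those pipelines are to "ideal conditions", here is a minimal self-contained C++ sketch (my own illustration, nothing from the chip docs): the identical loop runs markedly faster once the data is sorted, because the branch predictor stops guessing wrong and the pipeline stops flushing.

```cpp
// Same loop, same data: only the ordering changes, yet the sorted run
// is much faster because the branch becomes predictable.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

static long long sum_over_threshold(const std::vector<int>& v) {
    long long sum = 0;
    for (int x : v)
        if (x >= 128) sum += x;  // hard to predict on random data
    return sum;
}

int main() {
    std::vector<int> data(1 << 22);
    std::mt19937 rng(42);
    for (int& x : data) x = rng() % 256;

    auto time_it = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        volatile long long s = sum_over_threshold(data);
        (void)s;
        auto t1 = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<
            std::chrono::microseconds>(t1 - t0).count();
        std::printf("%-34s %lld us\n", label, us);
    };

    time_it("unsorted (frequent mispredicts):");
    std::sort(data.begin(), data.end());
    time_it("sorted (predictable branch):");
    return 0;
}
```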

Parallel processors also produce fantastic speed under ideal conditions, but they add complexity in both hardware (fairly minor) and software to make good use of them. Most programmers today, and many tools, simply don't know how to use them effectively.

In my own design of code for parallel processing algorithms, I've run into frequent cases where the algorithm needs to ensure exclusive access to a small patch of shared data for just long enough to update an accumulated value. Conventional OS-supported locks tend to take many times longer to operate than the update operation itself. Ideally there would be a single instruction to say something like "lock this paragraph until I write to it (or the lock times out)". Also, the algorithms I've personally worked on that would most benefit from parallel processing work heavily with vectors, something not commonly supported by general-purpose CPUs, and they would benefit from 128-bit and wider words.
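
What I'm describing is essentially a hardware compare-and-swap. As a rough C++ sketch of the idea (modern compilers expose the instruction through std::atomic; this is an illustration, not production code):

```cpp
// Protecting a small accumulated value with a hardware
// compare-and-swap loop instead of an OS-supported lock.
#include <atomic>
#include <thread>
#include <vector>

std::atomic<double> shared_sum{0.0};

void accumulate(double x) {
    double old = shared_sum.load(std::memory_order_relaxed);
    // compare_exchange_weak maps to the CPU's atomic compare-and-swap;
    // on failure it reloads 'old' and we simply retry.
    while (!shared_sum.compare_exchange_weak(old, old + x,
                                             std::memory_order_relaxed)) {
        // spin: another thread updated shared_sum first
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([] {
            for (int j = 0; j < 100000; ++j) accumulate(1.0);
        });
    for (auto& t : workers) t.join();
    return 0;  // shared_sum now holds 400000.0
}
```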

Unfortunately, hardware designed to avoid upcoming limits won't run most existing software any faster than current hardware, or even as fast in some cases.

Posted: Sat Jan 24, 2009 8:30 pm
by MSimon
hanelyp wrote:Parallel processors also produce fantastic speed under ideal conditions, but they add complexity in both hardware (fairly minor) and software to make good use of them. Most programmers today, and many tools, simply don't know how to use them effectively.
The software for the SEAforth chip seems to have been designed with parallel processing in mind.

http://www.intellasys.net/templates/tri ... sGuide.pdf

and

http://www.intellasys.net/index.php?opt ... &Itemid=68
VentureForth is the state of the art multicore programming language. It includes compilers for both Windows and Linux and a simulator for debugging.

The VentureForth language is used to program the SEAforth family of multicore processors. It contains low level primitives as well as the high level tools necessary to map programs across the array of cores in a SEAforth processor.

Programs compiled by VentureForth can be run on SEAforth hardware or in the simulator.

The simulator is a great way to debug sections of a project. It accurately simulates the stacks and registers of each core while stepping through a program. It provides shorthand representations of each core and more detailed representations of up to four chosen cores.
The simulator will help you to debug the really tough situations.
It solves the problems in a very slick way. You should have a look.

May I suggest the following papers as an overview?

http://www.intellasys.net/index.php?opt ... &Itemid=43

RF Processing Using SEAforth®
July 15, 2008 - Leslie O. Snively

"Natural Language" Programming of Multicore Computers for Control Engineers
May 23, 2008 - Greg Bailey

SEAforth® In Industrial Control and Sensing Applications
May 22, 2008 - Leslie O. Snively

Interactive Programming and Debugging of Embedded CPU Cores
April 3, 2008 - Greg Bailey

They run from about two to eight pages and none is a tough read.

Posted: Sat Jan 24, 2009 8:34 pm
by MSimon
hanelyp wrote:Conventional OS-supported locks tend to take many times longer to operate than the update operation itself. Ideally there would be a single instruction to say something like "lock this paragraph until I write to it (or the lock times out)".

Built into the hardware.

And yeah. The legacy software is stuck with the crap hardware we have (for the most part).

The way to handle that is to put a few extra chips on the motherboard to handle the transition.

Posted: Sat Jan 24, 2009 8:38 pm
by MSimon
http://www.intellasys.net/templates/tri ... 080408.pdf
Each core runs asynchronously, at the full native speed of the silicon. During interprocessor communication, synchronization happens automatically; the programmer doesn’t have to create synchronization methods. Communication happens between neighbors through dedicated ports. A core waiting for data from a neighbor goes to sleep, dissipating less than one microwatt. Likewise, a core sending data to a neighbor not ready to receive it goes to sleep until that neighbor accepts it.
A wake up occurs instantly upon the rising edge of the synchronizing signal. With the wake up logic controlling power use, there is no need for complex power control strategies. Power is conserved as a natural consequence of good program design. External I/O signals may also be used to wake up sleeping processors. The small size and low power make the SEAforth 40C18 a good value both in terms of MIPS per dollar and MIPS per milliwatt.

I/O ports on the SEAforth 40C18 are highly configurable because they are controlled by firmware. The 4‐wire SPI port, the 2‐wire serial ports, and the single‐bit GPIO ports can be programmed to perform a large variety of functions. With the available processing power, wireless solutions become possible without the need for separate wireless chips. Ports can be programmed to support I2C, I2S, asynchronous serial, or synchronous serial ports. Serial ports can also be used to connect multiple SEAforth S40C18s.

In addition to serial I/O, two nodes have dedicated parallel I/O ports. These can be used for parallel I/O, or when combined, can drive an external memory device.
Now, personally, I'd like to see deeper stacks and more onboard RAM plus FLASH. But this chip is not a bad start for a family of chips.
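
One way to picture the sleep-on-communication behavior in that excerpt is a blocking rendezvous channel. Here is a minimal C++ analogy (my own sketch, not SEAforth firmware): a core reading an empty port, or writing a full one, simply blocks until its neighbor is ready, so synchronization comes for free.

```cpp
// Two "cores" exchanging data through a dedicated port. A core that
// reads an empty port, or writes a full one, blocks ("sleeps") until
// its neighbor is ready - no explicit synchronization code needed.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <optional>
#include <thread>

class Port {
    std::mutex m;
    std::condition_variable cv;
    std::optional<int> slot;
public:
    void write(int v) {                 // sender sleeps until slot is free
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !slot.has_value(); });
        slot = v;
        cv.notify_all();                // wake a waiting reader
    }
    int read() {                        // reader sleeps until data arrives
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return slot.has_value(); });
        int v = *slot;
        slot.reset();
        cv.notify_all();                // wake a waiting writer
        return v;
    }
};

int main() {
    Port port;
    std::thread producer([&] { for (int i = 0; i < 5; ++i) port.write(i); });
    std::thread consumer([&] {
        for (int i = 0; i < 5; ++i) std::printf("%d\n", port.read());
    });
    producer.join();
    consumer.join();
    return 0;
}
```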