Ray Kurzweil, Cyberprophet or Crack-Pot?

Discuss life, the universe, and everything with other members of this site. Get to know your fellow polywell enthusiasts.

Moderators: tonybarry, MSimon

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Duane,

People without emotional appetites can't make decisions.

Appetites drive human learning. I want to feel good. I'm hungry. I'm cold. I'm hot.

I don't know how you handle the safeguard question.

Once we get there it probably will be like nothing we can imagine today.
Engineering is the art of making what you want from what you can get at a profit.

djolds1
Posts: 1296
Joined: Fri Jul 13, 2007 8:03 am

Post by djolds1 »

MSimon wrote:Duane,

People without emotional appetites can't make decisions.

Appetites drive human learning. I want to feel good. I'm hungry. I'm cold. I'm hot.
Not according to Hawkins, and he's smarter than either of us wrt information theory and neuroscience.
MSimon wrote:I don't know how you handle the safeguard question.
Frank Herbert called it the Butlerian Jihad.
MSimon wrote:Once we get there it probably will be like nothing we can imagine today.
The Hawkins model of AI is unlike 99% of our AI imaginings.

Duane
Vae Victis

JohnSmith
Posts: 161
Joined: Fri Aug 01, 2008 3:04 pm
Location: University

Post by JohnSmith »

djolds1 wrote: Actual AI is different from what sci-fi imagines. It does not require emotional impetus as commonly described. It can easily function as pure intellect without any form of ambition.

<Snip>

But not giving AI emotional appetite would be wise. AI with it is an existential threat to the human species. I'm all in favor of capable tools. Becoming pets or vermin to SAI (Sapient AI) is another thing.

Duane
My question is, how would you know if such an AI had actually become sentient? It definitely wouldn't pass the Turing test.

And another question, based on the little I've read about 'friendly AI': how can you reduce 'be nice' to math?

gblaze42
Posts: 227
Joined: Mon Jul 30, 2007 8:04 pm

Post by gblaze42 »

djolds1 wrote:
MSimon wrote:Duane,

People without emotional appetites can't make decisions.

Appetites drive human learning. I want to feel good. I'm hungry. I'm cold. I'm hot.
Not according to Hawkins, and he's smarter than either of us wrt information theory and neuroscience.
MSimon wrote:I don't know how you handle the safeguard question.
Frank Herbert called it the Butlerian Jihad.
MSimon wrote:Once we get there it probably will be like nothing we can imagine today.
The Hawkins model of AI is unlike 99% of our AI imaginings.

Duane
I have no doubt Hawkins is intelligent, probably much more so than I will ever be, but I don't believe this is about who is smartest. People often try to quantify things that aren't quantifiable, and in the process overlook an important piece of the puzzle.
Honestly, only by looking at the one true "hard" intelligence we know of, us, and seeing how it came about will we understand how "hard" AI can come about.
I think that's why we've created expert systems that are better than the experts and neural nets that learn faster than humans, but I think that's also why we seem to be missing true "hard" AI.

Just my opinion of course, I could easily be missing something.

djolds1
Posts: 1296
Joined: Fri Jul 13, 2007 8:03 am

Post by djolds1 »

JohnSmith wrote:My question is, how would you know if such an AI had actually become sentient? It definitely wouldn't pass the Turing test.


No, it wouldn't. But neither would a rat. And a rat is intelligent to a degree; sentient, though not sapient.

http://en.wikipedia.org/wiki/Memory-pre ... _framework
JohnSmith wrote:And another question, based off the little I've read about 'friendly AI.' How can you reduce 'be nice' to math?
You don't.

It might be possible to build the ur-fears and basic emotional responses into an artificial midbrain, but the specifics of "be nice" are too contingent. Even if neuroscience enables us to "fix" behaviorally deviant criminals (frex, rapists), we will probably only be mildly resetting the grossest impulses.

By cutting out the emotive midbrain you render the question moot. Neither nasty nor nice applies, only predictive intellect.
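For what a pure "predictive intellect" might look like, here is a toy sketch (illustrative only; a trivial first-order Markov predictor, nothing like Hawkins' actual memory-prediction architecture). It models its input stream and predicts, and there is no goal, drive, or appetite anywhere in it.

# A predictor with no wants: it only learns transition counts and
# guesses the likeliest next symbol.
from collections import Counter, defaultdict

class Predictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # symbol -> successor counts
        self.prev = None

    def observe(self, symbol):
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        counts = self.transitions.get(self.prev)
        return counts.most_common(1)[0][0] if counts else None

p = Predictor()
for s in "abcabcab":
    p.observe(s)
print(p.predict())  # 'c': it anticipates the pattern but wants nothing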

Duane
Vae Victis

Betruger
Posts: 2321
Joined: Tue May 06, 2008 11:54 am

Post by Betruger »

djolds1 wrote: Building the complex web of analogy associations and templates in the cortex, associations and templates that make up an adult mind, takes TIME.
Wouldn't an AI take far less time to do this, from our own time perspective, considering how fast the substrate it'd be running on would let it think?
MSimon wrote:I don't know how you handle the safeguard question.

Once we get there it probably will be like nothing we can imagine today.
Why is a simple piecewise safeguard like Asimov's rules no good?

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Duane,

Emotions and decision making:

http://changingminds.org/explanations/e ... cision.htm
Neuroscientist Antonio Damasio studied people who had received brain injuries that had had one specific effect: to damage that part of the brain where emotions are generated. In all other respects they seemed normal - they just lost the ability to feel emotions.

The interesting thing he found was that their ability to make decisions was seriously impaired. They could logically describe what they should be doing, but in practice they found it very difficult to make decisions about where to live, what to eat, etc.


http://www.usatoday.com/tech/science/di ... tudy_x.htm
The evidence has been piling up throughout history, and now neuroscientists have proved it's true: The brain's wiring emphatically relies on emotion over intellect in decision-making.

A brain-imaging study reported in the current Science examines "framing," a hot topic among psychologists, economists and political hucksters.

Framing studies have shown that how a question is posed — think negative ads, for instance — skews decision-making. But no one showed exactly how this effect worked in the human brain until the brain-imaging study led by Benedetto De Martino of University College London.
In fact, people who lack emotions because of brain injuries often have difficulty making decisions at all, notes Damasio. The brain stores emotional memories of past decisions, and those are what drive people's choices in life, he suggests. "What makes you and me 'rational' is not suppressing our emotions, but tempering them in a positive way," he says.
http://paul-baxter.blogspot.com/2007/02 ... aking.html
Notes on "Emotion, Decision making and the Orbitofrontal Cortex", A. Bechara, H. Damasio, A. Damasio (2000), Cerebral Cortex, vol 10, p295-307.

The Somatic marker hypothesis (defined by Damasio, 1994 and 1996) says that a defect in emotion and feeling has a detrimental effect on decision making - it also proposes a number of brain structures thought to underlie this effect. Emotions in this theory are defined to be 'somatic states' as they are said to be primarily represented in the brain by "transient changes in the activity pattern of somatosensory structures" ('somatic' essentially refers to the internal environment). This paper looks at this theory, focusing particularly on the role of the orbitofrontal cortex and its interaction with the emotion regions of the brain, in addition to a discussion of the relationship between these two distinct functions (decision making and emotion) and the cognitive function of working memory.
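A crude way to see the somatic-marker claim in action (a toy Python model, purely illustrative and not Damasio's actual formalism): give each option a reasoned score plus a learned emotional "marker". Zero out the markers, as a lesion would, and logically equivalent options become indistinguishable, so the agent stalls.

# Toy somatic-marker model: reason scores options, emotion breaks ties.
def choose(options, emotion_gain=1.0):
    scored = [(o["reason"] + emotion_gain * o["marker"], o["name"])
              for o in options]
    scored.sort(reverse=True)
    if scored[0][0] == scored[1][0]:
        return None  # no felt preference -> indecision
    return scored[0][1]

options = [
    {"name": "apartment A", "reason": 5.0, "marker": +1.2},
    {"name": "apartment B", "reason": 5.0, "marker": -0.4},
]
print(choose(options))                  # 'apartment A': emotion breaks the tie
print(choose(options, emotion_gain=0))  # None: the "lesioned" agent can't decide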
That should be enough to get you started. Hawkins may be too smart by half; i.e., outside his narrow interests there is a LOT of ignorance.

Which is why I have a lot of scepticism about experts.

For instance, I came to my conclusions about drug use, brain chemistry, genetics, and trauma several years before the NIDA came even halfway towards my position. I see patterns invisible to others. It is due to my mild schizophrenia. OTOH I'm prone to wildly off-base speculations. You have to watch me carefully. OTOH I'm never dull. :-)

Luckily I am reasonably well trained in the scientific method which keeps me away from zero point energy etc. and until recently made me a sceptic of Cold Fusion.
Last edited by MSimon on Sun Aug 17, 2008 7:14 pm, edited 1 time in total.
Engineering is the art of making what you want from what you can get at a profit.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Betruger wrote:
djolds1 wrote: Building the complex web of analogy associations and templates in the cortex, associations and templates that make up an adult mind, takes TIME.
Wouldn't an AI take far less time to do this, from our own time perspective, considering how fast the substrate it'd be running on would let it think?
MSimon wrote:I don't know how you handle the safeguard question.

Once we get there it probably will be like nothing we can imagine today.
Why is a simple piecewise safeguard like Asimov's rules no good?
If it were really good it wouldn't have been the basis for so many stories. The trouble is that rules conflict; a toy illustration follows.
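To make "rules conflict" concrete, here is a deliberately naive Python encoding of the Three Laws as flat vetoes (illustrative only; in the stories the laws carry a priority ordering, and the plots turn on the cases where even that fails). In a dilemma, every available action trips some veto and the piecewise safeguard simply freezes:

# Three Laws as flat vetoes; a dilemma leaves no permitted action.
def permitted(action, ordered):
    if action["harms_human"]:                  # First Law
        return False
    if ordered and not action["obeys_order"]:  # Second Law
        return False
    if action["harms_self"]:                   # Third Law
        return False
    return True

actions = [
    {"name": "obey order",    "obeys_order": True,  "harms_human": True,  "harms_self": False},
    {"name": "refuse",        "obeys_order": False, "harms_human": False, "harms_self": False},
    {"name": "self-destruct", "obeys_order": False, "harms_human": False, "harms_self": True},
]
print([a["name"] for a in actions if permitted(a, ordered=True)])  # [] ... deadlock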
Engineering is the art of making what you want from what you can get at a profit.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

I think the discussion argues for a human in the loop. No autonomous systems.

AI keeps the aircraft in the air. A human has to pull the trigger.
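A minimal sketch of that split (illustrative Python; the names here are made up, not any real avionics API): the machine may fly itself all it likes, but a lethal action requires an explicit, one-shot human confirmation.

# Human-in-the-loop gate: autonomy for flight, a hard gate on the trigger.
class FireControl:
    def __init__(self):
        self._armed_by_human = False

    def human_confirm(self):
        # Only a human operator calls this, e.g. via a physical switch.
        self._armed_by_human = True

    def release_weapon(self):
        if not self._armed_by_human:
            raise PermissionError("no human in the loop; release refused")
        self._armed_by_human = False  # one confirmation per shot
        print("weapon released")

fc = FireControl()
try:
    fc.release_weapon()   # autonomous request: refused
except PermissionError as e:
    print(e)
fc.human_confirm()        # the human pulls the trigger
fc.release_weapon()       # now allowed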
Engineering is the art of making what you want from what you can get at a profit.

zbarlici
Posts: 247
Joined: Tue Jul 17, 2007 2:23 am
Location: winnipeg, canada

Post by zbarlici »

Check this out.

An electronic-organic brain.

http://www.physorg.com/news137852322.html


Check out the video embedded within the article... can you imagine how eerie it'll be when we learn to breed complex networks of neurons in that way, and teach these biological brains to learn speech... shudder. :?


but also AWESOME!!!!!!!!!!!!

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

It'll be a while. I'm working with a group trying to get 1000 electrodes to stimulate nerves which go to the brain via the tongue, and we'd like to do 100,000. Using a biological brain to understand electronic sensors makes more sense than trying to build a brain from scratch. It's kinda hard to beat 2 billion years of evolution.

JohnSmith
Posts: 161
Joined: Fri Aug 01, 2008 3:04 pm
Location: University

Post by JohnSmith »

Except that using a biological system as a shortcut brings back the problem of control, doesn't it?

I just had a thought, though. I wonder how much instinct is left in cultivated neurons and neural networks? If there's plenty left, maybe using a dog's neurons would be a good idea. Some breeds are insanely loyal, even after they've grown to the point that they could easily kill the owner.

JoeStrout
Site Admin
Posts: 284
Joined: Tue Jun 26, 2007 7:40 pm
Location: Fort Collins, CO, USA
Contact:

Post by JoeStrout »

MSimon wrote:
JoeStrout wrote:Sure, but artificial senses are easy. It's doing something intelligent with all that data that is hard.
Joe, you left out emotion which seems to be tied in to intelligence. People without emotion can't make decisions. It is in the literature.
Yes, but artificial emotions are pretty easy too. Again, the hard part is doing something sensible about whatever you're feeling (and sensing).
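In the trivial sense I mean, "emotions" can be as little as a few decaying state variables that events push around (a toy sketch, nothing more); what to sensibly DO about them is the part this leaves wide open.

# "Easy" artificial emotions: scalar drives that spike and decay.
class Emotions:
    def __init__(self):
        self.state = {"fear": 0.0, "comfort": 0.5, "curiosity": 0.5}

    def feel(self, event, intensity):
        self.state[event] = min(1.0, self.state[event] + intensity)

    def decay(self, rate=0.1):
        for k in self.state:
            self.state[k] *= 1.0 - rate

e = Emotions()
e.feel("fear", 0.8)
e.decay()
print(e.state)  # fear spiked and is drifting back; acting on it is the hard part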

Best,
- Joe
Joe Strout
Talk-Polywell.org site administrator

JoeStrout
Site Admin
Posts: 284
Joined: Tue Jun 26, 2007 7:40 pm
Location: Fort Collins, CO, USA
Contact:

Post by JoeStrout »

MSimon wrote:I think the discussion argues for a human in the loop. No autonomous systems.

AI keeps the aircraft in the air. A human has to pull the trigger.
I tend to agree — unfortunately, there are already places (Korea IIRC) where armed bots are given complete autonomy. That seems like an insanely bad idea to me... quite apart from the Skynet scenario, simple software bugs make it a bad idea to give any machine more destructive power than absolutely necessary. (A CNC machine will cheerfully drill your hand if you're stupid enough to get in the way, but at least it's not mobile, and lacks any projectile weapons.)

When it comes to building truly intelligent, Turing-level machines, there should always be some dumb, infallible, physical way to terminate them — like yanking the plug. But the concern is that if the AI is very much smarter than we are, it will be able to manipulate us into finding and removing any such Achilles' heel for it, and we won't realize what it's doing until it's too late.

Still, I'm reasonably optimistic that such doomsday AI scenarios won't come to pass — from-scratch AI has proven nicely difficult, and uploaded minds won't be any smarter than the originals (they may be able to run faster as hardware continues to improve, but that's a minor advantage). And uploaded people will really be "us" rather than "them" anyway.

Best,
- Joe
Joe Strout
Talk-Polywell.org site administrator

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Joe,

I like your points. Good food for thought.
Engineering is the art of making what you want from what you can get at a profit.
