Ray Kurzweil, Cyberprophet or Crack-Pot?

Discuss life, the universe, and everything with other members of this site. Get to know your fellow polywell enthusiasts.

Moderators: tonybarry, MSimon

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

The problem is that the ME you are is dependent on your environment. We are not a unitary "I"; that is an illusion. We are a collection of "I"s, each called up depending on circumstances. See "The Society of Mind" by Marvin Minsky, or, for a cruder treatment that predates it by decades, Ouspensky's writings on the philosophy of Gurdjieff. The idea is also delineated in the Arica System. BTW, Ouspensky's work appears to contain a lot of crackpottery; however, the multiple I's correspond to Minsky's picture.

When the I's do not coordinate with each other well, you get the multiple-personality problem.

So the question of who is "Minsky" can only be answered by "when?" and "where?".

This corresponds to the problem of domains in AI.
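Loosely, in software terms, that "collection of I's" behaves like a dispatcher that calls up a different specialist depending on context. A toy sketch, with names and rules invented purely for illustration (nothing here is from Minsky):

Code:
# Toy sketch: a "society of I's" as context-keyed agents.
# The agents and their responses below are invented for illustration only.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond

# Each "I" is a small specialist, called up depending on circumstances.
AGENTS = {
    "traffic": Agent("crossing-the-street I", lambda event: "stop, look, wait for the gap"),
    "social":  Agent("friend I",              lambda event: "relax, crack a joke"),
    "work":    Agent("engineer I",            lambda event: "estimate, then commit"),
}

def dispatch(context, event):
    """Call up whichever 'I' the current context selects; no unitary self required."""
    agent = AGENTS.get(context, AGENTS["work"])
    return f"{agent.name}: {agent.respond(event)}"

if __name__ == "__main__":
    print(dispatch("traffic", "light turns green"))
    print(dispatch("social", "old friend waves"))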
Engineering is the art of making what you want from what you can get at a profit.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

BTW most people ARE zombies and don't know it, i.e. they follow internal programs without much thought or interaction with their environment. Heaven help us if the "in love" I takes over when we need the "crossing the street" I.

As the Zen guys have been telling us for millennia: it is hard to be awake.
Engineering is the art of making what you want from what you can get at a profit.

TallDave
Posts: 3141
Joined: Wed Jul 25, 2007 7:12 pm
Contact:

Post by TallDave »

Even if two people were physically identical and experienced the exact same things their entire lives, they would still be two different selves
True, it's not physically possible for them to be the same self: the Pauli exclusion principle prevents them from having exactly the same perspective, and the uncertainty principle says they wouldn't have the same experience anyway.

What might just barely be possible is to create an exact copy in a different location -- which would cease to be the same person at that instant.
All that could conceivably happen without the actual experiencing part.
It IS the experiencing part. Experience is the process of assimilating data.
that there is no conceptual distinction between the operations of the brain and the actual awareness that accompanies them.
Again, unless you're resorting to mysticism, I don't see how this is even arguable. Where else would it arise from, if not from the brain's operation?
If you're talking about something complex enough to imitate human consciousness --- I'm not. Human consciousness cannot be "imitated" because it is not phenomenological - there's nothing to imitate. Human BEHAVIOUR, on the other hand...
Fine, then, behavior, the outward evidence of consciousness. But don't you see that still requires something as complex, if not more complex (because it knows it's only imitating), as a human consciousness, otherwise we could easily distinguish the two?

And in any case consciousness IS a phenomenon -- a process of massive information processing while juggling large numbers of competing compulsions. Saying this is "not phenomenological" is mysticism. This is what most people who argue against mind uploading do: they make an assertion of mystical properties. Mystical assertions aren't falsifiable, so that generally ends the argument. It's just like saying "things are like that because God made them that way" rather than exploring the physics.

It's also just plain inaccurate. One might just as well argue that the process of computers executing instructions is not phenomenological, and only the "behavior" of the output on the screen is.
How do you know any of those are conscious? Even the cat? I'm pretty sure a thermostat is NOT conscious; it's just a thing, not a self.
You don't, but a cat demonstrates behaviors that would tend to indicate a level of consciousness, just as you do. A fly shows fewer of them, a bacterium fewer still, and a thermostat exhibits one of the simplest possible reactions to its environment. Does a fly have a "self"? My cats are certainly aware of themselves, in a limited fashion. Thus again we see that self-awareness appears to arise from higher complexity of information processing (a necessary but not a sufficient condition).
Uh... surely I don't need to point out that people are indeed often (maybe always) conscious while sleeping.
It doesn't matter if they're conscious 99.99% of the time. As long as we agree there are circumstances in which you are not conscious (even just periods of being knocked unconscious through physical trauma), then we agree your identity has temporal gaps.
In any case, the important thing is that you wake up the next morning with your selfhood intact.
But you don't. If you experienced a dream, you wake up as a slightly different person. If your unconsciousness was the result of head trauma, you might wake up a VERY different person, with altered compulsions and priorities and poorer information processing. None of this is any different from being destroyed and recreated, as far as self is concerned. We like to think it's different, because we carry a compulsion for self-preservation, but that's just vanity.
Okay. What good does that do ME?
Very little, or a lot, depending on whether you understand yourself as a physical construct or a pattern of memories and heuristics. The "you" that wrote this has already ceased to exist, though a very similar copy persists as the "you" of now.

Here's the rub: if we make a copy of that "you" and instantiate him now, he is just as much that former "you" as your current physical self is. In fact, probably more so.

Identity is fleeting. Again, it's only our programming that assigns importance to the physical construct and the illusion of continuity. We are a lonely series of selves that dies again and again, but that is the stark reality we inhabit.
I personally think instantaneous identity is akin to solipsism in terms of philosophical usefulness, as well as in likelihood of being correct. It's certainly not scientific, if that's what you're after...
It's quite scientific. It's the most accurate description of reality.
Last edited by TallDave on Wed Aug 20, 2008 5:32 pm, edited 5 times in total.
n*kBolt*Te = B**2/(2*mu0) and B^.25 loss scaling? Or not so much? Hopefully we'll know soon...

gblaze42
Posts: 227
Joined: Mon Jul 30, 2007 8:04 pm

Post by gblaze42 »

MSimon wrote:BTW most people ARE zombies and don't know it, i.e. they follow internal programs without much thought or interaction with their environment. Heaven help us if the "in love" I takes over when we need the "crossing the street" I.

As the Zen guys have been telling us for millennia: it is hard to be awake.
You are so correct, Simon! This is why I think "self-aware" is going to be so tough; most people are not fully aware of their own actions. How can we make an AI self-aware?

TallDave
Posts: 3141
Joined: Wed Jul 25, 2007 7:12 pm
Contact:

Post by TallDave »

Self-awareness implies one of two things: knowledge of our physical existence, or knowledge of our own heuristics. Most mammals seem to grasp the former, quite a few people seem to lack the latter. Both are conceptually easy to program.
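To make "conceptually easy" concrete, here is a toy sketch (everything in it is invented for illustration) of the crudest possible versions of those two kinds of self-knowledge:

Code:
# Toy sketch: the two senses of "self-awareness" above, in minimal form.
import inspect
import platform

class ToyAgent:
    def prefer_shorter(self, options):
        """One of the agent's 'heuristics': always pick the shortest option."""
        return min(options, key=len)

    def physical_self_report(self):
        # Knowledge of physical existence: what hardware/OS am I running on?
        return {"machine": platform.machine(), "system": platform.system()}

    def heuristic_self_report(self):
        # Knowledge of own heuristics: the agent can read its own decision rule.
        return inspect.getsource(self.prefer_shorter)

if __name__ == "__main__":
    agent = ToyAgent()
    print(agent.physical_self_report())
    print(agent.heuristic_self_report())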
n*kBolt*Te = B**2/(2*mu0) and B^.25 loss scaling? Or not so much? Hopefully we'll know soon...

gblaze42
Posts: 227
Joined: Mon Jul 30, 2007 8:04 pm

Post by gblaze42 »

TallDave wrote:Self-awareness implies one of two things: knowledge of our physical existence, or knowledge of our own heuristics. Most mammals seem to grasp the former, quite a few people seem to lack the latter. Both are conceptually easy to program.
It is? If it were easy, why aren't there self-aware computers?

TallDave
Posts: 3141
Joined: Wed Jul 25, 2007 7:12 pm
Contact:

Post by TallDave »

It's conceptually easy. Actually coding a high level of self-awareness would require a large amount of programming effort, processing power, and input devices with high throughput.

Computers do already monitor and modify their own code to some extent. Anti-virus programs could be said to be a primitive form of computer self-awareness, sort of a simplistic imitation of a person recognizing and correcting a bad habit.

And anyone who has plugged in a plug-and-play USB device has seen a computer demonstrate a primitive physical self-awareness.

The problem with self-awareness as usually defined is that it presupposes a state of being that includes the huge mass of competing compulsions and sensory input we humans are constantly balancing and figuring out ways to satisfy, a task a few orders of magnitude more complex than any computer around today can handle, even before we talk about how we try to reprogram ourselves. Computers today generally have relatively few well-defined tasks to perform and limited inputs.

In 20 years or so, though...
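In the meantime, here is a toy sketch (invented for illustration, at a laughably small scale) of the "competing compulsions" arbitration described above:

Code:
# Toy sketch: many drives with shifting urgencies, one winner per tick.
import random

DRIVES = ["eat", "sleep", "work", "socialize", "avoid the oncoming bus"]

def arbitrate(urgencies):
    """Act on the most urgent drive right now; everything else waits."""
    return max(urgencies, key=urgencies.get)

if __name__ == "__main__":
    for tick in range(3):
        urgencies = {drive: random.random() for drive in DRIVES}
        # Safety-critical compulsions get a standing boost.
        urgencies["avoid the oncoming bus"] *= 2.0
        print(f"t={tick}: acting on '{arbitrate(urgencies)}'")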
n*kBolt*Te = B**2/(2*mu0) and B^.25 loss scaling? Or not so much? Hopefully we'll know soon...

kurt9
Posts: 589
Joined: Mon Oct 15, 2007 4:14 pm
Location: Portland, Oregon, USA

Post by kurt9 »

TallDave wrote:It's conceptually easy. Actually coding a high level of self-awareness would require a large amount of programming effort, processing power, and input devices with high throughput.

Computers do already monitor and modify their own code to some extent. Anti-virus programs could be said to be a primitive form of computer self-awareness, sort of a simplistic imitation of a person recognizing and correcting a bad habit.

And anyone who has plugged in a plug-and-play USB device has seen a computer demonstrate a primitive physical self-awareness.

The problem with self-awareness as usually defined is that it presupposes a state of being that includes the huge mass of competing compulsions and sensory input we humans are constantly balancing and figuring out ways to satisfy, a task a few orders of magnitude more complex than any computer around today can handle, even before we talk about how we try to reprogram ourselves. Computers today generally have relatively few well-defined tasks to perform and limited inputs.

In 20 years or so, though...
Is it not likely that machine sentience will represent a sort of limit on the computational capability of the system? After all, much of the computational resources of such a system would be used to maintain a sense of self rather than for computational tasks.

ZenDraken
Posts: 22
Joined: Wed Feb 20, 2008 5:14 pm
Location: Pacific NW

Post by ZenDraken »

TallDave wrote:It doesn't matter if they're conscious 99.99% of the time. As long as we agree there are circumstances in which you are not conscious (even just periods of being knocked unconscious through physical trauma), then we agree your identity has temporal gaps.
Every instant we are a slightly different person, and this is a *requirement* for consciousness. If our brains were perfectly static, we would be as conscious as a stone. This may be why Buddhists say "self" is an illusion: "self" is continuously vanishing and being replaced by another.

Consciousness is fundamentally dynamic; there is a sense of constant movement through time. This fourth-dimensional movement of consciousness is profoundly important to understanding both consciousness and time. We can't escape time, but we can "skip ahead", as when we sleep. Sometimes time seems to move slower or faster, depending on our state of consciousness. In any event, consciousness is clearly bound to time in some way. Time itself might be a phenomenon of consciousness, but I can hardly prove it.

I wanted to throw another possibility out there: The typical assumption is that the mind is fully and completely realized by the activity of the brain. This is reasonable but it assumes that there can be nothing external to the physical brain. Consider this alternative: the brain is a massively fractal antenna, picking up consciousness from somewhere else (another dimension?). Granted, this just puts the problem elsewhere, but it points out that there are alternatives to the brain-only model.

This may all sound "mystical", but it's no more mystical than strings, branes, or Loop Quantum Gravity.

JohnSmith
Posts: 161
Joined: Fri Aug 01, 2008 3:04 pm
Location: University

Post by JohnSmith »

Heh, I've heard the antenna theory before. While it does tend to sound like mysticism, I'd love it if it were true. All kinds of interesting possibilities to fiddle with. Could you build a brain jammer?

I think the 'Prestige' argument is the best one. Questions of identity aside, one of you will die. I'm guessing, since you did upload your brain, you were worried about that. What if it's you?

I can't understand why you wouldn't try for continuous instead. Gradually replace biological brain function with computer, until you're almost entirely computer based. It avoids the problem entirely.

TallDave
Posts: 3141
Joined: Wed Jul 25, 2007 7:12 pm
Contact:

Post by TallDave »

kurt9 wrote:Is it not likely that machine sentience will represent a sort of limit on the computational capability of the system? After all, much of the computational resources of such a system would be used to maintain a sense of self rather than for computational tasks.
I don't know if those are necessarily separate functions. Consider ourselves: most of our brain is busy doing computational tasks like visual/aural/taste/smell processing, keeping us from falling over, thinking about sex, worrying about taxes, etc. All of those things are components of self; a relatively small part of the brain spends time thinking about our self's identity.
n*kBolt*Te = B**2/(2*mu0) and B^.25 loss scaling? Or not so much? Hopefully we'll know soon...

TallDave
Posts: 3141
Joined: Wed Jul 25, 2007 7:12 pm
Contact:

Post by TallDave »

I think the 'Prestige' argument is the best one. Questions of identity aside, one of you will die. I'm guessing, since you did upload your brain, you were worried about that. What if it's you?
That used to bother me a lot, until I realized the "I" of now isn't persistent anyway.
I can't understand why you wouldn't try for continuous instead. Gradually replace biological brain function with computer, until you're almost entirely computer based. It avoids the problem entirely.
Nothing wrong with that. I think that one should, if at all possible, avoid the Prestige situation by only instantiating copies that will not experience "lost time" or "about to die" time; i.e., if they are destroyed they must be re-instantiated with exactly the state and memories of their last instant. To do otherwise would be unethical.
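As a toy illustration of that "no lost time" rule (the file name and state fields below are invented), a copy is only ever brought up from a checkpoint of its last instant:

Code:
# Toy sketch: checkpoint/restore so a re-instantiated copy resumes with
# exactly the state and memories of its last saved instant.
import json

def checkpoint(state, path="last_instant.json"):
    """Persist the agent's full state at this instant."""
    with open(path, "w") as f:
        json.dump(state, f)

def reinstantiate(path="last_instant.json"):
    """A new copy resumes from exactly the last saved state."""
    with open(path) as f:
        return json.load(f)

if __name__ == "__main__":
    checkpoint({"memories": ["wrote this post"], "tick": 42})
    copy = reinstantiate()
    assert copy["tick"] == 42  # no gap between destruction and re-instantiation
    print(copy)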
n*kBolt*Te = B**2/(2*mu0) and B^.25 loss scaling? Or not so much? Hopefully we'll know soon...

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Every instant we are a slightly different person, and this is a *requirement* for consciousness. If our brains were perfectly static, we would be as conscious as a stone. This may be why Buddhists say "self" is an illusion: "self" is continuously vanishing and being replaced by another.
No. What the Buddhists mean is that we are a collection of programs triggered by events. i.e. there is no gentle flow. There are big jumps. Sometimes very big jumps.

The "cure" for this is to install a "watcher" that can follow the change in programs. Once the watcher is functioning then you can get a "controller" that can remove the calling from the automatic to the autonomous.

Personal example: I used to get frightened (due to some unfortunate experiences) when someone would come up behind me on my right side. It didn't happen on the left or in the center. First I cultivated "awareness"; then, once I was aware, I learned control. Now there has to be real menace in the movement for fear to manifest.

Minsky's "Societies of the Mind" is really a critical read. It really explains more than any other theory of how consciousness operates.

If you study another person very carefully you can notice the changes. The body will be held differently, certain facial expressions will manifest at the change, etc. Poker players call these "tells". A good salesperson can learn your tells quickly and move you to the buy decision, which is one reason we prefer to have help on the sales floor and not salesmen.

It is also the reason that with large purchases you are given three days to reconsider what you have done. Did you really want to buy, or was the salesperson manipulating you through observation of your tells?

What the Buddhists suggest is that you learn your own tells. Also learn your major personalities. There are usually two or three that determine your behavior under most circumstances.

There is the business personality, the sales personality, the home personality, the friend personality, the father personality, the son personality, etc. Each responds differently to environmental cues.

Edit:

On further examination, it was someone approaching from the left rear. Not that it matters, but I prefer to get the facts straight.
Last edited by MSimon on Thu Aug 21, 2008 9:02 pm, edited 1 time in total.
Engineering is the art of making what you want from what you can get at a profit.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

The "lost time" problem happens when the different personalities do not communicate with each other. This is a dysfunction. i.e. you can't remember what happened to personality A when you are in personality B.

The Buddhists also call this "going around the mountain". The Gurdjieff school calls it a lack of will, i.e. the inability to maintain intention as the various personalities come up. So you wind up with a series of intentions and the effort is not unified; often the intentions are contradictory. Thus "going around the mountain".

Since engineers have to maintain intention to accomplish something, engineering is a very good corrective for the problem.
Engineering is the art of making what you want from what you can get at a profit.

JoeStrout
Site Admin
Posts: 284
Joined: Tue Jun 26, 2007 7:40 pm
Location: Fort Collins, CO, USA
Contact:

Post by JoeStrout »

93143 wrote:I've never really understood why people are so excited about "mind uploading". I mean, so you upload your exact neural network pattern into a machine capable of emulating its function. Great. There is now an exact functional copy of your brain, which is effectively immortal.

So what? Is that really 'you' or is it just somebody else who thinks he is?
It's really "you". Personal identity is defined by the contents of your mind, including all your memories, personality traits, etc. If you make an exact functional duplicate of the mind, then you have duplicated the identity. This is "weird" at first because it's never been possible to duplicate a person in the past, so we're used to equating identity with physical body. But philosophically, that argument doesn't hold water; it breaks down under even casual examination. An exact copy of you really is "you", and we'll quickly get used to that once it becomes possible to do.
Joe Strout
Talk-Polywell.org site administrator
