Ray Kurzweil, Cyberprophet or Crack-Pot?

JohnP
Posts: 296
Joined: Mon Jul 09, 2007 3:29 am
Location: Chicago

Ray Kurzweil, Cyberprophet or Crack-Pot?

Post by JohnP »

This is in response to a couple of posts in Implications that I thought should get a reply here in General.

Anyway, Ray K. makes much of extrapolating exponential & geometric trendlines, especially relating to computer hardware. All right, that's fun. But then he goes into the whole AI thing and I have to roll my eyes.
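Just so we're all talking about the same thing, here is roughly what that kind of extrapolation amounts to. This is a minimal sketch assuming a fixed two-year doubling time and a round 2008 starting figure, which are my placeholders, not his actual curve fits:

```python
# Toy extrapolation of an exponential hardware trend.
# The doubling time and the 2008 starting point are assumptions for
# illustration; Kurzweil fits historical price/performance data instead.

def extrapolate(value_now, year_now, year_then, doubling_time_years=2.0):
    """Project a quantity forward assuming steady exponential growth."""
    return value_now * 2 ** ((year_then - year_now) / doubling_time_years)

# e.g. ~1e9 transistors per chip in 2008 (rough figure), projected forward:
for year in (2010, 2020, 2030):
    print(year, f"{extrapolate(1e9, 2008, year):.1e}")
```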

I've long been at the point where I believe AI is so much gee-whiz baloney. And let's get our definition clear here: strong AI is a generalized, flexible thing you could have a conversation with, and which is way smarter than you are, and which can solve real problems in creative ways.

Sure, the hardware is getting better. But that's irrelevant. If you had the ideas right, you could demonstrate strong AI today, albeit maybe at molasses-in-January speed. Has anyone seen anything like that? I haven't. The notion that strong AI depends on fast hardware is bunk.

So, the theoretical framework is missing a lot of pieces. Why? Smart people have been banging on this for decades and at great expense. Like, um, tokamaks.

All I'm saying is, the fundamentals of AI have a long, long way to go. Till then, Ray K and his like are blowing smoke with their predictions.

JohnSmith
Posts: 161
Joined: Fri Aug 01, 2008 3:04 pm
Location: University

Post by JohnSmith »

Well, I'm gonna have to play devil's advocate here.

The Singularity doesn't rely on AI, though AI would speed things up. All the pro-AI arguments aside, Moore's law does have an application in genetic engineering: faster computers make simulation a lot easier. We're just starting to see useful engineered organisms, and I don't doubt that somebody will start doing experiments on 'improving' the human genome soon. So it's certainly not out of the question that we will soon have lots of very, very intelligent people, which produces the same runaway as smart AI.

And now back to our regular programming

2edfe9
Posts: 11
Joined: Mon Jun 23, 2008 2:54 pm
Location: Vancouver, BC

Post by 2edfe9 »

One of the most interesting arguments on the software side is that we don't have to understand how intelligence works. All we have to do is run a simulation of the human brain on a powerful enough computer. You don't have to understand something in order to copy it.

Ray K has published some estimates that medical imaging technology should, by 2020 (I think that was the date), reach the point where it can image the brain in enough detail for us to produce such a simulation. The scans would give us the high-level structure and topography, and we already have a pretty good picture of how neurons behave at the cellular level. Put those together, wire the result into a virtual, computer-generated environment, and you *might* just have your mindseed.
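To make the "we already have a pretty good picture of how neurons behave" part concrete, the workhorse abstraction in this kind of simulation work is something like a leaky integrate-and-fire unit. Here's a minimal sketch of that textbook model; the parameter values are generic placeholders, nothing specific to Kurzweil's proposal:

```python
# Minimal leaky integrate-and-fire neuron, Euler-stepped.
# A standard textbook simplification of cellular-level behaviour; brain
# emulation proposals assume huge networks of units like this (or richer
# compartmental models) wired together from scan data.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Return spike times (in seconds) for a leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) / tau      # leak + driving current
        v += dv * dt
        if v >= v_thresh:                            # threshold crossed: spike
            spikes.append(step * dt)
            v = v_reset                              # reset membrane potential
    return spikes

# Half a second of a constant 2 nA drive produces a regular spike train:
print(simulate_lif([2e-9] * 5000))
```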
Patrick A

JohnSmith
Posts: 161
Joined: Fri Aug 01, 2008 3:04 pm
Location: University

Post by JohnSmith »

Well, two problems in my mind.

1) It's a human mind. So bravo and well done to the researchers, but not what we wanted. We can make human brains already, and they're much less expensive.
2) If I'm wrong about 1), ye gods, what have you done! The last thing that we ever want to do is make a single human vastly more intelligent than everyone else. The friendly AI people are pretty crazy, but they're right to make sure the AI wants to be nice.
The consensus seems to be that people are not nice.

kurt9
Posts: 588
Joined: Mon Oct 15, 2007 4:14 pm
Location: Portland, Oregon, USA

Post by kurt9 »

I work in the semiconductor industry.

Conventional fabrication technology will reach its limits somewhere in the 10nm to 20nm design regime. Various materials and fabrication technologies are being developed to carry scaling beyond the 10nm limit, including carbon nanotubes, graphene sheets, and protein-based self-assembled molecular electronics. One or more of these technologies will come on line around 2020 and will reach the molecular level around 2030.

I believe the molecular limit (reached around 2030) represents an absolute limit for electronics. I do not see any way of going beyond the molecular level (multiple bits per atom?), although I could be wrong. Thus, Moore's Law will end around 2030. The projections that Kurzweil makes beyond this point do not make much sense to me.
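For what it's worth, here is the back-of-envelope version of that timeline. The starting size, shrink factor, and node cadence below are my placeholders rather than an industry roadmap; with these particular numbers the molecular level lands in the mid-2030s, the same ballpark as the estimate above, and single-atom spacing about a decade later:

```python
# Back-of-envelope: how long until feature sizes hit molecular/atomic scale?
# Starting size, shrink factor, and cadence are illustrative assumptions only.

import math

def years_to_reach(start_nm, target_nm, shrink_per_node=0.7, years_per_node=2.0):
    """Years for the linear feature size to shrink from start_nm to target_nm."""
    nodes = math.log(target_nm / start_nm) / math.log(shrink_per_node)
    return nodes * years_per_node

# From ~20 nm (assumed around 2020) down to molecular (~1 nm) and atomic (~0.3 nm):
print(round(2020 + years_to_reach(20, 1.0)))   # roughly the mid-2030s
print(round(2020 + years_to_reach(20, 0.3)))   # single-atom spacing, ~mid-2040s
```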

Sentient A.I. is another issue. We will soon have computers that exceed the computational capability of the human brain. We may already have them. However, their architecture is totally different from that of our brains. It is not clear if we will change the design of future computers to make them more like our brains. It is even less likely that there would be any benefit in doing so.

We manufacture and use computers as tools, to do things that our brains cannot do. In other words, computers are used as extensions of ourselves. In this, computers are clearly superior to our brains, but they cannot serve as a replacement for them. It is likely that this trend will continue.

Kurzweil's other prediction concerns nanotech. I believe that "wet" nanotech (biotech, synthetic biology, etc.) is possible and will most certainly be developed. By "wet" nanotech, I mean a nanotechnology based on solution-phase chemistry and recognizable principles of molecular biology. Kurzweil, on the other hand, believes in the possibility of "dry" nanotech, which is "machine phase" chemistry carried out in a vacuum environment. There is a researcher in the U.K. who has recently received funds to verify whether this is possible. I, personally, do not believe this is possible, although I would like to be proven wrong.

Much of the rest of Kurzweil's book is very speculative at this time.

JohnP
Posts: 296
Joined: Mon Jul 09, 2007 3:29 am
Location: Chicago

Post by JohnP »

kurt9 wrote: Kurzweil, on the other hand, believes in the possibility of "dry" nanotech, which is "machine phase" chemistry carried out in a vacuum environment.
Work in this direction has been carried out for a number of years, though it's still at an early stage AFAIK. Mechanosynthesis has been performed by pushing atoms together with an STM or AFM tip until they stick. Using an STM you can pick up and put down an atom or a small cluster on a surface. From there, it's (I believe) a matter of degree of sophistication to get to machine phase chemistry in significant quantity.

JohnP
Posts: 296
Joined: Mon Jul 09, 2007 3:29 am
Location: Chicago

Post by JohnP »

JohnSmith wrote:Well, two problems in my mind.

1) It's a human mind. So bravo and well done to the researchers, but not what we wanted. We can make human brains already, and they're much less expensive.
2) If I'm wrong about 1), ye gods, what have you done! The last thing that we ever want to do is make a single human vastly more intelligent than everyone else. The friendly AI people are pretty crazy, but they're right to make sure the AI wants to be nice.
The consensus seems to be that people are not nice.
If a simulated human brain and mind is achieved, it will consume a huge amount of processing (and electric) power. Most likely, at first, it will not function in real time. If it's a monster, it will be a slow, feeble one.

And all you've got is a simulation of something you already have, warts and all. It will be forgetful and cranky. It will suck electricity like there's no tomorrow. It will probably be less physically reliable than a biological brain (the first one will occupy lots of physical space, meaning wiring, interconnects, switches up the wazoo). It might be more durable than a biological brain, and easier to fix. But it won't be no savant.
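To put rough numbers on that "slow, feeble monster" point: using a commonly cited ~10^16 operations/s estimate for whole-brain emulation, a petaflop-class machine of today's vintage drawing a few megawatts, and the brain's ~20 W metabolic budget (all ballpark assumptions, not measurements), the mismatch looks like this:

```python
# Crude comparison of a simulated brain against the biological one.
# Every figure below is a rough, commonly cited ballpark, not a measurement.

BRAIN_OPS_PER_SEC = 1e16     # one popular estimate for functional emulation
BRAIN_POWER_W = 20           # approximate metabolic power of a human brain

MACHINE_OPS_PER_SEC = 1e15   # petaflop-class supercomputer (circa 2008)
MACHINE_POWER_W = 2.5e6      # a few megawatts for such a machine

slowdown = BRAIN_OPS_PER_SEC / MACHINE_OPS_PER_SEC
power_for_real_time = MACHINE_POWER_W * slowdown

print(f"runs at roughly 1/{slowdown:.0f} of real time")
print(f"~{power_for_real_time / 1e6:.0f} MW to keep up with real time, "
      f"versus ~{BRAIN_POWER_W} W for the biological original")
```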

JohnSmith
Posts: 161
Joined: Fri Aug 01, 2008 3:04 pm
Location: University

Post by JohnSmith »

That was my point. We've already got human brains, mass-produced by unskilled labor. The 'avalanche' that the Singularity normally depends on is that if a human brain can make something smarter than itself, so can the smarter mind. A slow, human-level brain doesn't help, unless you plan on prodding the insides to see what happens.
Since this would be in essence a human brain stuck without a body, I hope like hell that there would be an outcry against random experimentation.
I doubt it though.

esotERIC D
Posts: 12
Joined: Mon May 05, 2008 2:42 pm
Location: Vermont

Post by esotERIC D »

I came across this today and it seemed to fit into this discussion,

"Frakenrobot has biological brain"

http://dsc.discovery.com/news/2008/08/1 ... brain.html

50,000 to 100,000 active rat neurons on a "multi-electrode array" that is able to send and receive information from its robot 'body'.

kurt9
Posts: 588
Joined: Mon Oct 15, 2007 4:14 pm
Location: Portland, Oregon, USA

Post by kurt9 »

JohnP wrote:
Kurzweil, on the other hand, believes in the possibility of "dry" nanotech, which is "machine phase" chemistry carried out in a vacuum environment.
Work in this direction has been carried out for a number of years, though it's still at an early stage AFAIK. Mechanosynthesis has been performed by pushing atoms together with an STM or AFM tip until they stick. Using an STM you can pick up and put down an atom or a small cluster on a surface. From there, it's (I believe) a matter of degree of sophistication to get to machine phase chemistry in significant quantity.
The problem with "dry" nanotech is scalability. Yes, one can make things one atom at a time with an SPM. One can even use an array of SPM tips to make things in parallel. However, there is no obvious path to scale up to making complicated things such as airplanes or human beings. Biology has levels of complexity in self-organization, going from individual molecules to sub-cellular structures, to cellular structures, and up to multi-cellular organization. Each of these levels of self-assembly has its own set of reactions that allows for the construction of the next level of structure.

The other issue is that even if dry nanotech turns out to be possible, it will be limited to the use of a few covalently bonded elements such as carbon, boron, and a few others. Biological systems make use of a multitude of the elements in the periodic table to make structures of a complexity that the mechano-chemistry people do not even talk about. This is because solution-phase chemistry offers a far greater range of potential reactions than any "vacuum phase" chemistry ever could.

I remain unconvinced of the feasibility of "dry" nanotech (although I would like to be proven wrong).

BSPhysics
Posts: 50
Joined: Sat Jan 05, 2008 12:17 am

Post by BSPhysics »

One of Ray K's points of emphasis is the combination of machine and biological intelligence/technology. We are arguing over whether the future will be "wet" or "dry." The reality is probably "damp."

Kurt9, you mentioned you work in the semiconductor field. Ray K mentions 3D chip architectures, nanotube wiring, and increasing numbers of processors in parallel as ways to extend the Moore's Law curve without continually shrinking chip dimensions. Is this the part you have trouble buying?

The software side of things is the real bottleneck in this Singularity scenario, especially if you consider future computers containing hundreds, thousands, or even millions of processors. Here is what IBM is doing about the hardware/software, wet/dry roadblock.

http://bluebrain.epfl.ch/page17871.html

Definitely check out the gallery and videos on the left side of the page.

I'm curious how Ray K's perception of the future would change if BFR economies sprang up in the next decade.


BS

kurt9
Posts: 588
Joined: Mon Oct 15, 2007 4:14 pm
Location: Portland, Oregon, USA

Post by kurt9 »

BSPhysics wrote:One of Ray K's points of emphasis is the combination of machine and biological intelligence/technology. We are arguing over whether the future will be "wet" or "dry." The reality is probably "damp."

Kurt9, you mentioned you work in the semiconductor field. Ray K mentions 3D chip architectures, nanotube wiring, and increasing numbers of processors in parallel as ways to extend the Moore's Law curve without continually shrinking chip dimensions. Is this the part you have trouble buying?
No. These are all technologies and processes that are under development. My comment related exclusively to information storage density, which will reach the molecular level in about 20 years. Once this level is reached, it is difficult to see how the density can increase beyond this point.

Performance can be increased without shrinking chip dimensions. However, there is a limit here as well.

JoeStrout
Site Admin
Posts: 284
Joined: Tue Jun 26, 2007 7:40 pm
Location: Fort Collins, CO, USA

Post by JoeStrout »

Here's a thread I can't resist jumping into. Many good points have already been made, but here are a few worth emphasizing:

1. The term "Singularity" means different things to different people, but most would agree that when mind uploading (see also the Wikipedia entry) is developed, that will count. Why? ...

1a. Because at that point we have people living in artificial bodies, or in entirely artificial realities, some of which will be running faster than real time.

1b. Because those artificial bodies, being artifacts, will be adaptable to a wide variety of environments, from deep ocean to empty space. People will cavort in shorts on the Moon, swim like merfolk in the seas, and so on.

1c. Because personal duplication will suddenly be possible, with all the implications that has both for identity and for practical daily life. (For example, some people will decide it's a great idea to duplicate themselves over and over and see if they can fill up the world with their own copies; what, if anything, will we do about that?)

1d. And oh yes, it means the elimination of almost all death.

1e. Because of 1a-1d, it's clear that the post-uploading future is going to be a very strange place, far stranger than most today can imagine, and that it will be changing very rapidly as technology evolves (which will be even faster than usual due to 1a).

2. Kurzweil's arguments about processing power are mainly to point out that we'll have sufficient processing power to run an uploaded brain within the next couple of decades.

2a. Note that minimum element size isn't really the point here. The "real" Moore's law — the one that matters — is processing power per unit cost. There is no reason to think that will bottom out when we reach molecular computing; economies of scale and improvements in fabrication techniques will continue to advance.

3. That leaves software. But as a former computational neuroscientist myself, I can tell you that Kurzweil is right about this: you don't have to be able to program a strong AI in order to emulate a brain. You only need to understand the individual elements (mainly neurons) well enough to emulate them, and have some way of mapping all the elements and connections in a real brain. Both of these are very active subfields of neuroscience research, which is one of the biggest and most well-funded areas of scientific endeavor today. Automatic neural mapping is far more advanced than it was when I was in grad school only a decade ago, and experiencing exponential growth.
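As a cartoon of what "emulate the elements, map the connections" means in practice, every such simulator boils down to a loop like the one below. The threshold units and random sparse connectivity are placeholders; a real project like Blue Brain uses far more detailed biophysical neuron models and reconstructed wiring rather than random weights:

```python
import numpy as np

# Skeleton of a network emulation: per-element dynamics plus a wiring map.
# Random sparse weights and simple threshold units stand in for a scanned
# connectome and biophysically detailed neuron models.

rng = np.random.default_rng(0)
n = 1000
weights = rng.normal(0.0, 0.1, (n, n)) * (rng.random((n, n)) < 0.02)  # sparse map
v = np.zeros(n)                         # membrane state of each element
threshold, decay = 1.0, 0.9

for step in range(100):
    spiking = v >= threshold                        # which elements fire this tick
    v[spiking] = 0.0                                # reset the ones that fired
    v = decay * v + weights @ spiking.astype(float) + rng.normal(0.15, 0.05, n)

print(f"{int(spiking.sum())} of {n} units firing on the final step")
```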

My own estimate for mind uploading is around 2080 or so, but that's purposely conservative — Kurzweil's estimate of 2020 seems too aggressive to me, but not ridiculously so. The reality will probably fall somewhere in between. And at that point, everything changes so profoundly that we may as well call that point the Singularity.

Best,
- Joe
Joe Strout
Talk-Polywell.org site administrator

Jeff Peachman
Posts: 69
Joined: Fri Jun 13, 2008 2:47 pm

Post by Jeff Peachman »

I'm not qualified to comment on whether Ray K is right or not. He's extremely intelligent, but that doesn't mean he's right.

But if he is right about it being feasible, then I'm having trouble accepting that 'the masses' will allow the singularity to happen as he imagines it. If almost half the country is against abortion today, imagine the outcry over mind uploading!

I know there have always been Luddites, but the Singularity by definition* is when the world is changing so fast that the future is completely uncertain. (For *, see JoeStrout's point #1).

This is a level of future shock that is going to blow people away. I don't think society will let it happen that quickly.
- Jeff Peachman

BSPhysics
Posts: 50
Joined: Sat Jan 05, 2008 12:17 am

Post by BSPhysics »

Joe and Jeff, awesome points. I learn a lot in this forum. Kurzweil's Singularity moment is the year 2039, IIRC. When will complete mind uploading occur? Technically speaking, it is already under way: my ideas in this forum are being recorded indefinitely as I write. Obviously this is a teensy capability compared to the issue being discussed, but as mentioned, the capability grows exponentially. There will not be a sudden "uploading" moment where there is zero capability beforehand and then, BAM, massive worldwide mind duplication. The public will gradually buy into this technology in baby steps until it matures to sci-fi capability.

Here's a great example of the brain being connected to the internet.

http://www.youtube.com/watch?v=0g_RQZ-ntSw

BS
