Sunday, July 24, 2005

Ian Pearson's concept of the Singularity

Not long ago, the well-known futurologist Ian Pearson claimed that brain uploads would be possible around 2050 for rich people, and 25 years later, in 2075, for people with ordinary incomes.

Here's one of the articles that talk about it:

http://edition.cnn.com/2005/TECH/05/23/brain.download/

This claim prompted many people to post all sorts of uninformed opinions. Lacking an understanding of the technology that humanity will soon possess, posters naturally assumed that Pearson's claim was ridiculous.

I, too, find Pearson's claim ridiculous, but for a different reason: it is ridiculously conservative. I emailed him about this, and I am now posting my conversation with him. Perhaps it can lead to interesting conversations in the comments section of this blog post. Even though Pearson is probably completely wrong in his predictions, it is still stimulating to read someone else's view on the Singularity.


========== My Email To Pearson ==========

Dear Ian,

I consider myself to be a Singularitarian, and I'm always scouring the net for articles regarding anything transhumanism-related. I have also read many of your futuristic writings. Although they are completely devoid of any argument, I enjoy them anyway, because I am familiar (thanks to Kurzweil) with the 'exponential' line of reasoning behind the future you advocate.

However, I feel that you do not completely comprehend the sheer vastness of the Singularity, aka the creation of superhuman AI.

You state that we may create SAI before 2020, yet you go on to predict that uploading won't be here until 2050 for the rich, and another 20 years after that for the common folk.

I have read Yudkowsky's "Staring Into The Singularity", and it has given me an understanding of the vastness of the Singularity. Not that my puny human intellect could ever grasp anything that is out of reach by many orders of magnitude... but it DID give me a realistic sense of what the Singularity implies.

Staring Into The Singularity: http://www.sysopmind.com/singularity.html

To me, it seems more likely that uploading, and every other thing we could ever dream up, will be available a fraction of a nanosecond after the Singularity has hit. Completely for free, of course. Running an entire planet smoothly is a triviality to a mind billions of times faster AND smarter than us.

I am assuming a friendly and humane SAI as I write this. Why would a friendly SAI let us suffer through 40-hour work weeks so we can barely pay our way through life? It does not make sense. It doesn't make sense to think there will still be an economy once the Singularity has hit. That whole concept will have vanished. Nor does it make sense to think the SAI that constitutes the Singularity will adapt itself to our slow physical world. It would be more logical to assume the SAI will create its own society in VE (virtual environments). And since it is also logical to assume the SAI will be running on a substrate billions of times faster than our subjective experience of time, we can safely say that the Singularity will render our society IRRELEVANT immediately.

Since we are not super intelligent, we cannot think the thoughts that an SAI might think. Therefore, we cannot predict what happens beyond the big S. That is why it's called the Singularity in the first place. Our model of the future breaks down at the technological Singularity, just as our model of physics breaks down at the center of a black hole.

I am looking forward to your feedback on this.

Sincerely,

Jay


========== Pearson's Reply ==========

Hi Jay, I think about the singularity regularly. I don't make all my arguments explicit because I only have limited time to write, but of course I don't reach my conclusions out of thin air. There are other limits to the system than just intelligence. We may have vastly superhuman intelligence at our disposal, but there are still basic problems of physics that aren't circumvented by it. Even if we are able to design superior technology, it would still take time to produce it, even if it is done by nanoassemblers etc., since these too would have to be assembled first. Human regulation also slows things down tremendously, and I certainly would resist free-run of tech beyond human control. A managed tech development will be much slower than a theoretical singularity, which means that although we would have huge intellectual capability, things will still proceed at human rates, so the singularity will not be instantaneous, but spread out over decades.

Sorry if that sounds dull, but if you try to push technology at computer speeds in some sort of James Bond villain scenario, you would simply be wiped off the planet by humans who have perfectly adequate weapon systems long before you get a chance to build the ones designed by any superhuman power. If you do it in a managed way, it will happen, but will take longer.

Ian


========== My Reply ==========

Hi Ian,

Basic problems of physics and human management to limit the intelligence explosion of an SAI?

An SAI will not allow itself to be limited by either.

As you have stated yourself, we are only a few years away from having CPUs faster than the human brain. What happens if computers are doing the research, and their speed keeps on doubling? I'm not even taking into account here that such a machine would not just improve its own speed, but also the quality of its thought. At first it doubles in a year. Then it does a subjective year of work in half a year. Then in three months. Boom... Singularity. So much for the physics argument. An SAI going at millions of years per second will render our 50,000-year-old species irrelevant on the spot. So much for the human management argument.
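To make the arithmetic concrete, here is a toy sketch in Python. It is purely illustrative: the one-year starting interval and the factor-of-two speedup per subjective year are assumptions chosen for the example, not predictions.

# Toy model (assumption): research speed doubles after each subjective year of work,
# so each subjective year takes half the wall-clock time of the previous one.
def wall_clock_years(first_interval=1.0, subjective_years=50):
    """Sum the shrinking intervals: 1 + 1/2 + 1/4 + ..., which converges to 2."""
    total, interval = 0.0, first_interval
    for _ in range(subjective_years):
        total += interval
        interval /= 2.0  # the assumed doubling of speed halves the next interval
    return total

print(wall_clock_years())  # approaches 2.0

The point is only that the sum of the shrinking intervals converges: under that assumption, fifty subjective years of research fit inside roughly two calendar years.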

Pushing technology at computer speeds is exactly the whole POINT of the Singularity. What you need is a friendly, humane SAI that has a very well-defined goal. An SAI will NOT do anything OTHER than try to achieve the goal it has in its own mind (ref: one of many of Yudkowsky's writings). This goal, of course, needs to be something along the lines of "recursively improve, be friendly to human beings, help them in any way possible". To a machine like that, building Utopia is real easy. We really do not need to wait decades and decades for... for what, exactly?

The Singularity is commonly defined as a single point in time (and rightly so), not a process spread out over a few decades.

As for 'taking time to build weapons designed by SAI'... that does not make sense. Will an SAI really deliver us the plans for a weapon so we can build it ourselves? Or will the SAI come up with an idea, one that outshines our most brilliant minds by at least a billionfold, for building those weapons quickly and efficiently?

That previous question is actually pretty irrelevant. AIs are not going to bother with the physical world. They'll take up permanent residence in VE.

If you can find the time, dig a little deeper into Yudkowsky's writings. He really knows how to think about matters like these, and he has written extensively on the subject.

Jay


========== Pearson's Reply ==========

Nothing new here; I myself explored all the same arguments you pose back in 1990, so I've been writing about it for 15 years too. Positive technology feedback was one of my first major insights when I started doing this job. I even rely on it in the design of my own OB1 (optical brain 1) machine, which I'm designing to achieve human-level intelligence by 2015, and potentially 1 billion x human intelligence by 2020, in a smart yoghurt. But thinking about these things properly, as you put it, still can't violate the laws of physics, so there will be no point singularity, ever. Most of us who talk about the singularity recognise this and realise it will actually just be a period of lightning-fast development that takes place over at least a few years. Development will always take some time, even in an age of self-replicating nanoassemblers, direct energy conversion or whatever wishful thinking you want to use.

But the OB1 machine won't arrive overnight, and the military are already well aware of the project type (we've been discussing this with the military for many years), so it isn't going to suddenly arrive out of the blue and will almost certainly be made illegal before we start building it. If it does get built, it will almost certainly arrive first in a weapon system, if only because that is exactly what I would recommend, since we need to have powerful weapons to defend ourselves against anyone else who tried to go down the same path. Human management takes effect long before it exists, since we all know that afterwards is too late. Mankind isn't that stupid, usually. The terminator scenario is certainly feasible, but only if we cock up the development. Allowing the construction of an AI that can free-run and eliminate mankind isn't a clever thing to do.

VEs are as you say the likely home of AI most of the time, and many people will also live in VEs. And some of us will be hybrid, and some of us will have numerous instances, and some of us will be Borg. All good fun.

Ian


========== My Reply ==========

Ian,

You say development takes time in the physical world. You also say AIs will mostly live in VE.

Since we know there are no limits in VE, this seems like a contradiction to me. The physical world won't matter anymore. Why would an AI bother with clumsy, slow, physical bodies if it can work magic in VE?

Also... who says SAIs can't come up with a complete and correct model of physics as it actually is? What if physics as we know it today is not correct?

It certainly seems that way, since we're trying really hard to tie relativity and quantum mechanics together with string theory.

In the past, the plug has been pulled on physics models before. Who says it can't happen again?

Creating an AI that wants to take over the world is indeed a very bad idea. This is why it is so important that the AI has the right goal in mind. It NEEDS to WANT to be friendly and humane. As Yudkowsky adequately puts it: "If it doesn't want to be friendly, you've already lost."

A friendly, humane SAI will, by definition, want to help mankind raise its living standards. Therefore, building it is an exceptionally good idea. It's also quite necessary, since nanotech is coming fast, and humans aren't capable of managing it with 100% safety.

When the FHSAI reaches a point where it's ridiculously fast, living billions of subjective years a second, it could easily ask somebody the question: "Do you want to upload and live a really good life with me here in VE?"

If you'd say yes, then I think this would qualify as quite a rapture in time. A point Singularity, if you will.

Jay

========== End Of Conversation ==========


Discuss!

More of Ian Pearson's writings can be found at:

http://www.btinternet.com/~ian.pearson/
