Sunday, July 31, 2005

The Future Of Computers

Computers will disappear.

Well, that is to say... from our view, at least. In truth, computers will be everywhere, but they will be so small that you will have to look hard to spot one. Computation will be as ubiquitous as oxygen or electricity. It will just be there, making stuff work. Possibly as early as 2010.

The enabling technology for this scenario is nanotechnology.

I have been following nanotechnology for years now. It has traveled an interesting path of increasing acceptance, and progress is speeding up exponentially. You can see this very clearly if you follow the news closely on sites such as the CRNano Blog.

A decade ago, many doubted the feasibility of 3D molecular computing.

Only five years ago, researchers were struggling to come up with a way to produce nanotubes, which are very basic elements of future nanocircuits.

These days, however, researchers are already making huge steps in finding ways to build true nanocircuits.

For instance, take a look at Hewlett-Packard's crossbar latch. The crossbar latch is the nanoscale equivalent of the transistor, the elementary building block of silicon CPUs. This is an essential step in the right direction if you want to make the transition from today's silicon CPUs to tomorrow's (much smaller and way easier to cool) nano CPUs.

Hewlett-Packard has also come up with a practical strategy for moving computing to the nanoscale, using its own crossbar architecture. In short, Hewlett-Packard is planning to replace silicon with nanotechnology. An interesting quote from the article:

According to HP, the crossbar latch architecture is six to ten years from widespread use.


Part of Hewlett-Packard's strategy for the transition to the nanoscale is a new way of designing nanoscale circuits. From the article:

"We have invented a completely new way of designing an electronic interconnect for nano-scale circuits using coding theory," said Stan Williams, HP Senior Fellow and director, Quantum Science Research at HP Labs. "By using a cross-bar architecture and adding 50 percent more wires as an 'insurance policy,' we believe it will be possible to fabricate nano-electronic circuits with nearly perfect yields even though the probability of broken components will be high."


Another interesting quote from the article:

Williams also said that HP had created working models at "about a third the size of today's chips."

Silicon chips aren't expected to reach that scale for at least seven years.


Hewlett-Packard is, of course, not the only one making advances like these. For example, physicists at the University of Pennsylvania have recently created a functional electronic circuit at the nanoscale. A truly impressive achievement, and one that will probably go a long way.

When reading about advances like these, it becomes clear why futurologists like Ray Kurzweil seem to think that the portable, ubiquitous computing scenario will happen by 2010. His predictions are consistent with Hewlett-Packard's timeline. A quote from Ray Kurzweil:

Let's look at a few trends. A lot of the equipment that IT departments concern themselves with now (routers and servers) will all be gone. There won't be computers on desks. We'll eliminate most of that clutter, certainly by the end of this decade.

Technology will be very mobile; it'll be so small that it'll be virtually invisible. Everybody will be online. Images will be written right to our retinas. We'll have very high-speed bandwidth connections at all times. The computing substrate will be everywhere.


If you're interested in the implications of portable, ubiquitous computing, supposedly arriving by the end of this decade, you'd do well to read Kurzweil's view of the year 2009.


Tuesday, July 26, 2005

The Future Health Of Our Brains

If science has anything to say about it, humans won't be suffering from brain diseases (Alzheimer's, Parkinson's, schizophrenia, etc.) in the near future.

As I have already explained in my Singularity FAQ for Dummies, science has clearly demonstrated that it is perfectly capable of understanding how our brains work. In fact, the reverse engineering of our own brains has been going on for quite a while. And it is about to hit new heights.

For example, take a look at the article 'Computer Scientists To Copy Brain Of Mammal'. It's obvious that science will learn a great deal from simulating whole brains 'in silico'. This will lead to more effective cures that are fine-tuned to how our brains actually work. Also be sure to take a look at the official site of the Blue Brain Project.

You need a lot of processing power to run simulations like these. But if we extrapolate a few years into the future, we see that processing power will not be a limiting factor.

Simulating whole brains gives us ever more knowledge of them, which in turn leads to better cures. Using the knowledge we already have today, efforts are underway to have our brains repair themselves by activating the stem cells that are present there, but currently inactive. This is done with the help of nanotechnology, which delivers therapeutic genes exactly where we want them. You will be hearing a lot more about nanotechnology, and nanotech-aided cures in particular, in the coming years.

More on future nanotech cures can be found in an article entitled 'Nanotech Moves Closer To Cure'. It's not directly related to brain health, but it doesn't exclude it either.


Science will one day, in the not too distant future, be capable of growing new organs for our bodies on demand through therapeutic cloning. But the brain cannot simply be replaced like any other organ when it gets ill (for obvious reasons, methinks). The brain is the only organ in our bodies that needs its repairs done 'in situ'.

Thanks to advances like the ones mentioned above, we can all look forward to much healthier bodies and brains for ourselves, our family, our friends, and our future children.


[update]

Scientists Create Working Brain Cells


Sunday, July 24, 2005

Ian Pearson's concept of the Singularity

Not too long ago, Ian Pearson (a well-known futurologist) claimed that brain uploads would be possible around 2050 for rich people, and 25 years later, in 2075, for people on a standard income.

Here's one of the articles that talk about it:

http://edition.cnn.com/2005/TECH/05/23/brain.download/

This claim caused many people to post all sorts of uninformed opinions. Lacking an understanding of the technology that humanity will soon possess, posters naturally assumed that Pearson's claim was ridiculous.

I, too, find Pearson's claim ridiculous. But for another reason: Pearson's claim is ridiculously conservative. I emailed him about this, and I am now posting my conversation with him. Perhaps it can lead to interesting conversations in the comments section of this blog post. Even though Pearson is probably completely wrong in his predictions, it is still stimulating to read someone else's view on the Singularity.


========== My Email To Pearson ==========

Dear Ian,

I consider myself to be a Singularitarian, and I'm always scouring the net for articles regarding anything transhumanism-related. I have also read many of your futuristic writings. Although they are completely devoid of any argument, I enjoy them anyway, because I am familiar (thanks to Kurzweil) with the 'exponential' line of reasoning behind the future you advocate.

However, I feel that you do not completely comprehend the sheer vastness of the Singularity, aka the creation of superhuman AI.

You state that we may create SAI before 2020, yet you go on to predict that uploading won't be here until 2050 for the rich, and another 25 years later for the common folk.

I have read Yudkowsky's "Staring Into The Singularity", and it has given me an understanding of the vastness of the Singularity. Not that my puny human intellect could ever grasp anything that is out of reach by many orders of magnitude... but it DID give me a realistic sense of what the Singularity implies.

Staring Into The Singularity: http://www.sysopmind.com/singularity.html

To me, it seems more likely that uploading, and every other thing we could ever dream up, will be available a fraction of a nanosecond after the Singularity has hit. Completely for free, of course. Running an entire planet smoothly is a triviality to a mind billions of times faster AND smarter than us.

I am assuming a friendly and humane SAI as I write this. Why would a friendly SAI let us suffer through 40-hour work weeks so we can barely pay our way through life? It does not make sense. It doesn't make sense to think there will still be an economy once the Singularity has hit; that whole concept will have vanished. Nor does it make sense to think the SAI that constitutes the Singularity will adapt itself to our slow physical world. It is more logical to assume the SAI will create its own society in VE. And since it is also logical to assume the SAI will be running on a substrate billions of times faster than our subjective experience of time, we can safely say that the Singularity will render our society IRRELEVANT immediately.

Since we are not super intelligent, we cannot think the thoughts that an SAI might think. Therefore, we cannot predict what happens beyond the big S. That is why it's called the Singularity in the first place. Our model of the future breaks down at the technological Singularity, just as our model of physics breaks down at the center of a black hole.

I am looking forward to your feedback on this.

Sincerely,

Jay


========== Pearson's Reply ==========

Hi Jay, I think about the singularity regularly. I don't make all my arguments explicit because I only have limited time to write, but of course I don't reach my conclusions out of thin air. There are other limits to the system than just intelligence. We may have vastly superhuman intelligence at our disposal, but there are still basic problems of physics that aren't circumvented by it. Even if we are able to design superior technology, it would still take time to produce it, even if it is done by nanoassemblers etc, since these too would have to be assembled first etc. Human regulation also slows things down tremendously, and I certainly would resist free-run of tech beyond human control. A managed tech development will be much slower than a theoretical singularity, which means that although we would have huge intellectual capability, things will still proceed at human rates, so the singularity will not be instantaneous, but spread out over decades. Sorry if that sounds dull, but if you try to push technology at computer speeds in some sort of James Bond villain scenario, you would simply be wiped off the planet by humans who have perfectly adequate weapon systems long before you get a chance to build the ones designed by any superhuman power. If you do it in a managed way, it will happen, but will take longer.

Ian


========== My Reply ==========

Hi Ian,

Basic problems of physics and human management to limit the intelligence explosion of an SAI?

An SAI will not allow itself to be limited by either.

As you have stated yourself, we are only a few years away from having CPUs faster than the human brain. What happens if computers are doing the research, and their speed keeps on doubling? I'm not even taking into account here that such a machine would not just improve its own speed, but also the quality of its thought. At first it doubles in a year. Then it does a subjective year of work in half a year. Then in 3 months. Boom... Singularity. So much for the physics argument. An SAI going at millions of subjective years per second will render our 50,000-year-old species irrelevant on the spot. So much for the human management argument.

Pushing technology at computer speeds is exactly the whole POINT of the Singularity. What you need is a friendly, humane SAI that has a very well-defined goal. An SAI will NOT do anything OTHER than try to achieve the goal it has in its own mind (ref: one of many of Yudkowsky's writings). This goal, of course, needs to be something along the lines of "recursively improve, be friendly to human beings, help them in any way possible". To a machine like that, building Utopia is real easy. We really do not need to wait decades and decades for... for what, exactly?

The Singularity is commonly defined as a single point in time (and rightly so), not many points spread out over a few decades.

As for 'taking time to build weapons designed by SAI'... that does not make sense. Will an SAI really deliver us the plans for a weapon so we can build it ourselves? Or will the SAI come up with an idea, one that outshines our most brilliant minds by at least a billionfold, to build those weapons quickly and efficiently?

That previous question is actually pretty irrelevant. AIs are not going to bother with the physical world. They'll take up permanent residence in VE.

If you can find the time, dig a little deeper into Yudkowsky's writings. He really knows how to think about matters like these, and he has written extensively on the subject.

Jay


========== Pearson's Reply ==========

Nothing new here, I myself explored all the same arguments you pose back in 1990, so I've been writing about it for 15 years too. Positive technology feedback was one of my first major insights when I started doing this job. I even rely on it in the design of my own OB1 (optical brain 1) machine, which I'm designing to achieve human-level intelligence by 2015, potentially 1 billion x human intelligence by 2020, in a smart yoghurt. But thinking about these things properly, as you put it, still doesn't let us violate the laws of physics, so there will be no point singularity, ever. Most of us who talk about the singularity recognise this and realise it will actually just be a period of lightning-fast development that takes place over at least a few years. Development will always take some time, even in an age of self-replicating nanoassemblers, direct energy conversion or whatever wishful thinking you want to use. But the OB1 machine won't arrive overnight, and the military are already well aware of the project type (we've been discussing this with the military for many years), so it isn't going to suddenly arrive out of the blue and will almost certainly be made illegal before we start building it. If it does get built, it will almost certainly arrive first in a weapon system, if only for the reason that that is exactly what I would recommend, since we need to have powerful weapons to defend ourselves against anyone else who tried to go down the same path. Human management takes effect long before it exists, since we all know that afterwards is too late. Mankind isn't that stupid, usually. The terminator scenario is certainly feasible, but only if we cock up the development. Allowing the construction of an AI that can free-run and eliminate mankind isn't a clever thing to do.

VEs are, as you say, the likely home of AI most of the time, and many people will also live in VEs. And some of us will be hybrid, and some of us will have numerous instances, and some of us will be Borg. All good fun.

Ian


========== My Reply ==========

Ian,

You say development takes time in the physical world. You also say AIs will mostly live in VE.

Since we know there are no limits in VE, this seems like a contradiction to me. The physical world won't matter anymore. Why would an AI bother with clumsy, slow, physical bodies if it can work magic in VE?

Also... who says SAIs can't figure out a complete and correct model of physics as it actually is? What if physics as we know it today is not correct?

It certainly seems that way, since we're trying really hard to tie Relativity Theory and Quantum Mechanics together with string theory.

In the past, the plug has been pulled on physics models before. Who says it can't happen again?

Creating an AI that wants to take over the world is indeed a very bad idea. This is why it is so important that the AI has the right goal in mind. It NEEDS to WANT to be friendly and humane. As Yudkowsky adequately puts it: "If it doesn't want to be friendly, you've already lost."

A friendly, humane SAI will, by definition, want to help mankind raise its living standards. Therefore, building it is an exceptionally good idea. It's also quite necessary, since nanotech is coming fast, and humans aren't capable of managing it with 100% safety.

When the FHSAI reaches a point where it's ridiculously fast, living billions of years a second, it could easily ask somebody the question: "Do you want to upload and live a really good life with me here in VE?"

If you'd say yes, then I think this would qualify as quite a rapture in time. A point Singularity, if you will.

Jay

========== End Of Conversation ==========


Discuss!

More of Ian Pearson's writings can be found at:

http://www.btinternet.com/~ian.pearson/


Monday, July 18, 2005

Singularity FAQ for Dummies

To start things off, I am now posting a Singularity FAQ for Dummies that I have written myself. I've dubbed it version 0.5 for now. I'm counting on lots of additional questions from all of you, which I will include so I can bump the version number. I'm sure there are spelling and grammar errors in here, but hey... it's only version 0.5, right?

The reason I wrote this is that when I'm reading posts on forums or in news threads related to artificial intelligence, I see lots of uninformed opinions. This is not always the posters' fault. Very often, an interviewee will tell an interviewer "By year xxxx, we will be able to upload our consciousness to virtual reality", without giving any explanation of how this might be achieved. Posters then naturally assume that this is nonsense, and that it will never happen.

Uninformed opinions like that are mostly fueled by ignorance. I feel this is not necessary, since a few simple insights can lead a person to adopt a completely different set of views on a subject.

From now on, ignorance on the topic of the Singularity will be a thing of the past, because there's a FAQ available now. ;)

Enjoy. :)


[update]
0.5 Initial Version
0.6 Questions added, answers expanded, spelling corrected.


========================================
Singularity FAQ for Dummies, version 0.6
by Jan-Willem Bats

Last update: July 21st, 2005
========================================


Q. What is the Singularity?

A. The Singularity is defined as the point in time where Superior Artificial Intelligence (SAI) is created. An SAI can, by definition, think thoughts that human intelligence cannot. This, then, is the point where our model of the future breaks down. We cannot possibly predict what an SAI would come up with at 'the other end' of the Singularity. We are not superintelligent ourselves.

The Singularity (with a capital 'S') is a term borrowed from the singularity (small 's') at the centre of a black hole, where our (current) model of physics breaks down.

-

Q. Is the Singularity purely a science fictional concept?

A. No. Even though science fiction writers have played with the concept of a technological Singularity, this does not mean the Singularity will always remain in the realms of sci-fi.

The Singularity can be achieved by creating SAI. Many believe that it is possible in real life to do exactly that by creating an AI that can learn general stuff, just like humans can. In addition to learning capabilities, the AI must also be able to improve upon itself through understanding of its own configuration, and the ability to alter itself.

-

Q. How can an AI be created?

A. Creating AI will require two things:

1. Hardware: a substrate for intelligence to run on. The hardware must be sufficiently fast. It will most likely have to be some kind of dynamically configurable neural network, just like our own brains.

2. Software: the actual intelligence itself. The software must consist of algorithms that are truly, generally intelligent (in other words: not just simulating intelligence in a very narrow field, such as chess). This is by far the hardest part of creating SAI, since it is such a hard problem to pin down what exactly intelligence is.

-

Q. When will we have the hardware required to create an SAI?

A. Assuming that the hardware needs to be capable of at least as many operations per second (OPS) as a human brain: 2020.

The human brain performs on the order of 10^14 operations per second (see the question below on where this number comes from). It is obviously possible to run an intelligence on a machine of this speed: we *are* those machines. Therefore, it is safe to assume that 10^14 OPS is fast enough to run an AI on.

The number of OPS that our hardware can perform is growing exponentially. It has been doing so for decades, and it is expected to do so for decades more. Exponentially extrapolating hardware speed into the future shows that we will have $1,000 CPUs capable of 10^14 OPS in 2020. Note that some supercomputers today already perform this number of OPS, or more. The PlayStation3 CPU runs at 10^12 OPS, which is 1% of the magical 10^14.
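To make that extrapolation concrete, here is a minimal Python sketch. The 10^12 OPS starting point is the PlayStation3 figure above; the two-year doubling time for price-performance is my own assumption, so treat the output as illustrative, not authoritative.

```python
import math

# Assumptions (mine, for illustration): $1,000 buys ~10^12 OPS in 2005,
# and price-performance doubles roughly every two years.
ops_per_1000_dollars = 1e12
target_ops = 1e14              # rough estimate for the human brain
doubling_time_years = 2.0

doublings_needed = math.log2(target_ops / ops_per_1000_dollars)
crossover_year = 2005 + doublings_needed * doubling_time_years

print(f"Doublings needed: {doublings_needed:.1f}")   # ~6.6
print(f"Crossover year:   {crossover_year:.0f}")     # ~2018
```

Shorten the doubling time and the crossover slides toward the present, which is exactly the pattern the sidenote below describes.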

Most likely, speed alone won't be enough. As mentioned earlier in this FAQ, the hardware will probably have to be a dynamically configurable neural network in three dimensions. This is a far cry from today's CPUs, which have only one layer of circuitry and are therefore two-dimensional.

Nanotechnology, however, allows for building CPUs in three dimensions. At the molecular level, cooling hardware is a lot easier. Because of that, it will be possible to stack circuitry layers on top of each other. It will also be possible to build neural networks using this technology.

Sidenote: extrapolations haven't always pointed to 2020 as the year where the numbers (brain OPS and CPU OPS) match. In the past, extrapolations showed it would be 2035. A few years later, people noticed CPU efficiency was actually improving a tad faster than 'plain' exponential growth, so they corrected the figure to 2030. This happened a few more times, and current extrapolations point to 2020.

It is not uncommon for extrapolations to be conservative in retrospect. Consensus predictions have shifted forward (that is: closer to the present) before. In all likelihood, they will continue to do so in the future.

It will probably be before 2020.

-

Q. When will we have the software required to create an SAI?

A. Whenever we figure out how intelligence works by reverse engineering our own brains.

Most people think, or naturally assume, that our own intelligence is beyond us. This is actually a misconception, since we already know that it is not.

The reverse engineering of the human brain is a project that is already well underway. Several regions of the human brain are already thoroughly understood, and their functions have been replicated using algorithms written in standard program code. Algorithms heavily inspired by how our own brains process visual stimuli are already used in some video cameras. Speech recognition technologies make use of algorithms that researchers came up with after figuring out how sound processing occurs in the human brain.

Jeff Hawkins, a well respected AI researcher, has stated that, when the workings of a certain brain region are understood, it is entirely possible to describe these workings with mathematical formulas.

Hardware is already beginning to merge with our brains. There are people suffering from diseases like Parkinson's and epilepsy who have chips in their brains to correct irregular brain functions.

Researchers have also created a CPU that functions like a rat's hippocampus. They did this by providing input to a slice of actual rat hippocampus tissue and analysing the output. This input/output behaviour has been reproduced successfully in a CPU. The silicon implant performs the same processes as the (damaged) part of the brain that it is replacing.

It is (and has been for quite a while) clear that the understanding of intelligence is not beyond us, contrary to popular belief. So when will we have enough understanding of our intelligence to reproduce it in order to build an AI?

Extrapolations show that it will be possible to see what is going on inside a human brain, in complete detail and in real time, by 2015. Extrapolations also show that the reverse engineering of the human brain will be complete by 2030.

Naturally, we won't have to completely understand the human brain before we can use our knowledge to build AI. So we will probably build the software well before 2030.

-

Q. Where did you get this 10^14 number?

A. The human brain has 100 billion (10^11) neurons. Each neuron has on average 1,000 (10^3) connections. These neurons fire at most 200 (2 * 10^2) times per second.

The calculation then yields 10^11 * 10^3 * (2 * 10^2) = 2 * 10^16.

Careful readers will have noticed that this calculation yields a number that is two orders of magnitude greater than 10^14. This is because conservatively high numbers have been used in this calculation.

This implies that it is highly unlikely that the human brain performs more than 10^16 OPS, and highly likely that it actually performs way *less* than that.

The 10^14 number is what most people consider to be more accurate. Even if this turns out to be the wrong order of magnitude in retrospect, it won't matter so much. It will only shift predictions by a few years.
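For the skeptical reader, the upper-bound arithmetic is easy to check. A minimal sketch, using the same conservatively high numbers as above:

```python
neurons = 1e11              # ~100 billion neurons
connections_each = 1e3      # ~1,000 connections per neuron
max_firing_rate = 200       # firings per second (upper bound)

upper_bound_ops = neurons * connections_each * max_firing_rate
print(f"Upper bound: {upper_bound_ops:.0e} OPS")   # 2e+16
```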

-

Q. Why do you rely on extrapolations so much?

A. Many people would claim that it is impossible to tell the future. I would claim that these people are dead wrong.

There is one way to tell the future, but it only applies to technological progress. Here's how it works:

When building technology, you are actually building tools that you can use to build the next generation of more advanced tools. That generation of more advanced tools can then be used to create a generation of tools that are yet more advanced.

This is called a positive feedback loop. Any process that has a positive feedback loop (such as building technology intelligently, like humanity is doing right now) is inherently an exponentially accelerating process. This is the reason why progress keeps going ever faster, and why our society is changing ever faster.
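Here is a toy model of such a loop, just to show the shape of the curve. Every number in it is invented: each generation of tools doubles our capability and, because better tools build tools faster, shortens the wait for the next generation.

```python
# Toy model of a positive feedback loop (all numbers invented).
capability = 1.0
years_until_next = 10.0
year = 2005.0

for generation in range(1, 9):
    year += years_until_next
    capability *= 2.0            # better tools double capability...
    years_until_next *= 0.8      # ...and shorten the next build cycle
    print(f"gen {generation}: year {year:6.1f}, capability {capability:5.1f}x")
```

Note how the doublings bunch up: the growth is faster than plain exponential, which is one reason extrapolations keep getting revised toward the present.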

This exponentially accelerating process has been in effect since the beginning of life on Earth. If you were to cram the complete evolutionary process into one hour, upstanding homo sapiens (modern man) would not show up until the final fraction of a second. This is the power of exponential growth.

Why, then, are exponential extrapolations so extremely valuable?

It is because these extrapolations have, in the past few decades, proven to be very accurate at predicting our technological future. These extrapolations have, for example, been used to predict when the Internet would become mainstream. For many people, it seemed to pop out of nowhere in the middle of the nineties. This is not correct. The Internet had been around since the late sixties, consistently doubling its number of nodes ever since. If you had known about it (and some people did), you could have plotted this trend on a graph. It would show up as an exponential curve that hits its skyrocket phase around the mid-nineties.

Extrapolations have also been used to accurately predict when a computer would beat a human being at chess. Computers were consistently improving by 45 rating points per year, and Ray Kurzweil was smart enough to notice the trend. He predicted 1998 to be the year. It turned out to be 1997. Not bad.
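That chess prediction is a plain linear extrapolation, and it is easy to reproduce. The 45-points-per-year trend is from the text; the starting year and rating below are hypothetical placeholders, not Kurzweil's actual data.

```python
# Linear extrapolation of computer chess ratings.
# 45 points/year is the trend from the text; the starting point
# is a hypothetical placeholder, not Kurzweil's actual datum.
start_year, start_rating = 1985, 2200
points_per_year = 45
world_champion_level = 2800      # roughly Kasparov's strength

years_needed = (world_champion_level - start_rating) / points_per_year
print(f"Predicted crossover: {start_year + years_needed:.0f}")   # ~1998
```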

Extrapolations have been used to accurately predict so many things, such as CPU dimensions, mobile phone usage, Internet growth, broadband increases, robotic intelligence, and a myriad of other things.

There are two reasons why it is a good idea to rely on extrapolations:

The first is that they already *have* proven to be accurate in previous decades. Technological future prediction is a proven concept. You simply look at the given data and extrapolate (usually exponentially) from there.

The second is that human intuition only works well in predicting what linear processes will look like in the future. Humans are always overestimating what can be done in the short term, and grossly underestimating what can be done in the long term.

Therefore, a human being would be wise to throw his intuition overboard when it comes to predicting our technological future. Extrapolations have factually been way (waaaaaaay) more accurate than our intuitions.

-

Q. Why will hardware, on which to run AI, most likely have to be three-dimensional?

A. Because we are figuring out how intelligence works by reverse engineering existing intelligent machines that just so happen to exist in three dimensions: our brains.

So when we learn how intelligence works on three-dimensional configurable neural networks, it is very likely that we will end up creating AI on such a substrate.

-

Q. Is it possible to run AI on two-dimensional hardware?

A. It has been proven that any neural network (including the ones that exist in three dimensions) can be replicated by an algorithm running on simple two-dimensional hardware.

So if we were to map the neural network of our own brains into such an algorithm and run it on a two-dimensional CPU, would this result in a conscious, intelligent entity?

Some seem to think so, some seem to think not. But is it really relevant?

If it is obviously possible to build AI using three dimensions (and the mainstream CPUs of the near future are likely to be three-dimensional), why even bother to try and get intelligence to run on two-dimensional platforms?

Why would it be important to be able to run AI on one single two-dimensional CPU?

Besides... one could simply take a whole lot of two-dimensional CPUs and configure them to act like neurons in a neural network. If the CPUs were connected to each other to form a neural network with a configuration that allows for intelligence, intelligence (and mayhap consciousness) would probably arise from the total network. Given that a chip to replace a rat hippocampus already exists, I think it's safe to assume that a chip could also replace just one neuron.
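To illustrate that a network's wiring is just data, here is a minimal sketch of an arbitrarily wired network simulated step by step on ordinary sequential hardware. The wiring, weights, and threshold are all invented for the example.

```python
import random

# Minimal sketch: an arbitrarily wired network simulated sequentially.
# The 3D geometry of a real brain reduces to a plain connection list.
NUM_NEURONS = 8
THRESHOLD = 1.0

random.seed(42)
connections = {
    n: [(random.randrange(NUM_NEURONS), random.uniform(0.3, 0.9))
        for _ in range(3)]       # three outgoing links per neuron
    for n in range(NUM_NEURONS)
}

activation = [0.0] * NUM_NEURONS
activation[0] = 1.5              # inject a stimulus into neuron 0

for step in range(5):
    fired = [n for n, a in enumerate(activation) if a >= THRESHOLD]
    activation = [0.0] * NUM_NEURONS
    for n in fired:
        for target, weight in connections[n]:
            activation[target] += weight   # propagate the signal
    print(f"step {step}: neurons fired: {fired}")
```

Whether such a simulation would be conscious is the open question above; that it reproduces the network's input/output behaviour is not.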

-

Q. Do you think comparing 'CPU operations' to 'human brain neuron firing' makes for a good comparison?

A. Absolutely. They are both very basic elements of complex machines: modern microprocessors and human brains respectively.

When a neuron fires, it basically transfers an amount of information, comparable to 5 bits, to another neuron. The receiving neuron treats this as input, does some computation, and gives output. And so on.

When a CPU loads an instruction as input (which is basically just a number), it performs some computations and then gives output. Just like a neuron. A CPU is, of course, just one CPU. But it is easy to see how a network of CPUs could pass information to each other just like a network of neurons would.

-

Q. Won't an (S)AI take over the world and enslave humanity, just like in the movies?

A. Movies are optimized for cinematographic coolness. They have nothing to do with reality, and should therefore not be viewed as history books from the future.

When writers write science fiction, they realize that the people who will read and/or view their creations will want to be able to identify with the characters. For this reason, in series like Star Trek and movies like Star Wars, you always see unaugmented humans living in an extremely hi-tech world. And more often than not, the writers will have these unaugmented humans battling some evil technological monster (think Terminator). Otherwise, there would not be much of a story.

But an AI would not simply decide to take over the world and enslave humanity, unless we painstakingly programmed it to do so. Wanting to take over the world implies a longing for power. A longing for power is a very human trait. Because we all have this trait (thanks again, Mother Nature), some of us automatically project it onto any not-yet-invented AI.

But an AI will do nothing more than try to find a way to complete the goals it has in mind, just like a human being does. Where human beings feel the need to reproduce as much as possible by acquiring as high a social status as possible, an AI does not necessarily have to possess these desires. It won't, unless we build it so.

The total brain configuration space is *huge*. Our human brain configuration is a mere drop in a Universe. It is entirely possible for a completely different type of brain, with a completely different type of consciousness, and a completely different set of goals and desires, to exist.

If we know what's best for us, we will build an SAI that is friendly to human beings. It has to *want* to be friendly, or (as Eliezer Yudkowsky of the Singularity Institute so eloquently puts it) we've already lost.

An evil AI will, by definition, be evil towards mankind, and is likely to provide us with our own personal hell for all eternity.

A neutral AI will, by definition, not care about mankind. It will not set out to destroy us, but it will also not mind doing so if this somehow suits its needs.

A friendly AI will, by definition, be friendly towards mankind, and is likely to want to solve our problems, provide us with upgraded bodies and minds, and run our world for us.

The latter is something we are in dire need of, as we obviously can't run our world ourselves. We never could do it very efficiently, and because of our exponentially changing world, we will continue to have more and more problems with it in the future.

-

Q. What happens after the Singularity?

A. Nobody knows, because our model of the future breaks down at the point in time where the Singularity takes place.

An SAI is by definition more intelligent than any human. Not only will it be more intelligent, it will be running on hardware that is quite a lot faster than our own human brains. Example: a cubic inch of nanocircuitry would be a million times faster than one human brain. An SAI running on a cube like that would subjectively experience one million seconds (roughly a week and a half) in the timespan that we perceive as one second. In that time, the SAI can recursively improve upon itself and grow ever faster and more intelligent.
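The subjective-time arithmetic, for anyone who wants to play with the speedup factor (the million-fold figure is the assumption from the example above):

```python
speedup = 1_000_000        # assumed: nanocircuitry a million times faster
seconds_per_day = 86_400
days_per_year = 365.25

# Subjective time experienced per one second of our time:
print(f"{speedup / seconds_per_day:.1f} subjective days per second")  # ~11.6

# And per one of our days:
print(f"{speedup / days_per_year:,.0f} subjective years per day")     # ~2,738
```

At that pace, recursive self-improvement compounds almost instantly.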

This constitutes an intelligence explosion. Intelligence goes off into theoretical infinity from hereon. Basically, you are talking about the creation of (a) God. The amount of intelligence will, from a human viewpoint, not be distinguishable from infinity.

Assuming the SAI is a friendly one, we can safely assume that it would be a triviality for an SAI like that to provide us with everything we might desire. The most radical of which will be an intelligence upgrade and a consciousness expansion for ourselves. After all... why stick to human hardware when you can merge with your own technology and have a much nicer brain than the one you have now?

Think of it: you could get rid of all those pesky emotions such as jealousy, anger, and vengefulness. You could replace them with much nicer emotions, ones we do not currently even know exist. You could get rid of horrible brain diseases, such as Parkinson's, schizophrenia, Alzheimer's, and religious insanity, by allowing Singularity technology to replace your brain with advanced nanotechnology (or picotechnology, or maybe even femtotechnology??) in situ. Many people agree that uploading your consciousness to a virtual environment will also be possible, thus freeing oneself from the drudgery of the slow, physical world.

But this is only the tip of the iceberg. Nobody knows what it is like to have his/her consciousness expanded and his/her intelligence upgraded.

Needless to say, the world would be in for quite a change. But we cannot know what it will be like, since nobody has ever gone through it. Most well-read, informed people *do* realize, however, that it is a noble goal worth fighting for.

It must be achieved. It is unethical to *not* fight for it.

There are too many people suffering from all sorts of things in this world. It is not necessary. We can all live rich and meaningful lives, free from pain, agony, and torment, physically as well as mentally.

-

Q. Is creation of SAI the only road to the Singularity?

A. No. One could imagine a scenario where human beings upgrade themselves, gradually merging with their own technology more and more. This way, we would have a human being turned into a superior intelligence.

Some people prefer this strategy to the creation of an AI, because an artificial intelligence just might turn out not to be friendly. This means it would be either evil or neutral. Both are equally unacceptable.

When upgrading a human intelligence, however, we could at least pick one we know to be a friendly guy. Friendly superior intelligence would then be more or less guaranteed.

-

Q. Is it likely that the Singularity will be initiated by friendly SAI?

A. Probably. So far, the forces of evil (terrorists) have always been outfunded by the forces of good (scientists trying to increase John Doe's quality of life). Also, scientists work in a peer-reviewed environment, unlike terrorists. Terrorists do not have to put their inventions through the FDA.

Technology is itself inherently neutral. It's how it's used that makes all the difference. Some people claimed that the Internet could never exist, because it would be destroyed by viruses. However, viruses are software technology. And software technology can also be used to build virus scanners. And indeed, this is exactly what has happened. There is, of course, the occasional virus outbreak, doing millions of dollars in damage. But overall, we can safely say that the Internet is alive and well thanks to sophisticated virus scanners, which have clearly won out over the viruses.

The same will likely be true for the creation of AI. It *is* important, however, that friendly SAI is created *before* anything else. Once an evil or neutral SAI exists, it will be as good as impossible for anybody to create a friendly SAI to counter it.

-

Q. What's the difference between AI and SAI?

A. AI = Artificial Intelligence. SAI = Superior Artificial Intelligence.

AI can be used to refer to simple 'intelligent' systems such as chess computers, as well as to truly, generally intelligent beings that may or may not possess consciousness.

SAI refers to nothing other than an AI that has an intellect superior to that of human beings. Such an AI will likely claim to be conscious, and we will likely believe it, because we have no reason to assume that it is not.

There is, of course, no objective way to measure the existence of consciousness, since it is a subjective experience.

-

Q. What exactly will make an SAI so superior?

A. The idea is that a machine intelligence has the advantages of both humans and machines.

Imagine an AI that is intelligent because it has a neural network that is wired up in a very sophisticated manner, so that intelligence arises from it as it does in our own human brains.

Add to that the fact that the neurons of this AI will run at blazing speeds. The electronic hardware we have today is already millions of times faster than our neurons, which fire at a slow rate of at most 200 times per second. The future hardware that will be used to build AI will leave today's hardware in the dust. An AI will likely think *zillions* of times faster than any other intelligent entity ever has.

Add to that the fact that machines have perfect memory, and (theoretically) endless amounts of it. Where human beings start to have serious problems memorizing a list of 10 numbers, a machine can easily retrieve gigabytes of data with 100% accuracy, and at rapid speeds.

Add to that the fact that machines can share knowledge. Where every human being needs to go through the same painstaking process of learning how to walk, talk, calculate, and tie his/her shoelaces, machines can simply share their knowledge virtually instantaneously. For instance, you could train a speech recognition program to get better at its task. This program, and its database of newly acquired knowledge, can then easily be shared with any computer in the world. Any computer running this program can then recognize speech as well as the computer that actually went through the learning process (see the sketch at the end of this answer). AIs will be able to make copies of themselves, and turn themselves into groups of intelligent entities, all working together on cracking the next big problem.

Add to that the fact that AIs, once smart enough, will be able to understand their own brains, just like we humans can with ours. However, it doesn't stop there. AIs will be built in such a way that they can rewire their own brains, thereby recursively improving upon themselves. We humans are quite limited in changing our own brains. When we want to acquire a new skill, such as programming or playing guitar, we have to work at it for years. All that practice usually results in a few minor changes in how our neurons are wired up. All in all, it's a very slow process for us humans. Machines will not have these limitations.

And that's how an AI can easily turn into an SAI.
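The knowledge-sharing point from a few paragraphs up, as a sketch: once a learned skill is just data, copying it to a thousand machines is instantaneous. The 'trained model' below is a stand-in dictionary, not a real speech recognizer.

```python
import copy

# Stand-in for a trained model: learned knowledge is just data.
trained_recognizer = {
    "skill": "speech recognition",
    "weights": [0.12, 0.87, 0.45],   # placeholder learned parameters
    "accuracy": 0.97,
}

# A human must relearn a skill from scratch; machines just copy it.
fleet = [copy.deepcopy(trained_recognizer) for _ in range(1000)]

print(f"{len(fleet)} machines now share the '{fleet[0]['skill']}' skill "
      f"at {fleet[0]['accuracy']:.0%} accuracy")
```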

-

Q. Today's AIs, such as chess computers, are just programs that have nothing to do with general AI. Therefore, a conscious, general AI can never exist, can it?

A. That line of reasoning is demonstrably flawed.

This argument is of the form "we don't have X today, therefore we shall never have X". Try going back to the early sixties and claiming that man will never walk on the moon. Try going back to 1900 and claiming that man will never fly.

We all know how that went, don't we?

The observation that today's 'intelligences' are merely logical algorithms is actually quite a sharp one. But it is a non-issue. At this moment, the reverse engineering of the human brain is in full progress, and it is expected to be done around 2030. The human brain has several dozen regions, of which about a dozen have been fully reverse engineered. The workings of these brain regions have been successfully reproduced with the help of computer algorithms.

What we have already demonstrated is that we have the ability to understand our own intelligence. However, we are not yet at the point where we can use that knowledge to build generally intelligent machines with sophisticated neural networks.

Everything points to a scenario where conscious, intelligent, cognitive machines will indeed be built.

And when a machine like that loads a Deep Blue chess program into its knowledge base, thereby combining cognitive powers with CPU powers, hold on to your socks...

-

Q. What can I do to help getting the Singularity off the ground?

A. Eliezer Yudkowsky of the Singularity Institute has compiled a nice list of 10 ways to help the Singularity. It can be found here:

http://www.singinst.org/action/waystohelp.html

-

Q. In the past, people have made other predictions, such as flying cars, the leisure society, and SAI, and those did not come to pass; so neither will these SAI-by-2030 predictions.

A. That's not a question, but I'm going to answer it anyway.

The reason why some researchers started making far-out predictions was that computers showed great potential right after being invented. In the fifties, computers were already proving mathematical theorems and beating people at chess (just the ones that were not so good, though). Researchers back then had no idea that exponential extrapolation was important if you wanted to accurately predict the technological future. They probably didn't have much data from the previous decades to do it with, anyway.

Exponential growth was still going slowly, and researchers were still using their intuitions to linearly extrapolate into the future. Many of them figured that a computer would beat grandmaster chess players by ~1970. Many predicted flying cars, SAI, and the leisure society by 2000.

None of these predictions panned out. Not in the correct timeframe, anyway.

We can draw a valuable lesson from this: when somebody is trying to sell you his predictions and he does not have the data and extrapolations to back them up, you should not take this person seriously.

On a sidenote... a chess computer eventually *did* beat a grandmaster; it just happened decades later.

Flying cars exist (they're called SkyCars), they're just not mainstream. The creator of the SkyCar is currently testing prototypes. He predicts we'll start to see the first flying cars about 15 years from now. However, police, military, and rescue workers will have them sooner. There is a movie clip on the Internet demonstrating a rescue operation.

We don't have SAI yet, but it's starting to look more and more like we will in only a few decades.

And that leisure society that we were promised so many decades ago might eventually pan out as well. Think about what happens when robots start to take over more and more jobs, automating the entire economy. Think about what happens when molecular manufacturing brings us the ability to produce *any* product at the price of a few cents per pound. Will we still need the economy in its current form?

See the links at the bottom of this FAQ for more information on Robotic Nation and Molecular Manufacturing.

-

Q. I read something about uploading oneself to a virtual environment... can consciousness really be uploaded, and will that really be me?

A. Many people have thought about this. And nobody has a conclusive answer.

If you were to upload your mind in the sense that we upload files, a copy would be made. Would this be you? Most people think not. The real you (that is: your unique stream of consciousness) would still be in your flesh-and-blood body. The copied you would share all your memories and all your knowledge. He would probably *feel* like he was you. Yet he's there in the computer, and you are still sitting outside it.

However, if we were to use advanced nanotechnology and advanced knowledge of brains/intelligence to gradually replace your biological brain with a different type of nanotechnological brain, you would remain yourself all through the process.

This process is not all that different from what's going on right now: your molecular particles are gradually being replaced all the time. Every few years, we consist of a completely different set of molecular particles than we did a few years back. Yet we all agree that we remain ourselves from decade to decade. Our 'selves' are obviously not tied to specific matter. Rather, it is the 'pattern of our mind' that changes, and only gradually... slowly enough for us to feel that we remain ourselves through time.

Your shiny new nanotech brain could potentially run its very own virtual environment, no longer needing input from senses that observe the outside world. It could also have an interface to a network, say the Internet, so you could migrate there and meet other people in their own virtual environments.

So basically... you would still be you, and you'd also be 'uploaded' to a virtual environment.

This all seems very far out, but remember... I'm assuming advanced nanotechnology and advanced knowledge of brains and intelligence. A scenario like this will never happen without these two technologies, anyway.

-

Q. What links can you provide me with that I will probably find interesting to read?

A. Anybody who is new to the whole Singularity scenario would do well to learn the basics by reading Ray Kurzweil's excellent (and legendary) "The Law Of Accelerating Returns". This essay explains exponential extrapolations in detail. You won't be the same person after reading these insights for the first time.

http://www.kurzweilai.net/articles/art0134.html?printable=1


Another source of valuable information is the Singularity Institute For Artificial Intelligence. A nice list of introductory material can be found at the following link. It covers friendly AI, seed AI, nanotechnology, general intelligence, the impact of the Singularity, etc.

http://www.singinst.org/intro/


For all of you who wish to delve into the murky depths, there are the writings of one self-taught genius named Eliezer Yudkowsky, who also happens to be the driving force behind the Singularity Institute. He has written extensively on AI and the creation thereof. Among his works are titles such as "Creating Friendly AI" and "General Intelligence and Seed AI". Both are heavy material and will likely require multiple reading sessions to assimilate.

http://www.yudkowsky.net/beyond.html


An essay by Yudkowsky that is more comprehensible to the layman is "Staring into the Singularity". This document does an excellent job of explaining what exactly the impact of the Singularity will be (insofar as it can be comprehended by mere human intellect, of course).

http://www.yudkowsky.net/singularity.html


Nanotechnology can, and probably will, provide high-resolution tools that can be used for brain scanning and reverse engineering. Nanotechnology can also be used to build suitable substrates (dynamically configurable neural network CPUs) for AIs to run on. The way things are looking now, nanotechnology will bring us full-blown molecular manufacturing by 2015. This means we will be able to design any product at the molecular level, and have it produced by a NanoFactory at very low cost.

http://www.crnano.org/Bridges.htm

http://crnano.typepad.com


The Singularity scenario is not the only thing to keep in mind when thinking about the impact of AI. The road to the Singularity, before it hits real fast and real hard, will be one in which machines become gradually smarter each year. Therefore, there are not only implications for the longer term (2030), but also for the short to mid term (2015). Marshall Brain has done an excellent job of analyzing what will likely happen in the not too distant future, when machines get smarter.

http://www.marshallbrain.com/robotic-nation.htm

http://roboticnation.blogspot.com/


After reading this FAQ, you will probably want to see for yourself how researchers are treating brains like circuitry, thanks to our increasing knowledge of the brain. Here is a link to an article about the rat hippocampus prosthesis.

http://www.newscientist.com/article.ns?id=dn3488

-


My First Blogpost

Well, well, well... my first Blog. :)

After having played with the idea of blogging for a long time, I have finally decided to actually do it.

I've got some insights on technology, and how it will impact our world. And I'm gonna share them with all of you. I hope I will manage to bring some enlightenment to a few people, and that there will be many interesting conversations.

Oh, and lots of stimulating feedback, of course. ;)

Greetz,

Jay
