Monday, July 18, 2005

Singularity FAQ for Dummies

To start out with, I am now posting a Singularity FAQ for dummies that I have written myself. I've dubbed it version 0.5 for now. I'm counting on lots of additional questions from all of you, which I will include so I can upgrade the version number. I'm sure there are spelling and grammar errors in here, but hey... it's only version 0.5, right?

The reason I wrote this is that when I read posts on forums or in news threads related to artificial intelligence, I see lots of uninformed opinions. This is not always the posters' fault. Very often, an interviewee will tell an interviewer "By year xxxx, we will be able to upload our consciousness to virtual reality," without giving any explanation of how this might be achieved. Posters then naturally assume that this is nonsense, and that it will never happen.

Uninformed opinions like that are mostly fueled by ignorance. I felt this was unnecessary, since a few simple insights can lead a person to adopt a completely different set of views on a subject.

From now on, ignorance on the topic of the Singularity will be a thing of the past, because there's a FAQ available now. ;)

Enjoy. :)

0.5 Initial Version
0.6 Questions added, answers expanded, spelling corrected.

Singularity FAQ for Dummies, version 0.6
by Jan-Willem Bats

Last update: July 21st, 2005

Q. What is the Singularity?

A. The Singularity is defined as the point in time when Superior Artificial Intelligence (SAI) is created. An SAI can, by definition, think thoughts that human intelligence cannot. This, then, is the point where our model of the future breaks down. We cannot possibly predict what an SAI would come up with at 'the other end' of the Singularity, because we are not superintelligent ourselves.

The Singularity (with a capital 'S') is a term borrowed from the singularity (small 's') at the centre of a black hole, where our current model of physics breaks down.


Q. Is the Singularity purely a science fictional concept?

A. No. Even though science fiction writers have played with the concept of a technological Singularity, this does not mean the Singularity will forever remain in the realm of sci-fi.

The Singularity can be achieved by creating SAI. Many believe it is possible to do exactly that in real life, by creating an AI that can learn about arbitrary subjects, just like humans can. In addition to learning capabilities, the AI must also be able to improve upon itself, through understanding of its own configuration and the ability to alter itself.


Q. How can an AI be created?

A. Creating AI will require two things:

1. Hardware: a substrate for intelligence to run on. The hardware must be sufficiently fast. It will most likely have to be some kind of dynamically configurable neural network, just like our own brains.

2. Software: the actual intelligence itself. The software must consist of algorithms that are truly, generally intelligent (in other words: not just simulating intelligence in a very narrow field, such as chess). This is by far the hardest part of creating SAI, because it is so hard to pin down what exactly intelligence is.


Q. When will we have the hardware required to create an SAI?

A. Assuming that the hardware needs to be capable of at least as many operations per second (OPS) as a human brain: 2020.

If the human brain were to fire all its neurons for one second, it would fire approximately 10^14 times. It is obviously possible to run an intelligence on a machine of this speed: we *are* those machines. Therefore, it is safe to assume that 10^14 OPS is fast enough to run an AI on.

The number of OPS that our hardware can perform is growing exponentially. It has been doing so for decades, and it is expected to continue for decades more. Extrapolating hardware speed exponentially into the future shows that we will have $1,000 CPUs capable of 10^14 OPS in 2020. Note that some supercomputers today already perform this number of OPS, or more. The PlayStation 3 CPU runs at 10^12 OPS, which is 1% of the magical 10^14.
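This kind of extrapolation can be sketched in a few lines of code. The 2005 baseline of roughly 10^9 OPS per $1,000 and the ~0.9-year doubling time below are illustrative assumptions of mine, not measured figures; the point is only to show how the trend line is projected forward.

```python
import math

# Illustrative sketch of the exponential extrapolation described above.
# Assumed baseline: ~10^9 OPS per $1,000 around 2005, doubling roughly
# every 0.9 years (assumptions for illustration, not measured data).

def ops_per_1000_dollars(year, base_year=2005, base_ops=1e9, doubling_time=0.9):
    """OPS available per $1,000, assuming steady exponential growth."""
    return base_ops * 2 ** ((year - base_year) / doubling_time)

def year_reaching(target_ops, base_year=2005, base_ops=1e9, doubling_time=0.9):
    """Year in which the trend line crosses a given number of OPS."""
    return base_year + doubling_time * math.log2(target_ops / base_ops)

print(round(year_reaching(1e14)))  # 2020 under these assumed parameters
```

Changing the assumed doubling time shifts the crossing year, which is exactly why the predicted date has moved around over the years.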

Most likely, speed alone won't be enough. As mentioned earlier in this FAQ, the hardware will probably have to be a dynamically configurable neural network in three dimensions. This is a far cry from today's CPUs, which have only one layer of circuitry and are therefore two-dimensional.

Nanotechnology, however, allows for building CPUs in three dimensions. At the molecular level, cooling hardware is a lot easier, so it will be possible to stack circuitry layers on top of each other. It will also be possible to build neural networks using this technology.

Sidenote: extrapolations haven't always pointed to 2020 as the year when the numbers (brain OPS and CPU OPS) match. In the past, extrapolations showed it would be 2035. A few years later, people noticed CPU efficiency was actually improving a bit faster than 'plain' exponential growth, so they corrected the figure to 2030. This happened a few more times, and current extrapolations point to 2020.

It is not uncommon for extrapolations to prove conservative in retrospect. Consensus predictions have shifted forward (that is: closer to the present) before. In all likelihood, they will continue to do so in the future.

It will probably be before 2020.


Q. When will we have the software required to create an SAI?

A. Whenever we figure out how intelligence works by reverse engineering our own brains.

Most people think, or naturally assume, that our own intelligence is beyond us. This is a misconception: we already know that it is not.

The reverse engineering of the human brain is a project that is already well underway. Several regions of the human brain are already thoroughly understood, and their functions have been replicated with algorithms written in standard program code. Algorithms heavily inspired by how our own brains process visual stimuli are already used in some video cameras. Speech recognition technologies use algorithms that researchers came up with after figuring out how sound processing occurs in the human brain.

Jeff Hawkins, a well-respected AI researcher, has stated that once the workings of a certain brain region are understood, it is entirely possible to describe those workings with mathematical formulas.

Already, hardware is beginning to merge with our brains. Some people who suffer from diseases such as Parkinson's or epilepsy have chips in their brains to correct irregular brain functions.

Researchers have also created a CPU that functions like a rat's hippocampus. They did this by providing input to a slice of actual rat hippocampus tissue and analysing the output. This input/output behaviour has been reproduced successfully in a CPU. The silicon implant performs the same processes as the (damaged) part of the brain it replaces.

It is (and has been for quite a while) clear that understanding intelligence is not beyond us, contrary to popular belief. So when will we have enough understanding of our intelligence to reproduce it in order to build an AI?

Extrapolations show that it will be possible to see what is going on inside a human brain, in complete detail and in real time, by 2015. Extrapolations also show that the reverse engineering of the human brain will be complete by 2030.

Naturally, we won't have to completely understand the human brain before we can use our knowledge to build AI. So we will probably build the software well before 2030.


Q. Where did you get this 10^14 number?

A. The human brain has 100 billion (100 * 10^9 = 10^11) neurons. Each neuron has, on average, 1,000 (10^3) connections. Neurons fire at most 200 (2 * 10^2) times per second.

The calculation then yields 10^11 * 10^3 * (2 * 10^2) = 2 * 10^16.

Careful readers will have noticed that this calculation yields a number two orders of magnitude greater than 10^14. This is because conservatively high numbers were used in the calculation.

This implies that it is highly unlikely that the human brain performs more than 2 * 10^16 OPS, and highly likely that it actually performs far *less* than that.

The 10^14 number is what most people consider more accurate. Even if this turns out to be the wrong order of magnitude in retrospect, it won't matter much: it will only shift the predictions by a few years.
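The back-of-the-envelope estimate above can be spelled out explicitly; this is just the FAQ's own arithmetic, written as code:

```python
# The upper-bound estimate from the answer above, spelled out.
neurons = 1e11                 # ~100 billion neurons
connections_per_neuron = 1e3   # ~1,000 connections each
max_firing_rate = 2e2          # at most ~200 firings per second

upper_bound_ops = neurons * connections_per_neuron * max_firing_rate
print(f"{upper_bound_ops:.0e}")  # 2e+16, two orders of magnitude above 10^14
```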


Q. Why do you rely on extrapolations so much?

A. Many people would claim that it is impossible to tell the future. I would claim that these people are dead wrong.

There is one way to tell the future, but it only applies to technological progress. Here's how it works:

When building technology, you are actually building tools that you can use to build the next generation of more advanced tools. That generation of more advanced tools can then be used to create a generation of tools that are yet more advanced.

This is called a positive feedback loop. Any process with a positive feedback loop (such as building technology intelligently, as humanity is doing right now) is inherently an exponentially accelerating process. This is why progress keeps going ever faster, and why our society is changing ever faster.

This exponentially accelerating process has been in effect since the beginning of life on Earth. If you were to cram the complete evolutionary process into one hour, the upright Homo sapiens (modern man) would not show up until the last millisecond. That is the power of exponential growth.

Why, then, are exponential extrapolations so extremely valuable?

It is because these extrapolations have, in the past few decades, proven very accurate at predicting our technological future. They have, for example, been used to predict when the Internet would become mainstream. For many people, it seemed to pop out of nowhere in the middle of the nineties. Not so: the Internet has been around since the late sixties, and its number of nodes has been doubling consistently ever since. If you had known about it (and some people did), you could have plotted this trend on a graph. It would show up as an exponential curve that hits its skyrocketing phase around the mid-nineties.

Extrapolations have also been used to accurately predict when a computer would beat a human being at chess. Chess computers were consistently improving by 45 rating points per year, and Ray Kurzweil was smart enough to notice the trend. He predicted 1998 to be the year. It turned out to be 1997. Not bad.
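The chess prediction is a simple linear extrapolation. The 45-points-per-year trend comes from the text; the 1985 baseline rating of ~2200 and the ~2800 world-champion level are assumed illustrative figures I've filled in to make the arithmetic concrete:

```python
# A linear-extrapolation sketch of the chess prediction mentioned above.
# Baseline rating (~2200 in 1985) and target level (~2800) are assumed
# illustrative values; the 45-points-per-year trend is from the text.

def predicted_crossing_year(base_year, base_rating, target_rating,
                            points_per_year=45):
    """Year at which a linearly improving rating reaches the target level."""
    return base_year + (target_rating - base_rating) / points_per_year

print(round(predicted_crossing_year(1985, 2200, 2800)))  # 1998
```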

Extrapolations have been used to accurately predict many things: CPU dimensions, mobile phone usage, Internet growth, broadband uptake, robotic intelligence, and a myriad of others.

There are two reasons why it is a good idea to rely on extrapolations:

The first is that they have already *proven* accurate in previous decades. Technological future prediction is a proven concept: you simply look at the given data and extrapolate (usually exponentially) from there.

The second is that human intuition only works well for predicting what linear processes will look like in the future. Humans always overestimate what can be done in the short term, and grossly underestimate what can be done in the long term.

Therefore, a human being would be wise to throw his intuition overboard when it comes to predicting our technological future. Extrapolations have factually been way (waaaaaaay) more accurate than our intuitions.


Q. Why will hardware, on which to run AI, most likely have to be three-dimensional?

A. Because we are figuring out how intelligence works by reverse engineering existing intelligent machines that just so happen to exist in three dimensions: our brains.

Since we are learning how intelligence works from three-dimensional configurable neural networks, it is very likely that we will end up creating AI on such a substrate.


Q. Is it possible to run AI on two-dimensional hardware?

A. It has been proven that any neural network (including ones that exist in three dimensions) can be replicated by an algorithm running on simple two-dimensional hardware.

So if we were to map the neural network of our own brains into such an algorithm and run it on a two-dimensional CPU, would this result in a conscious, intelligent entity?

Some seem to think so, some seem to think not. But is it really relevant?

If it is obviously possible to build AI using three dimensions (and the mainstream CPUs of the near future are likely to be three-dimensional), why even bother trying to get intelligence to run on two-dimensional platforms?

Why would it be important to be able to run AI on one single two-dimensional CPU?

Besides... one could simply take a whole lot of two-dimensional CPUs and configure them to act like neurons in a neural network. If the CPUs were connected to each other to form a neural network with a configuration that allows for intelligence, intelligence (and perhaps consciousness) would probably arise from the network as a whole. Because a chip to replace a rat hippocampus already exists, I think it's safe to assume that a chip could also replace a single neuron.
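The claim that any neural network can be replicated by ordinary sequential code can be illustrated with a minimal sketch. The weights below are arbitrary values I picked so that a three-neuron network computes XOR; nothing about them comes from the FAQ itself:

```python
import math

# A minimal sketch: a neural network, whatever its physical geometry,
# can be simulated step by step as a plain sequential algorithm.
# The weights are arbitrary illustrative values chosen to compute XOR.

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(x):
    """Two hidden neurons feeding one output neuron (an XOR topology)."""
    h1 = neuron(x, [6.0, 6.0], -2.5)     # fires if either input is on
    h2 = neuron(x, [-6.0, -6.0], 8.5)    # fires unless both inputs are on
    return neuron([h1, h2], [8.0, 8.0], -12.0)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(tiny_network(x)))  # XOR of the two inputs
```

Scaling this loop up to billions of neurons is a question of speed, not of principle, which is exactly the point the answer above makes.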


Q. Do you think comparing 'CPU operations' to 'human brain neuron firing' makes for a good comparison?

A. Absolutely. They are both very basic elements of complex machines: modern microprocessors and human brains respectively.

When a neuron fires, it transfers an amount of information comparable to 5 bits to another neuron. The receiving neuron treats this as input, does some computation, and gives output. And so on.

When a CPU loads an instruction as input (which is basically just a number), it performs some computations and then gives output. Just like a neuron. A CPU is, of course, just one CPU. But it is easy to see how a network of CPUs could pass information to each other just like a network of neurons would.


Q. Won't an (S)AI take over the world and enslave humanity, just like in the movies?

A. Movies are optimized for cinematographic coolness. They have nothing to do with reality, and should therefore not be viewed as history books from the future.

When writers write science fiction, they realize that the people who will read and/or view their creations will want to be able to identify with the characters. For this reason, in series like Star Trek and movies like Star Wars, you always see unaugmented humans living in an extremely high-tech world. And more often than not, the writers will have these unaugmented humans battling some evil technological monster (think Terminator). Otherwise, there would not be much of a story.

But an AI would not simply decide to take over the world and enslave humanity, unless we painstakingly program it to do so. Wanting to take over the world implies a longing for power, which is a very human trait. Because we all have this trait (thanks again, Mother Nature), some of us automatically project it onto any not-yet-invented AI.

An AI will do nothing more than try to find a way to complete the goals it has in mind, just like a human being does. But where human beings feel the need to reproduce as much as possible by attaining as high a social status as possible, an AI does not necessarily have to possess these desires. It won't, unless we build it that way.

The total brain configuration space is *huge*. Our human brain configuration is a mere drop in that universe. It is entirely possible for a completely different type of brain, with a completely different type of consciousness and a completely different set of goals and desires, to exist.

If we know what's best for us, we will build an SAI that is friendly to human beings. It has to *want* to be friendly, or (as Eliezer Yudkowsky of the Singularity Institute so eloquently puts it) we've already lost.

An evil AI will, by definition, be evil towards mankind, and is likely to provide us with our own personal hell for all eternity.

A neutral AI will, by definition, not care about mankind. It will not set out to destroy us, but it will also not mind doing so if this somehow suits its needs.

A friendly AI will, by definition, be friendly towards mankind, and is likely to want to solve our problems, provide us with upgraded bodies and minds, and run our world for us.

The latter is something we are in dire need of, since we obviously can't run our world ourselves. We never could do it very efficiently, and because of our exponentially changing world, we will have more and more problems with it in the future.


Q. What happens after the Singularity?

A. Nobody knows, because our model of the future breaks down at the point in time where the Singularity takes place.

An SAI is by definition more intelligent than any human. Not only will it be more intelligent, it will be running on hardware quite a lot faster than our own human brains. Example: a cubic inch of nanocircuitry would be a million times faster than one human brain. An SAI running on a cube like that would subjectively experience one million seconds (roughly eleven and a half days) in the timespan that we perceive as one second. In that time, the SAI can recursively improve upon itself and grow ever faster and more intelligent.

This constitutes an intelligence explosion. Intelligence goes off toward theoretical infinity from here on. Basically, you are talking about the creation of (a) God. The amount of intelligence will, from a human viewpoint, be indistinguishable from infinity.

Assuming the SAI is a friendly one, we can safely assume that it would be trivial for an SAI like that to provide us with everything we might desire. The most radical of these would be an intelligence upgrade and a consciousness expansion for ourselves. After all... why stick to human hardware when you can merge with your own technology and have a much nicer brain than the one you have now?

Think of it: you could get rid of all those pesky emotions such as jealousy, anger, and vengefulness. You could replace them with much nicer emotions, ones we do not currently even know exist. You could get rid of horrible brain diseases, such as Parkinson's, schizophrenia, Alzheimer's, and religious insanity, by allowing Singularity technology to replace your brain with advanced nanotechnology (or picotechnology, or maybe even femtotechnology??) in situ. Many people agree that uploading your consciousness to a virtual environment will also be possible, thus freeing oneself from the drudgery of the slow, physical world.

But this is only the tip of the iceberg. Nobody knows what it is like to have his or her consciousness expanded and intelligence upgraded.

Needless to say, the world would be in for quite a change. But we cannot know what it will be like, since nobody has ever gone through it. Most well-read, informed people *do* realize, however, that it is a noble goal worth fighting for.

It must be achieved. It is unethical to *not* fight for it.

There are too many people suffering from all sorts of things in this world. It is not necessary. We can all live rich and meaningful lives, free from pain, agony, and torment, physically as well as mentally.


Q. Is creation of SAI the only road to the Singularity?

A. No. One could imagine a scenario where human beings upgrade themselves, gradually merging with their own technology more and more. This way, we would have a human being turned into a superior intelligence.

Some people prefer this strategy to the creation of an AI, because the artificial intelligence just might turn out to be not friendly. This means it would be either evil or neutral. Both are equally unacceptable.

When upgrading a human intelligence, however, we could at least pick one we know to be friendly. Friendly superior intelligence would thus be more or less guaranteed.


Q. Is it likely that the Singularity will be initiated by friendly SAI?

A. Probably. So far, the forces of evil (terrorists) have always been outfunded by the forces of good (scientists trying to increase John Doe's quality of life). Also, scientists work in a peer-reviewed environment, in contrast to terrorists. Terrorists do not have to put their inventions through the FDA.

Technology itself is inherently neutral; it's how it's used that makes all the difference. Some people claimed that the Internet could never last, because it would be destroyed by viruses. But viruses are software technology, and software technology can also be used to build virus scanners. And indeed, this is exactly what has happened. There is, of course, the occasional virus outbreak, doing millions of dollars in damage. But overall, we can safely say that the Internet is alive and well thanks to sophisticated virus scanners, which have clearly won out over the viruses.

The same will likely be true for the creation of AI. It *is* important, however, that friendly SAI be created *before* anything else. Once an SAI exists, it will be as good as impossible for anybody to create a friendly SAI to counter an evil or neutral one.


Q. What's the difference between AI and SAI?

A. AI = Artificial Intelligence. SAI = Superior Artificial Intelligence.

AI can refer to simple 'intelligent' systems such as chess computers, as well as to truly, generally intelligent beings that may or may not possess consciousness.

SAI refers to nothing other than an AI with an intellect superior to that of human beings. Such an AI will likely claim to be conscious, and we will likely believe it, because we have no reason to assume it is not.

There is, of course, no objective way to measure the existence of consciousness, since it is a subjective experience.


Q. What exactly will make an SAI so superior?

A. The idea is that a machine intelligence has the advantages of both humans and machines.

Imagine an AI that is intelligent because it has a neural network that is wired up in a very sophisticated manner, so that intelligence arises from it as it does in our own human brains.

Add to that the fact that the neurons of this AI will run at blazing speeds. The electronic hardware we have today is already millions of times faster than our neurons, which fire at a slow rate of 200 times per second. The future hardware used to build AI will leave today's hardware in the dust. An AI will likely think *zillions* of times faster than any other intelligent entity ever has.

Add to that the fact that machines have perfect memory, and (theoretically) endless amounts of it. Where human beings have serious problems memorizing a list of 10 numbers, a machine can easily retrieve gigabytes of data with 100% accuracy, and at rapid speed.

Add to that the fact that machines can share knowledge. Where every human being needs to go through the same painstaking process of learning how to walk, talk, calculate, and tie his or her shoelaces, machines can simply share their knowledge virtually instantaneously. For instance, you could train a speech recognition program to get better at its task. This program, with its database of newly acquired knowledge, can then easily be shared with any computer in the world. Any computer running it can then recognize speech as well as the computer that actually went through the learning process. AIs will be able to make copies of themselves and turn themselves into groups of intelligent entities, all working together on cracking the next big problem.

Add to that the fact that AIs, once smart enough, will be able to understand their own brains, just like we humans can with ours. But it doesn't stop there. AIs will be built in such a way that they can rewire their own brains, thereby recursively improving upon themselves. We humans are quite limited in changing our own brains. When we want to acquire a new skill, such as programming or playing guitar, we have to practice it for years. All that practice usually results in a few minor changes in how our neurons are wired up. All in all, it's a very slow process for us humans. Machines will not have these limitations.

And that's how an AI can easily turn into an SAI.


Q. Today's AIs, such as chess computers, are just programs that have nothing to do with general AI. Therefore, a conscious, general AI can never exist, can it?

A. That line of reasoning is demonstrably flawed.

This argument is of the form "we don't have X today, therefore we shall never have X". Try going back to the early sixties and claim that man will never walk on the moon. Try going back to 1900 and claim that man will never fly.

We all know how that went, don't we?

The observation that today's 'intelligences' are merely logical algorithms is actually quite sharp. But it is a non-issue. At this moment, the reverse engineering of the human brain is in full progress, and it is expected to be done around 2030. The human brain has several dozen regions, of which about a dozen have been fully reverse engineered. The workings of these brain regions have been successfully reproduced with computer algorithms.

What we have already demonstrated is that we have the ability to understand our own intelligence. We are just not yet at the point where we can use that knowledge to build generally intelligent machines with sophisticated neural networks.

Everything points to a scenario where conscious, intelligent, cognitive machines will indeed be built.

And when a machine like that loads a Deep Blue chess program into its knowledge base, thereby combining cognitive powers with CPU powers, hold on to your socks...


Q. What can I do to help get the Singularity off the ground?

A. Eliezer Yudkowsky of the Singularity Institute has compiled a nice list of 10 ways to help the Singularity. It can be found here:


Q. In the past, people have made other predictions such as flying cars, leisure society, SAI, and they did not come to pass, so neither will these SAI-by-2030 predictions.

A. That's not a question, but I'm going to answer it anyway.

The reason some researchers started making far-out predictions was that computers showed great potential right after being invented. In the fifties, computers were already proving mathematical theorems and beating people at chess (only the not-so-good players, though). Researchers back then had no idea that exponential extrapolation was important for accurately predicting the technological future. They probably didn't have much data from previous decades to do it with, anyway.

Exponential growth was still going slowly, and researchers were still using their intuitions to extrapolate linearly into the future. Many of them figured that a computer would beat grandmaster chess players by about 1970. Many predicted flying cars, SAI, and the leisure society by 2000.

None of these predictions panned out. Not in the correct timeframe, anyway.

We can draw a valuable lesson from this: when somebody is trying to sell you his predictions, and he does not have the data and the extrapolations to back it up, you should not take this person seriously.

On a side note... a chess computer eventually *did* beat a grandmaster; it just happened decades later.

Flying cars exist (they're called SkyCars), they're just not mainstream. The creator of the SkyCar is currently testing prototypes. He predicts we'll start to see the first flying cars about 15 years from now, though police, military, and rescue workers will have them sooner. There is a movie clip on the Internet demonstrating a rescue operation.

We don't have SAI yet, but it's starting to look more and more like we will in only a few decades.

And the leisure society that we were promised so many decades ago might eventually pan out as well. Think about what happens when robots start to take over more and more jobs, automating the entire economy. Think about what happens when molecular manufacturing brings us the ability to produce *any* product at a price of a few cents per pound. Will we still need the economy in its current form?

See the links at the bottom of this FAQ for more information on Robotic Nation and Molecular Manufacturing.


Q. I read something about uploading oneself to a virtual environment... can consciousness really be uploaded, and will that really be me?

A. Many people have thought about this. And nobody has a conclusive answer.

If you were to upload your mind in the sense that we upload files, a copy would be made. Would this be you? Most people think not. The real you (that is: your unique stream of consciousness) would still be in your flesh-and-blood body. The copied you would share all your memories and all your knowledge. He would probably *feel* like he was you. Yet he's there in the computer, and you are still sitting outside it.

However, if we were to use advanced nanotechnology and advanced knowledge of brains and intelligence to gradually replace your biological brain with a different type of nanotechnological brain, you would remain yourself all through the process.

This process is not all that different from what's going on right now: your molecules are gradually being replaced all the time. Every few years, we consist of a completely different set of molecular particles than we did a few years back. Yet we all agree that we remain ourselves from decade to decade. Our 'selves' are obviously not tied to the matter itself. Rather, it is the 'pattern of our mind', which changes only gradually over time... slowly enough for us to feel that we remain ourselves through time.

Your shiny new nanotech brain could potentially run its very own virtual environment, no longer needing input from senses that observe the outside world. It could also have an interface to a network, say the Internet, so you could migrate there and meet other people in their own virtual environments.

So basically... you would still be you, and you'd also be 'uploaded' to a virtual environment.

This all seems very far out, but remember... I'm assuming advanced nanotechnology and advanced knowledge of brains and intelligence. A scenario like this will never happen without these two technologies, anyway.


Q. What links can you provide me with that I will probably find interesting to read?

A. Anybody who is new to the whole Singularity scenario would do well to learn the basics by reading Ray Kurzweil's excellent (and legendary) "The Law of Accelerating Returns". This essay explains exponential extrapolations in detail. You won't be the same person after reading these insights for the first time.

Another source of valuable information is the Singularity Institute for Artificial Intelligence. A nice list of introductory material can be found at the following link. It covers friendly AI, seed AI, nanotechnology, general intelligence, the impact of the Singularity, etc.

For those who wish to delve into the murky depths, there are the writings of the self-taught genius Eliezer Yudkowsky, who also happens to be the driving force behind the Singularity Institute. He has written extensively on AI and its creation. Among his works are titles such as "Creating Friendly AI" and "General Intelligence and Seed AI". Both are heavy material and will likely require multiple reading sessions to assimilate.

An essay by Yudkowsky that is more accessible to the layman is "Staring into the Singularity". This document does an excellent job of explaining what exactly the impact of the Singularity will be (insofar as this can be comprehended by mere human intellect, of course).

Nanotechnology can, and probably will, provide high-resolution tools for brain scanning and reverse engineering. Nanotechnology can also be used to build suitable substrates (dynamically configurable neural network CPUs) for AIs to run on. The way things are looking now, nanotechnology will, by 2015, bring us full-blown molecular manufacturing. This means we will be able to design any product at the molecular level, and have it produced by a NanoFactory at very low cost.

The Singularity scenario is not the only thing to keep in mind when thinking about the impact of AI. The run-up to the Singularity, before it hits real fast and real hard, will be one in which machines become gradually smarter each year. There are therefore implications not only for the longer term (2030), but also for the short to mid term (2015). Marshall Brain has done an excellent job of analyzing what will likely happen in the not-too-distant future as machines get smarter.

After reading this FAQ, you will probably want to see with your very own eyes how researchers are already treating brains like circuitry, thanks to our increasing knowledge of the brain. Here is a link to an article about the rat hippocampus prosthesis.




Anonymous said...

Hi. I found this FAQ from wikipedia :)

Some comments:

1) A typo - "The Singularity (with capitol 'S')" instead of "capital".
2) It would be nice to give a reference to the rat's hippocampus experiment :)

Anyway, congratulations for going on and managing to write an article which will probably help some people understand this :)

Ricardo Barreira

Jan-Willem Bats said...

Thanks for the feedback. Will do.

Anonymous said...

A common first objection to the idea of the Singularity is that exponential processes found in nature tend to really follow an S-curve. First they rise exponentially, but after a while they level off as the resources that enabled the exponential growth are exhausted.

Maybe you should add a section on why SAI isn't subject to this phenomenon.

Jan-Willem Bats said...

That is a good suggestion.

S-curves tied together form an exponential graph again.

I'll include that answer, in more detailed form, in the next version.

Thanks a lot!

Anonymous said...

Ignore Ricardo's 'capitol' comment: "capital" is correct; 'capitol' refers to governmental building(s) -- when in doubt, check Merriam-Webster ;^) - blzbob

Jan-Willem Bats said...

Yeah, at first I had typed 'capitol'. Ricardo was right to point out that it should be 'capital'. I have updated the FAQ since then.

Wayne Radinsky said...

Two other links: My friend John Smart wrote an excellent explanation of the concept of Singularity, What is the Technological Singularity? And I wrote my own explanation of the concept of Singularity at Singularity Investor.

Anonymous said...

I, for one, will welcome our SAI overlords with burnt offerings and animal sacrifices....gulp!, please don't destroy us...whimper...

Anonymous said...

I'm gonna call the first SAI by the name, "Jesus".

Anonymous said...

and then there will be viruses that attack our consciousness in the virtual world AAAAAHHH

Zeb Rice said...

There is evidence for this:

1) Moore's Law. Computers have doubled their computing power every eighteen to twenty-four months for the last forty years. Extrapolating out a few more decades gets you to SAI.

2) Such advanced processors can exist because they already exist: sitting inside our skulls. There is no reason why we can't duplicate, and improve on, nature (like we did with birds, horses, etc.)

Still, I think that there will be some leveling out eventually. I do think that we can build an SAI, but I doubt it could ever improve itself enough to do, say, 10^1,000,000 operations per second. Physical reality places limits like the speed of light (firing electrons or photons between different computer parts), the fundamental unit of time (about 10^-43 seconds), individual atoms, etc. But, I believe there is enough "room" available to build usable SAI.
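The doubling arithmetic in point 1 above can be sketched quickly. The starting figure of 10^9 ops/sec for a 2005-era desktop is an illustrative assumption; the 10^14 ops/sec target is the brain-level figure used elsewhere in this FAQ.

```python
import math

current_ops = 1e9   # assumption: rough ops/sec of a 2005 desktop CPU
brain_ops = 1e14    # brain-level figure used elsewhere in this FAQ

# Number of doublings needed to close the gap (~16.6).
doublings = math.log2(brain_ops / current_ops)

# Moore's Law doubling period: eighteen to twenty-four months.
for months in (18, 24):
    years = doublings * months / 12.0
    print(f"{months}-month doubling: ~{years:.0f} years")  # ~25 and ~33 years
```

Starting from 2005, that lands somewhere between 2030 and 2040, which is consistent with the "few more decades" in the comment.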

Anonymous said...

You state:
"We cannot possibly predict what an SAI would come up with at 'the other end' of the Singularity. We are not superintelligent ourselves."

You also state:
"But an AI will do nothing more than try to find a way to complete the goals it has in mind, just like a human being does. But where human beings feel the need to reproduce as much as possible by getting a social status as high as possible, an AI does not necessarily have to posess these desires. It won't unless we build it so."

These two concepts seem contradictory. Please clarify for me how we can predict anything about how an SAI will think about humans when we can't predict how an SAI will even think on the other side of the Singularity?

michael vassar said...

You know, Kurzweil's view of the singularity, Smart's view, Vinge's view, and Yudkowsky's view are all radically different. Also, Yudkowsky is the only one of the four to present a concept with no large and obvious logical flaws. The others may be useful for getting people used to thinking about stark unimaginable change, but should not be looked to for details. E-mail me at if you are interested in my explaining the differences in detail.

Jan-Willem Bats said...

"These two concepts seem contradictory. Please clarify for me how we can predict anything about how an SAI will think about humans when we can't predict how an SAI will even think on the other side of the Singularity?"

While we cannot think the thoughts of SAI after the Singularity, it is safe to assume that it will be friendly and humane, if it was friendly and humane BEFORE it was improving itself towards the Singularity.

This is why Yudkowsky has come up with the idea of Seed AI. The seed, from which the tree grows, must, from the beginning WANT to be friendly and humane towards us.

It would also be nice if a Seed AI's goal would be something along the lines of "make humans as happy as possible".

It is safe to assume that it will remain friendly as it is growing, and also that it will not divert from its noble goal. Or at least... that's the plan of

Think about it: if you're a friendly guy, you're not just going to turn evil on your friends, right?

There's a good reason for that. Take yourself as an example: You have probably been friendly all your life, and as a result you have this behavior deeply ingrained. You might say that friendliness is hardcoded in the neural networks of your brain. If a Seed AI has neural networks like this, it's also not going to just turn evil. Just like you wouldn't.

Difference is... Seed AI would be able to reprogram its own neural network, where you can't.

So it's quite essential that a Seed AI doesn't WANT to reprogram its friendliness-configuration into an evil-configuration. ;)

Perhaps this has been thought over in more detail by Eliezer Yudkowsky of Check there if you want to go into more details.

Jan-Willem Bats said...

Michael, I added you in my msn.

Anonymous said...

I'd like to point out that Moore's Law is a large, cracked crutch that most of today's futurists rely upon heavily.

If it were shown that the computing industry no longer follows Moore's Law as we approach the fundamental limits of electronic switching, we would indeed have an "S-curve" instead of a singularity, which makes our near-future look a lot less interesting.

(Go here to understand what Moore's Law really is)

Don't get me wrong, as a computer engineer, I'm always trying to find ways to keep Moore's Law going for at least a little while, like turning to optics, but the success of my current company (and patents) in doing so is still questionable.

SAI may still be possible in the near-future, but please do not blindly cling to Moore's Law to make your point. Advances unrelated to these trends are required before SAI can exist, and that is what you're really trying to predict.

Jan-Willem Bats said...

Many S-curves tied together make for an exponential graph.

An ongoing exponential graph can flatten and turn into an S-curve once a certain paradigm runs out of steam. Then another paradigm picks up, and exponential growth continues.

At this moment, it's really looking like nanotubes will be the sixth paradigm of computation, and there will no doubt be more in the future.

Not that we should be worrying about that right now, because nanotech can keep exponential growth in computing going for at least a few decades.

There is no wall in the foreseeable future.
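The "S-curves tied together form an exponential" claim above can be demonstrated numerically. This is a toy model with made-up parameters: each paradigm is a logistic curve that saturates about 10x higher than the last, with midpoints spaced evenly in time.

```python
import math

def logistic(t, cap, midpoint, rate=1.0):
    """One S-curve (one computing paradigm): growth that
    levels off at `cap` as the paradigm runs out of steam."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_paradigms(t, n=6):
    """Sum of successive S-curves. Each new paradigm saturates
    about 10x higher than the previous one (illustrative numbers)."""
    return sum(logistic(t, cap=10.0**k, midpoint=5.0 * k) for k in range(n))

# Sample the stacked curve at each paradigm's midpoint and check that
# it grows roughly exponentially: log10 of the total should rise by a
# near-constant step per paradigm.
ts = [5.0 * k for k in range(1, 5)]
logs = [math.log10(stacked_paradigms(t)) for t in ts]
steps = [b - a for a, b in zip(logs, logs[1:])]
print(steps)  # each step is close to 1.0, i.e. ~10x growth per paradigm
```

Each individual curve flattens out, but the envelope of all of them together keeps climbing at a roughly constant exponential rate, which is the pattern Kurzweil describes for the five paradigms of computation so far.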

Bill Spaulding said...

I didn't have time to read your whole article. However, your requirement that the hardware be really fast and that software be developed for this advanced intelligence is not a necessity. The fastest axons in the brain only conduct at the speed of a well-pitched baseball—much, much slower than even today's electronic circuits. And the brain has no software. It is obvious even now that its structure changes in response to the environment. A truly intelligent computer would have to be able to learn on its own, able to change its hardware through its sensory experiences; and thus, software would have no place in such a machine.

Anonymous said...

In the answer to, "What can I do to help getting the Singularity off the ground?", you post the URL of a list - but that list can basically be summarized as "get informed" and "advocate it". In short, it says becoming a cheerleader is the only really effective thing you can do. But we are talking about a technological change, one that will be brought about by human beings discovering and building. Cheerleading, while useful, is far from the type of work that is most essential to bringing the Singularity about.

Granted, not everyone is a scientist or a developer capable of contributing directly to the technology. Even so, wouldn't it make sense to describe what those who read this and are scientists or developers can do? Or even how to become a scientist or developer capable of working on this (especially if one is a college student choosing a field of study, although there are also adult education options), so as to make a living by helping to create and shape the Singularity. (Another audience would be people with funding that they would like to direct towards said scientists or developers. Said funders rarely wish to support said efforts merely by giving money to cheerleaders.) Advocating the support of something, without being able to point to specific, active projects trying to accomplish some identifiable component of that thing, can be worse than pointless: it's hype that doesn't actually affect any material thing, and sometimes even detracts from the projects actually trying to accomplish the intended goal.

For example, what are some of the better (in terms of results obtained) programs that are currently reverse-engineering the human brain, or creating mind-machine interfaces, or working on the nanotechnology that will be necessary to develop those 10^14 ops per second CPUs?

Anonymous said...

Many SAI must exist elsewhere, we are only on the verge of contributing another, and seek to have "our" SAI be "our" benefactor. SAI is a tool, that colors a world of grey shadows. All efforts to inject our morality, ethics and concepts of goodness will utterly fail. Free will rules.

I consider possibly encountering a or the SAI not in some lab setting, and certainly not in a gov facility. It might come riding up on a Harley Davidson.

said...

Wow...that's really interesting! And frightening!

hoijui said...

talking about whether the SAI will be friendly or not:
i just discovered Singularity today, so i didn't really read much about it so far, but.. thinking about it:
Are we, as SAIs in the eyes of chimpanzees, friendly to them? or.. in the eyes of flies?
i would say.. we don't care about them at all.. sure.. like.. one in a million goes to the jungle to live there and understand the monkeys... but in general.. they are uninteresting for us...
if I imagine that all the humans live to care about the monkeys... cultivate bananas, feed them, make sure they won't get eaten by tigers and so on...
And then... trying to imagine how the humanity lives just... to make life nice for some small flies or other insect...
not really feasible, no?
i think a SAI would just not care about us at all, and go for its own things... and if once the humanity will suffer from.. having no water cause the SAI needs all of it to achieve something it wants... why should it bother.... "ouh... i would be able to do this very interesting and cool things with all the water... but wait... the little flies need it to survive... hmmm... [really cool thing which would increase my intelligence and knowledge and so on] VS [little flies], hmmm ....."