AI has its pessimists and optimists. The one thing they agree on is that the singularity is just around the corner. It’s not.

AI optimists and pessimists agree that the singularity is just around the corner, and both think it will transform society. Practitioners, meanwhile, expect not much of anything to happen.

Last year I read Ray Kurzweil’s book How to Create a Mind and it made quite an impression on me. That led me to his other book, The Singularity Is Near, which offered an extremely optimistic vision of the future of AI.

I’ve always had respect for Kurzweil because, unlike other AI commentators, he has experience in the field. He is one of the few extreme AI optimists, balancing out the extremism of the AI pessimists.

Like many of AI’s critics, Kurzweil believes in the technological singularity and thinks it’s just around the corner (2045!). The singularity hypothesis holds that once technology advances to a certain level, it will enter a recurrent cycle of self-improvement, leading to an intelligence explosion that dwarfs all combined human intelligence and ushers in a highly unstable era. In mathematics, a singularity is a point at which a given mathematical object is not defined (e.g. f(x) = 1/x at x = 0). Proponents of the technological singularity believe that AI algorithms will eventually reach such a point and transform society.

Most artificial intelligence today is trained through a simple reward mechanism. The system is given an input and an architecture, and is told to maximize or minimize some function based on what the correct answer is. A few examples:

  1. English sentence → ■ → Spanish sentence
  2. Sound wave → ■ → Words
  3. Pixels → ■ → hot dog or not hot dog

What goes on in the box is optimized to increase the likelihood of giving the correct answer, which is provided. There is also unsupervised learning, where an answer is not provided and the goal is usually to find some structure in the data. But the distinction is superficial: the architecture is still serving to maximize or minimize a kind of utility function.
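To make that concrete, here is a minimal sketch of example 3 in numpy. Everything here is invented for illustration (the “images” are random numbers, and the box is a one-layer logistic regression rather than a deep network), but the recipe is the real one: nudge the parameters until the provided answers become likely.

```python
import numpy as np

# Toy version of: pixels -> [box] -> hot dog or not hot dog.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))             # 100 fake "images" of 64 pixels each
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # invented "correct answers"

w = np.zeros(64)                           # the parameters inside the box
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probability of "hot dog"
    grad = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    w -= 0.5 * grad                        # step downhill on the loss

print("training accuracy:", ((p > 0.5) == y).mean())
```

Supervised or unsupervised, it all bottoms out in this loop: evaluate a function, adjust some numbers to push it in the right direction, repeat.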

What the pessimists think will happen

Pessimists think that this system will inevitably lead to disaster. Here is Nick Bostrom from Superintelligence on the risk of overly powerful AI systems:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Basically, what he’s saying is that the black box would naively do just about anything to maximize its function. He uses a more concrete example of an artificial intelligence whose purpose is to maximize paperclip production:

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform “first all of earth and then increasing portions of space into paperclip manufacturing facilities”.
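His point is mechanical, and easy to caricature in code. In the toy sketch below (every name and number is invented), the optimizer only “sees” the objective it is given; nothing in the utility function mentions side effects, so it cheerfully picks the catastrophic option:

```python
# A maximizer pursues its supergoal and nothing else.
actions = {
    "run the factory normally":        {"paperclips": 1_000,  "humans_harmed": 0},
    "strip-mine the town for steel":   {"paperclips": 50_000, "humans_harmed": 10_000},
    "convert the planet to factories": {"paperclips": 10**12, "humans_harmed": 8 * 10**9},
}

def utility(outcome):
    return outcome["paperclips"]  # side effects never enter the objective

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # -> "convert the planet to factories"
```

The worry, in other words, is not malice but a missing term in the objective.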

I’m not sure how practical it is to hand over all of the universe’s matter to an algorithm. But Bostrom’s belief is that if there were something really smart, it could figure out a way to kill us all, and it would not hesitate to do so.

What the optimists think will happen

Sometimes two people can look at the same facts and come up with completely opposite interpretations. Kurzweil and other AI optimists believe that the singularity will bring untold wealth to human society. He estimates that a machine will pass the Turing Test by 2029. Because if a machine can trick a human into thinking it’s real in a conversation, it’s now somehow human. And by 2045, we’ll be at full singularity:

The pace of change will be so astonishingly quick that we won’t be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating.

As for its effect on humans:

We humans are going to start linking with each other and become a metaconnection; we will all be connected and all be omnipresent, plugged into this global network that is connected to billions of people, and filled with data.

I for one welcome our new insect overlords

Not only that, but we will live forever. Never mind the contradictory information that comes out regularly about whether a given food is good or bad for you. Kurzweil thinks we’re just around the corner from not only understanding human health, but solving it once and for all. In fact, Kurzweil predicted in 2005 that within five to ten years we would have a pill that lets people eat whatever they want without gaining weight. How are we doing on that? To paraphrase Peter Thiel:

We wanted pills that let us eat whatever we want; instead we got rubber bands to tie around our stomachs.

Breakthrough scientific achievements always tend to be 10 to 15 years away. Any longer, and the scientists wouldn’t be able to get funding. Any shorter, and the scientists would actually have to deliver results.

What will actually happen

Probably not much. A 2016 survey of leading AI researchers asked, among other things, “When do you think we will achieve Superintelligence?” The results paint a different picture.

The comments provided some insight into the researchers’ thinking:

“Way, way, way more than 25 years. Centuries most likely. But not never.”

“We’re competing with millions of years’ evolution of the human brain. We can write single-purpose programs that can compete with humans, and sometimes excel, but the world is not neatly compartmentalized into single-problem questions.”

“Nick Bostrom is a professional scare monger. His Institute’s role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the ‘Donald Trump’ of AI.”

Rodney Brooks, former director of the MIT Computer Science and Artificial Intelligence Laboratory, had similarly harsh words for Bostrom:

It’s not just AI. He’s not particularly more expert on AI than he is on search for extraterrestrial life. But that’s what he does. That’s his schtick. So, as for the others — and including Nick — none of these people who worry about this have ever done any work in AI itself. They’ve been outside. Same is true of Max Tegmark; it was true of Lord Martin Rees …They are missing how hard it is.

He goes on to say that people tend to anthropomorphize what computers do. So when you see a neural network detect a human face, you think it knows who you are! But it doesn’t know you. It doesn’t know anything. It’s not even a thing. It’s a series of numbers that represent mathematical transformations on numbers representing pixels. The leap from that to the end of humanity is so absurd it’s almost not worth discussing.
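That description is literal. Here is a hypothetical two-layer network in numpy, with random (untrained) weights purely to show the mechanics: the “detection” is a couple of matrix multiplies and a max, and what comes out is a number, not recognition of a person.

```python
import numpy as np

# A neural network "seeing a face": numbers in, numbers out.
rng = np.random.default_rng(1)
pixels = rng.uniform(0, 255, size=9)  # a flattened 3x3 grayscale "image"

W1 = rng.normal(size=(9, 4))          # first layer of weights (random here)
W2 = rng.normal(size=4)               # second layer of weights

hidden = np.maximum(0, pixels @ W1)   # matrix multiply, then ReLU
score = hidden @ W2                   # another multiply: a single number
print("face score:", float(score))
```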

Of course, the cynical view is that the researchers would prefer to downplay the impact of their technology so as not to attract a heavy regulatory hand. But this argument strikes me as conspiratorial, as though all AI researchers are indoctrinated into a transhumanist society that throws aside any concern for humanity immediately upon joining. Besides, the self-described transhumanists are the biggest alarmists.

Why does anyone believe the singularity is near?

The main culprit is Moore’s Law, which states that the number of transistors in an integrated circuit doubles about every two years. It has held steady since the 1960s, although many forecasters, including Gordon Moore himself, expect the law to end around 2025.

Proponents of the singularity look at crude measures of what the brain is doing and translate them into the mathematical computations machines perform. Someone somehow measured the human brain as running 10^n calculations per second and backed out the date at which we’ll hit that using Moore’s Law.
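The back-of-the-envelope math looks something like this. Every number below is an illustrative assumption, not a measurement (Kurzweil-style estimates often put the brain around 10^16 calculations per second):

```python
import math

brain_cps = 1e16      # assumed brain "calculations per second"
machine_cps = 1e12    # assumed throughput of today's hardware
doubling_years = 2.0  # Moore's law doubling period

doublings = math.log2(brain_cps / machine_cps)
print(f"machines match the brain in ~{doublings * doubling_years:.0f} years")
```

That prints roughly 27 years. Move either guess by a factor of 1,000 and the date shifts by about 20 years, which tells you how much precision this forecasting method really has.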

The idea that humans are simply calculators strikes me as, well, anti-human. No one has a clue what’s going on in a human brain, and it’s not processing power that’s holding anyone back from mapping one. The term “neural network” to describe a machine learning architecture was a poor choice. Similarly, we have no idea what’s going on in a mouse’s brain, and going by raw processing power alone, we should have that cracked by now. If we did, we could just run mouse simulations and put lab mice out of their misery.

I have a lot more to say about the pessimist and optimist crowds which I may explore at a later time. The irony is that the one thing both sides agree on, the singularity being near, may be their biggest mistake.

By Branko Blagojevic on October 8, 2018