February 10, 2011 | Time Website
Photo-Illustration by Phillip Toledano for TIME
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret.
He was introduced by the host, Steve
Allen, then he played a short musical composition on a piano. The
idea was that Kurzweil was hiding an unusual fact and the panelists
- they included a comedian and a former Miss America - had to guess
what it was.
They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher. But Kurzweil's secret was the more consequential one: the piece he had played was composed by a computer, a computer he had built himself. He would spend much of the rest of his career working out what his demonstration meant.
Creating a work of art is one of those
activities we reserve for humans and humans only. It's an act of
self-expression; you're not supposed to be able to do it if you
don't have a self. To see creativity, the exclusive domain of
humans, usurped by a computer built by a 17-year-old is to watch a
line blur that cannot be unblurred, the line between organic
intelligence and artificial intelligence.
He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
All that horsepower could be put in the
service of emulating whatever it is our brains are doing when they
create consciousness - not just doing arithmetic very quickly or
composing piano music but also driving cars, writing books, making
ethical decisions, appreciating fancy paintings, making witty
observations at cocktail parties.
Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville. Probably.
It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be.
But there are a lot of theories about it.
Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities.
Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011.
This transformation has a name: the Singularity.
There's an intellectual gag reflex that
kicks in anytime you try to swallow an idea that involves
super-intelligent immortal cyborgs, but suppress it if you can,
because while the Singularity appears to be, on the face of it,
preposterous, it's an idea that rewards sober, careful evaluation.
And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language. The Singularity isn't a wholly new idea, just newish.
In 1965 the
British mathematician I.J. Good described something he called an
"intelligence explosion":
Since the design of machines is one of
these intellectual activities, an ultraintelligent machine could
design even better machines; there would then unquestionably be an
"intelligence explosion," and the intelligence of man would be left
far behind. Thus the first ultraintelligent machine
is the last invention that man need ever make.
In the 1980s the science-fiction novelist Vernor Vinge attached the term to Good's intelligence-explosion scenario.
At a NASA symposium in 1993, Vinge announced that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
By that time Kurzweil was thinking about the Singularity too.
He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind - Stevie Wonder was customer No. 1 - and made innovations in a range of technical fields, including music synthesizers and speech recognition.
He holds 39 patents and 19 honorary
doctorates. In 1999 President Bill Clinton awarded him the National
Medal of Technology.
A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called Transcendent Man.)
Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother.
Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it.
His manner is almost apologetic.
Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress.
Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right.
He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years.
It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power you could buy for a fixed sum of money.
As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years.
Drawn as graphs, they both made
exponential curves, with their value increasing by multiples of two
instead of by regular increments in a straight line. The curves held
eerily steady, even when Kurzweil extended his backward through the
decades of pretransistor computing technologies like relays and
vacuum tubes, all the way back to 1900.
He kept finding the same thing: exponentially accelerating progress.
Kurzweil calls it the law of accelerating returns.
Then he extended the curves into the future, and the growth they predicted was so phenomenal that even he found it hard to accept at first. Exponential curves start slowly, then rocket skyward toward infinity.
According to Kurzweil, we're not evolved to think in terms of exponential growth.
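The contrast Kurzweil is drawing can be sketched numerically. This is a toy illustration, not his actual data: one quantity doubles every two years, the way transistor counts do under Moore's law, while the other grows by the same fixed amount each year.

```python
# Toy illustration of exponential vs. linear growth (hypothetical
# numbers, not Kurzweil's curves).
def exponential(start, years, doubling_period=2):
    """Value after `years` of doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

def linear(start, years, step=1):
    """Value after `years` of adding a fixed `step` each year."""
    return start + step * years

# After 40 years the exponential curve has doubled 20 times,
# while the linear one has merely crept up by 40 steps.
print(exponential(1, 40))  # 1 * 2**20 = 1048576.0
print(linear(1, 40))       # 41
```

For the first few years the two curves look almost identical, which is part of why, as Kurzweil says, we're not evolved to notice the difference until the exponential one is already rocketing away.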
Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s.
By the end of that decade, computers
will be capable of human-level intelligence. Kurzweil puts the date
of the Singularity - never say he's not conservative - at 2045. In
that year, he estimates, given the vast increases in computing power
and the vast reductions in the cost of same, the quantity of
artificial intelligence created will be about a billion times the
sum of all the human intelligence that exists today.
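The billionfold figure is Kurzweil's; a quick back-of-the-envelope check shows the kind of timescale it implies under his curves. The two-year doubling period here is a hedged assumption for illustration, not his exact parameter.

```python
import math

# How many doublings does a billionfold increase require, and how long
# does that take if capability doubles roughly every two years?
# (The factor of a billion is Kurzweil's claim; the doubling period is
# an illustrative assumption.)
target_factor = 1e9
doublings = math.log2(target_factor)     # about 29.9 doublings
years = doublings * 2                    # at one doubling per two years
print(round(doublings, 1), round(years)) # roughly 30 doublings, ~60 years
```

In other words, a billionfold increase needs only about thirty doublings, which is why a date a few decades out is not, on exponential assumptions, as wild as it first sounds.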
Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians. Together they form a movement, a subculture; Kurzweil calls it a community.
But Singularitarians share a worldview.
They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything.
They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality.
When you enter their mind-space you pass
through an extreme gradient in worldview, a hard ontological shear
that separates Singularitarians from the common run of humanity.
Expect turbulence.
Because of the highly interdisciplinary
nature of Singularity theory, it attracts a diverse crowd.
Artificial intelligence is the main event, but the sessions also
cover the galloping progress of, among other fields, genetics and
nanotechnology.
The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading - the practice,
so far mostly theoretical, of establishing politically autonomous
floating communities in international waters - handed out pamphlets.
An android chatted with visitors in one corner.
Biological boundaries that most people
think of as permanent and inevitable Singularitarians see as merely
intractable but solvable problems. Death is one of them. Old age is
an illness like any other, and what do you do with illnesses? You
cure them. Like a lot of Singularitarian ideas, it sounds funny at
first, but the closer you get to it, the less funny it seems. It's
not just wishful thinking; there's actual science going on here.
Telomeres, the protective caps on the ends of chromosomes, shorten every time a cell divides, which is one reason cells age; cancer cells stay youthful by producing an enzyme, telomerase, that rebuilds them. So why not treat regular non-cancerous cells with telomerase?
In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away.
The mice didn't just get better; they
got younger.
The gerontologist Aubrey de Grey views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes one day to address using regenerative medicine.
Kurzweil takes life extension seriously too.
His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day.
He says his diabetes is essentially
cured, and although he's 62 years old from a chronological
perspective, he estimates that his biological age is about 20 years
younger.
Alternatively, by then we'll be able to
transfer our minds to sturdier vessels such as computers and robots.
He and many other Singularitarians take seriously the proposition
that many people who are alive today will wind up being functionally
immortal.
In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead?
But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves.
Of course, a lot of people think the Singularity is nonsense - a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience.
Most of the serious critics focus on the
question of whether a computer can truly become intelligent.
Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way.
The kind of intelligence Kurzweil is
talking about, which is called strong AI or artificial general
intelligence, doesn't exist yet.
But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon.
The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. He argued that living cells process information through proteins whose many chemically modified states let them respond to their environment in richly analog ways. That makes the ones and zeros that computers trade in look pretty crude.
Suppose we did someday build a computer that talked and behaved in a way indistinguishable from a human being. Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness - a machine with no ghost in it? And how would we know?
Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do.
It might not feel like competing with us
for resources. One of the goals of the Singularity Institute is to
make sure not just that artificial intelligence develops but also
that the AI is friendly. You don't have to be a super-intelligent
cyborg to understand that introducing a superior life-form into your
own biosphere is a basic Darwinian error.
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it.
He's tireless in hunting down his
critics so that he can respond to them, point by point, carefully
and in detail.
He refuses to fall on his knees before the mystery of the human brain.
This position doesn't make Kurzweil an outlier, at least among Singularitarians.
Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique Fédérale de Lausanne in Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene supercomputer.
So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons.
Markram has said that he hopes to have a
complete virtual human brain up and running in 10 years. (Even
Kurzweil sniffs at this. If it worked, he points out, you'd then
have to educate the brain, and who knows how long that would take?)
He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware.
In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level.
Progress hyper-accelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all.
Kurzweil hopes to bring his dead father
back to life.
Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics.
Now we have iPhones. Is it an
unimaginable step to take the iPhones out of our hands and put them
into our skulls?
Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter.
It got every question it answered right,
but much more important, it didn't need help understanding the
questions (or, strictly speaking, the answers), which were phrased
in plain English. Watson isn't strong AI, but if strong AI happens,
it will arrive gradually, bit by bit, and this will have been one of
the bits.
Nothing gets old as fast as the future.
Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago.
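Multiplying those three ratios together gives a sense of the compounded gain; this is a loose reading of the comparison, not Kurzweil's own calculation.

```python
# The three ratios in Kurzweil's cell-phone comparison, compounded
# (an illustrative multiplication, not his published figure):
size_ratio = 1e6   # a millionth the size
price_ratio = 1e6  # a millionth the price
power_ratio = 1e3  # a thousand times more powerful
combined = size_ratio * price_ratio * power_ratio
print(f"{combined:.0e}")  # 1e+15: a quadrillionfold gain in power per dollar per unit volume
```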
Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box.
Or maybe you have to think further
inside it than anyone ever has before.