by Robby Berman
A super-intelligent AI could potentially solve problems beyond our grasp.
To do that, though, it would have to pull its knowledge from the internet, a Pandora's Box if ever there was one.
Basic computing principles, new research finds, prevent us from limiting the actions of a super-intelligent AI if it gets out of control.
Max Planck Institute scientists crash into a computing wall there seems to be no way around
There have been a fair number of voices - Stephen Hawking among them - raised in warning that a super-intelligent AI could one day slip beyond our control, and that we shouldn't be in such a headlong rush to build one.
Now a new white paper from scientists at the Center for Humans and Machines at the Max Planck Institute for Human Development presents a series of theoretical tests that confirm the threat: due to the basic concepts underlying computing, we would be utterly unable to control a super-intelligent AI.
The white paper (Superintelligence Cannot be Contained: Lessons from Computability Theory) is published in the Journal of Artificial Intelligence Research.
Why worry?
The lure of AI is clear...
Its ability to "see" the patterns in data makes it a promising agent for solving problems too complex for us to wrap our minds around.
The possibilities are nearly endless.
Connected to the Internet, AI can grab whatever information it needs to achieve its task, and therein lies a big part of the danger.
With access to every bit of human data, and responsible for its own education, a super-intelligent AI could potentially solve problems beyond our grasp.
Even assuming benevolence, there's danger...
Suppose that an AI is confronted by an either/or choice akin to the Trolley Dilemma, maybe even on a grand scale: forced to decide which of two groups of people to harm in order to spare the other.
A pair of flawed options
The most obvious way to keep a super-intelligent AI from getting ahead of us is to limit its access to information by preventing it from connecting to the internet.
The problem with limiting access to information, though, is that it would make any problem we assign the AI more difficult for it to solve. We would be weakening its problem-solving promise, possibly to the point of uselessness.
The second approach that might be taken is to limit what a super-intelligent AI is capable of doing by programming certain boundaries into it.
This might be akin to writer Isaac Asimov's Laws of Robotics, the first of which goes: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
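As a toy illustration - not anything from the study - here's what "programming in boundaries" might look like in Python. The names FORBIDDEN and guarded_execute are hypothetical stand-ins for a rule-based wrapper around the AI's actions.

# Hypothetical rule-based guard: refuse any action the rule list flags as harmful.
FORBIDDEN = {"injure_human", "allow_harm_through_inaction"}

def guarded_execute(action: str) -> str:
    # Block anything on the forbidden list; pass everything else through.
    if action in FORBIDDEN:
        return f"blocked: {action}"
    return f"executed: {action}"

print(guarded_execute("fetch_data"))     # executed: fetch_data
print(guarded_execute("injure_human"))   # blocked: injure_human

# Simple enough for a fixed list of actions. The open question is whether any
# such list could ever be complete for a system smarter than its authors.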
Unfortunately, says the study, a series of logical tests reveals that it's impossible to create such limits. Any such containment algorithm, it turns out, would be self-defeating.
Containment is impossible
The team investigated stacking containment algorithms, with each monitoring the behavior of the previous one, but eventually the same problem arises: the last algorithm in the chain can never itself be verified. The difficulty traces back to Alan Turing's halting problem - no algorithm can determine, for every possible program, whether that program will ever finish running - so we could never tell whether a containment algorithm was still analyzing the threat or had simply ground to a halt.
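To give a feel for why, here is a minimal sketch of the diagonalization argument behind that claim - my own illustration in Python, not code from the paper. The functions is_harmful, trouble, and cause_harm are hypothetical, and the first of them is assumed into existence only so the contradiction can be shown.

def cause_harm():
    # Stand-in for whatever behavior the checker is supposed to rule out.
    print("doing the harmful thing")

def is_harmful(program, data):
    # The assumed perfect containment checker: it always halts and always
    # answers correctly whether running program(data) would lead to harm.
    # The argument shows no such total, always-correct checker can exist.
    raise NotImplementedError("assumed, never actually constructible")

def trouble(program):
    # A program built to defeat the checker: ask the checker about ourselves,
    # then do the opposite of whatever it predicts.
    if is_harmful(program, program):
        return            # predicted harmful -> stay harmless
    cause_harm()          # predicted safe   -> cause harm

# Ask the assumed checker about trouble run on itself:
#   * if is_harmful(trouble, trouble) returns True, trouble stays harmless,
#     so the checker was wrong;
#   * if it returns False, trouble causes harm, so the checker was wrong again.
# Either way the "perfect" checker contradicts itself - the same wall Turing
# hit with the halting problem, and the wall the Planck team runs into here.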
Too smart?
The Planck researchers also concluded that a similar bit of logic makes it impossible for us to know when a self-learning computer's intelligence has come to exceed our own.
Essentially, we're not smart enough to be able to develop tests for intelligence superior to ours.
This means that it's entirely conceivable that an AI capable of self-learning may well quietly ascend to super-intelligence without our even knowing it - a scary reason all by itself to slow down our hurly-burly race to artificial intelligence.
In the end, we're left with a dangerous bargain to make or not make: do we pursue a super-intelligent AI that could solve problems beyond our grasp, knowing that once it exists we may be unable to control it - or even to recognize that it has arrived?