From the Sustensis website. Information sent by Angela Francos.
Capability Control Method
Nick Bostrom defines these methods in his book "Superintelligence: Paths, Dangers, Strategies" (Bostrom, 2014).
For our purpose I will try to provide a layman's description of what they really mean and what the consequences are for controlling the risks emerging from Superintelligence.
The most important point is that these controlling methods must be in place before Superintelligence arrives, i.e. within this decade at the latest.
Bostrom frames the 'control problem' as a version of the 'principal-agent problem', a well-known subject in economic and regulatory theory.
The problem can be looked at from two perspectives: the relationship between the project's sponsors and the AI developers they hire, and the relationship between the developers (acting on behalf of humanity) and the superintelligent agent itself.
He dedicates a whole chapter to identifying potential solutions.
Since the publication of the book in 2014, these solutions have been widely discussed in the AI community, particularly on how to turn them into practical tools.
Bostrom splits them into two groups: capability control methods and motivation selection methods, which I have tried to describe as much as possible in layman's terms in the following subsections.
At some stage there will be an AI project to develop Superintelligence (AGI). It may be launched by one of the big IT/AI companies, such as Google, Microsoft, IBM or Amazon.
But it is also quite likely that it will be initiated by some wealthy AI backers, which is already happening.
Such sponsors will need to ensure that AI developers carry out the project in accordance with their needs.
They would also want to ascertain that the developers understand their sponsors' needs correctly and that the developed AI product, which may turn into Superintelligence, will also understand and obey humans as expected.
Failure to address this problem could become an existential risk for Humanity.
The purpose of the Capability Control method is to tune the capabilities of a superintelligent agent to the requirements of humans in such a way that we stay safe and keep ultimate control over what Superintelligence can do.
Boxing Method
It is often proposed that as long as Superintelligence is physically isolated and restricted, or "boxed", it will be harmless.
Such a superintelligent agent will receive inputs from the external world via its sensors, e.g. Wi-Fi, radio communication, chemical compounds, etc.
It will then process those inputs using its processor (computer) and respond by outputting information or performing some action using its actuators.
Examples of such actions could be advising on which decision should be made, switching certain engines on or off, or completing financial transactions.
But its outputs could also be potentially significant, e.g. advising whether a chemical compound would be safe for humans at a given dose.
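To make this sense-process-act loop more concrete, here is a minimal sketch in Python. All names in it (read_sensors, decide, act) and the example signals are hypothetical illustrations, not part of any real AI system.

```python
# A minimal sketch of the sense-process-act loop described above.
# All names and signals here are invented for illustration.

def read_sensors() -> dict:
    """Gather inputs from the outside world (Wi-Fi, radio, chemical sensors...)."""
    return {"temperature": 21.5, "radio_message": "status request"}

def decide(inputs: dict) -> list[str]:
    """Process the inputs and choose actions (the 'processor' step)."""
    actions = []
    if inputs.get("radio_message") == "status request":
        actions.append("report_status")
    if inputs.get("temperature", 0) > 30:
        actions.append("switch_off_engine")
    return actions

def act(actions: list[str]) -> None:
    """Carry out the chosen actions via actuators (outputs)."""
    for action in actions:
        print(f"actuator executing: {action}")

# One pass through the loop: sense -> process -> act.
act(decide(read_sensors()))
```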
Once the agent becomes superintelligent, it could persuade someone (the human liaison, most likely) to free it from its box and thus it would be out of human control...
There are a number of ways of achieving this goal; some are described in Bostrom's book.
To counter such possibilities, there are some solutions that would decrease the chance of a superintelligent agent escaping the 'Box'.
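One frequently discussed safeguard of this kind is to force every output of the boxed agent through a single narrow channel vetted by the human liaison. The sketch below only illustrates that idea; the gatekeeper logic and all function names are my own assumptions, not a design from Bostrom's book.

```python
# Sketch: a 'boxed' agent whose only path to the outside world is a
# gatekeeper that asks a human liaison to approve each message.
# All names and messages here are hypothetical illustrations.

def human_approves(message: str) -> bool:
    """Ask the human liaison to vet an outgoing message."""
    answer = input(f"Agent wants to send: {message!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def gatekeeper(outbox: list[str]) -> list[str]:
    """Release only the messages the human explicitly approved."""
    return [m for m in outbox if human_approves(m)]

# The agent can propose anything, but nothing leaves the box unvetted.
proposed = ["dose of 5 mg/kg appears safe", "please connect me to the internet"]
released = gatekeeper(proposed)
print("released:", released)
```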
However, as you may be aware, this approach is already being severely thwarted by the rapid spread of the Internet of Things (IoT): little gadgets which can be controlled in your home while you are away on the other side of the globe.
Incentive Method
The idea seems to be that if you create the right "incentive environment", then Superintelligence would have no incentive to act in an existentially threatening manner.
This is in some ways analogous to bringing up a child.
A good teacher can motivate a child in such a way that it behaves in morally and socially acceptable ways.
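In today's reinforcement-learning terms, one very loose analogue of such an incentive environment is reward shaping: the designer adds penalties so that unsafe behaviour is never worth choosing. The sketch below is a toy illustration under that assumption; the action names and penalty values are invented, and this is not a method from the book.

```python
# Toy illustration of an 'incentive environment' via reward shaping.
# Action names and penalty values are invented for illustration.

UNSAFE_PENALTY = 1_000.0  # large enough to dominate any task reward

def shaped_reward(task_reward: float, action: str) -> float:
    """Add a heavy penalty whenever the agent takes a flagged unsafe action."""
    unsafe_actions = {"disable_oversight", "copy_self_to_internet"}
    penalty = UNSAFE_PENALTY if action in unsafe_actions else 0.0
    return task_reward - penalty

# A rational reward-maximiser now prefers the safe action.
print(shaped_reward(10.0, "complete_report"))    # 10.0
print(shaped_reward(50.0, "disable_oversight"))  # -950.0
```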
Stunting
Another capability control method is stunting: deliberately constraining the agent's capabilities or its access to information. A good example would be running Superintelligence on slow hardware, reducing its memory capacity, or limiting the kind of data it can process.
Bostrom argues that the use of stunting poses a dilemma.
Stunt the system too much and it becomes just another piece of dumb software of little use; stunt it too little and it may still grow powerful enough to be dangerous. Getting the balance just right could be pretty tricky.
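On ordinary Unix systems, a crude mechanical analogue of stunting is to cap the resources a process may use before it starts. The sketch below uses Python's standard resource module (Unix only); the concrete limits are arbitrary examples, not recommendations.

```python
# Sketch: 'stunting' a child process by capping its memory and CPU time.
# Uses Python's standard 'resource' module (Unix only); the limits
# below are arbitrary examples.
import resource
import subprocess

def stunted_limits() -> None:
    """Applied in the child process just before it runs."""
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2,) * 2)  # 512 MiB memory
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))               # 5 CPU-seconds

# Run a (hypothetical) AI workload under the caps.
subprocess.run(["python3", "-c", "print('running under limits')"],
               preexec_fn=stunted_limits)
```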
Tripwires
This method involves building into any AI development project a set of "tripwires" which, if crossed, will lead to the project being shut down and destroyed.
Bostrom identifies three types of tripwire: those that monitor the system's behaviour, those that monitor its abilities, and those that monitor its internal content.
Bostrom thinks that tripwires could be useful, particularly during the development phase if used in conjunction with other methods.
But, unsurprisingly, he also thinks that they too have shortcomings.
He also notes that developers working on Superintelligence could grow impatient if tripwires repeatedly hamper their progress, and might weaken or disable them, undermining any safety advantage gained by the tripwire system...
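In software terms, a tripwire can be as simple as a monitor that watches a few signals and halts the system the moment any of them crosses a threshold. The sketch below illustrates behaviour-, ability- and content-style checks; all signal names and thresholds are invented for illustration.

```python
# Sketch of a tripwire monitor: if any check trips, shut everything down.
# Signal names and thresholds are invented for illustration.

def check_tripwires(signals: dict) -> list[str]:
    """Return the names of any tripped wires."""
    tripped = []
    if signals.get("network_connections", 0) > 0:   # behaviour tripwire
        tripped.append("unexpected network activity")
    if signals.get("self_test_score", 0.0) > 0.9:   # ability tripwire
        tripped.append("capability jump detected")
    if "escape" in signals.get("recent_plans", ""): # content tripwire
        tripped.append("suspicious internal content")
    return tripped

def emergency_shutdown(reasons: list[str]) -> None:
    print("SHUTDOWN:", "; ".join(reasons))

signals = {"network_connections": 2, "self_test_score": 0.4,
           "recent_plans": "finish report"}
if tripped := check_tripwires(signals):
    emergency_shutdown(tripped)
```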