by Sigal Samuel
October 11, 2024
Vox
Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.
If AI companies are trying to build God, shouldn't they get our permission first?
The public did not consent to artificial general intelligence.
AI companies are on a mission to radically change our world.
They're working on building machines that could
outstrip human intelligence and unleash a dramatic economic
transformation on us all.
Sam Altman, the CEO of ChatGPT-maker
OpenAI, has basically told us he's trying to build a god - or "magic
intelligence in the sky," as he puts it. OpenAI's official
term for this is artificial general intelligence, or AGI.
Altman says not only that AGI will "break capitalism" but also that it's "probably the greatest threat" to the continued existence of humanity.
There's a very natural question here:
Did anyone actually ask for this kind of
AI?
By what right do a few powerful tech CEOs
get to decide that our whole world should be turned upside
down?
As I've written before, it's clearly undemocratic that
private companies are building tech that aims to totally change
the world without seeking buy-in from the public.
In fact, even leaders at the major companies
are expressing unease about how undemocratic it is.
Jack Clark, the co-founder of the AI company Anthropic, told Vox last year that it's "a real weird thing that this is not a government project." He also wrote that there are several key things he's "confused and uneasy" about, including: "How much permission do AI developers need to get from society before irrevocably changing society...?"
Clark continued:
Technologists have always had something
of a libertarian streak, and this is perhaps best epitomized
by the 'social media' and Uber et al era of the 2010s -
vast, society-altering systems ranging from social networks
to rideshare systems were deployed into the world and
aggressively scaled with little regard to the societies they
were influencing.
This form of permissionless invention is
basically the implicitly preferred form of development as
epitomized by Silicon Valley and the general 'move fast and
break things' philosophy of tech. Should the same be true of
AI?
I've noticed that when anyone questions that
norm of "permissionless invention," a lot of tech enthusiasts
push back.
Their objections always seem to fall into one of three categories. Because this is such a perennial and important debate, it's worth tackling each of them in turn - and explaining why I think they're wrong.
Objection 1 - "Our use is our consent"
ChatGPT is the fastest-growing consumer
application in history: It had
100 million active users just two months after it launched.
There's no disputing that lots of people
genuinely found it really cool.
And it spurred the release of other
chatbots, like Claude, which all sorts of people are getting
use out of - from journalists to coders to busy parents who
want someone (or something) else to make the goddamn grocery
list.
Some claim that this simple fact - we're
using the AI! - proves that people consent to what the major
companies are doing.
This is a common claim, but I think it's very
misleading. Our use of an AI system is not tantamount to
consent. By "consent" we typically mean informed consent, not
consent born of ignorance or coercion.
Much of the public is not informed about the
true costs and benefits of these systems.
How many people are aware, for instance, that
generative AI sucks up so much energy that
companies like Google and Microsoft are reneging on their
climate pledges as a result?
Plus, we all live in choice environments that
coerce us into using technologies we'd rather avoid.
Sometimes we "consent" to tech because we
fear we'll be at a professional disadvantage if we don't use
it.
Think about social media.
I would personally not be on X (formerly
known as Twitter) if not for the fact that it's seen as
important for my job as a journalist.
In a recent
survey, many young people said they wish social media
platforms were never invented, but given that these
platforms do exist, they feel pressure to be on them.
Even if you think someone's use of a
particular AI system does constitute consent, that doesn't mean
they consent to the bigger project of building AGI.
This brings us to an important distinction:
There's narrow AI - a system that's
purpose-built for a specific task (say, language
translation) - and then there's AGI.
Narrow AI can be fantastic!
It's helpful that AI systems can perform a
crude copy edit of your work for free or let you write computer
code using just plain English. It's awesome that AI is helping
scientists better understand disease.
And it's extremely awesome that
AI cracked the protein-folding problem - the challenge of
predicting which 3D shape a protein will fold into - a puzzle
that stumped biologists for 50 years.
The Nobel Committee for Chemistry clearly agrees: It just gave a Nobel Prize to AI pioneers for enabling this breakthrough, which will help with drug discovery.
But that is different from the attempt to build a general-purpose reasoning machine that outstrips humans, a "magic intelligence in the sky."
While plenty of people do want narrow AI,
polling shows that most Americans do not want AGI.
Which brings us to...
Objection 2 - "The public is too ignorant to tell innovators how to innovate"
Here's a quote commonly (though
dubiously) attributed to car-maker Henry Ford:
"If I had asked people what they wanted,
they would have said faster horses."
The claim here is that there's a good reason
why genius inventors don't ask for the public's buy-in before
releasing a new invention:
Society is too ignorant or
unimaginative to know what good innovation
looks like.
From the printing press and the telegraph to
electricity and the internet, many of the great technological
innovations in history happened because a few individuals
decided on them by fiat.
But that doesn't mean deciding by fiat is
always appropriate.
The fact that society has often let inventors
do that may be partly because of technological solutionism,
partly because of a belief in the "great man" view of history,
and partly because, well, it would have been pretty hard to
consult broad swaths of society in an era before mass
communications - before things like a printing press or a
telegraph!
And while those inventions did come with
perceived risks and
real harms, they didn't pose the threat of wiping out
humanity altogether or making us subservient to a different
species.
For the few technologies we've invented so
far that meet that bar, seeking democratic input and
establishing mechanisms for global oversight have been
attempted, and rightly so.
It's the reason we have a Nuclear
Nonproliferation Treaty and a Biological Weapons
Convention -
treaties that, though it's a struggle to implement them
effectively, matter a lot for keeping our world safe.
It's true, of course, that most people don't
understand the nitty-gritty of AI.
So, the argument here is not that the public should be dictating the minutiae of AI policy. It's that it's wrong to ignore the public's general wishes on the big-picture questions.
As Daniel Colson, the executive
director of the nonprofit AI Policy Institute,
told me last year,
"Policymakers shouldn't take the
specifics of how to solve these problems from voters or the
contents of polls.
The place where I think voters are
the right people to ask, though, is:
What do you want out of policy?
And what direction do you want
society to go in?"
Objection 3 - "It's impossible to curtail innovation anyway"
Finally, there's the technological
inevitability argument, which says that you can't halt the march
of technological progress - it's unstoppable!
This is a myth.
In fact,
there are lots of technologies that we've decided not to build,
or that we've built but placed very tight restrictions on.
Just think of
human cloning
or human germline
modification.
The recombinant DNA researchers behind
the Asilomar Conference of 1975 famously organized a
moratorium
on certain experiments.
We are, notably, still not cloning
humans.
Or think of the 1967 Outer Space Treaty.
Adopted by the United Nations against the
backdrop of the Cold War, it barred nations from doing
certain things in space - like storing their nuclear weapons
there.
Nowadays, the treaty comes up in debates
about whether we should
send messages into space with the hope of
reaching extraterrestrials.
Some argue that's dangerous because an
alien species, once aware
of us, might conquer and "oppress" us.
Others argue it'll be great - maybe the aliens will gift us their knowledge in the form of an Encyclopedia Galactica!
Either way, it's clear that the stakes are
incredibly high and all of human civilization would be affected,
prompting some to
make the case for democratic deliberation before intentional
transmissions are sent into space.
As the old Roman proverb goes:
What touches all should be decided by
all.
That is as true of superintelligent AI as it
is of nukes, chemical weapons, or interstellar broadcasts.