by Sigal Samuel
September 26, 2024
from the Vox website
Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.
Sam Altman. Aaron Schwartz/Xinhua via Getty Images
OpenAI as we knew it is dead. The maker of ChatGPT promised to share its profits with the public. But Sam Altman just sold you out...
Sam Altman is a Technocrat who plays the long game. After first getting involved with OpenAI in 2015, he jockeyed his way to the helm by 2019, but he was constrained by a board of directors.
By 2023, the board saw through him and ousted him.
Then the real Sam Altman appeared as he "clawed his way back to power," kicked the directors off the board, and reconstituted another board subservient to him.
By 2024, top executives saw through Altman's Hitleresque schemes and fled the company, leaving the safety team in shambles.
Just this week, the company's Chief Technology Officer (CTO), Mira Murati, abruptly walked out.
Now Vox notes that he has "stripped the board of its control entirely" and taken dictatorial control of OpenAI.
Read the original, simple Charter below, which Altman himself signed. Since then, he has violated every word of it.
OpenAI Charter
Our Charter describes the principles we use to execute on OpenAI's mission. This document reflects the strategy we've refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.
OpenAI's mission is to ensure that artificial general intelligence (AGI) - by which we mean highly autonomous systems that outperform humans at most economically valuable work - benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Broadly distributed benefits
We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Long-term safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next two years."
Technical leadership
To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities - policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we'll strive to lead in those areas that are directly aligned with our mission and expertise.
Cooperative orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI's global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Altman has proven himself to be a shrewd, unprincipled, and ruthless operator who will get what he wants regardless of who gets crushed along the way.
Now he has sole possession of the most dangerous weapon since the atomic bomb - 'Artificial General Intelligence' (AGI). What could go wrong?
Source
OpenAI, the company that brought you ChatGPT, just sold you out.
Since its founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They've touted the company's unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe...
But this week, the news broke that OpenAI will no longer be controlled by the nonprofit board. OpenAI is turning into a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously emphasized that he didn't have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.
In an announcement that hardly seems coincidental, chief technology officer Mira Murati said shortly before that news broke that she was leaving the company. Employees were so blindsided that many of them reportedly reacted to her abrupt departure with a "WTF" emoji in Slack. WTF indeed...
The whole point of OpenAI was to be nonprofit and safety-first. It began sliding away from that vision years ago when, in 2019, OpenAI created a for-profit arm so it could rake in the kind of huge investments it needed from Microsoft as the costs of building advanced AI scaled up. But some of its employees and outside admirers still held out hope that the company would stick to its principles. That hope can now be put to bed.
"We can say goodbye to the original
version of OpenAI that wanted to be unconstrained by
financial obligations," Jeffrey Wu, who joined the company
in 2018 and worked on early models like GPT-2 and GPT-3,
told me.
"Restructuring around a core for-profit
entity formalizes what outsiders have known for some time:
that OpenAI is seeking to profit in an industry that has
received an enormous influx of investment in the last few
years," said Sarah Kreps, director of Cornell's Tech Policy
Institute.
The shift departs from OpenAI's,
"founding emphasis on safety,
transparency and an aim of not concentrating power."
And if this week's news is the final death knell for OpenAI's lofty founding vision, it's clear who killed it.
How Sam Altman became an existential risk to OpenAI's mission
When Elon Musk, who was worried that AI could pose an existential risk to humanity, cofounded OpenAI in 2015 along with Altman and others, the budding research lab introduced itself to the world with these three sentences:
OpenAI is a nonprofit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
All of that is objectively false now...
Since Altman took the helm of OpenAI in 2019, the company has been drifting from its mission.
That year, the company - meaning the original nonprofit - created a for-profit subsidiary so it could pull in the huge investments needed to build cutting-edge AI. But it did something unprecedented in Silicon Valley: It capped how much profit investors could make. They could get up to 100 times what they put in, but beyond that, the money would go to the nonprofit, which would use it to benefit the public. For example, it could fund a universal basic income program to help people adjust to automation-induced joblessness.
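To make the cap's arithmetic concrete, here is a minimal sketch in Python. The function name and the dollar figures are hypothetical, assumed for illustration only, and the 100x figure is the cap the article describes, not OpenAI's actual deal terms:

def split_proceeds(investment, gross_return, cap_multiple=100):
    """Split an investor's gross return under a capped-profit structure.

    Hypothetical sketch - the function and numbers are assumptions
    for illustration, not OpenAI's actual contract terms.
    """
    # The investor keeps returns up to cap_multiple times the investment.
    investor_share = min(gross_return, cap_multiple * investment)
    # Everything above the cap flows to the nonprofit for public benefit.
    nonprofit_share = max(gross_return - investor_share, 0)
    return investor_share, nonprofit_share

# Example: a $10 million investment whose stake becomes worth $1.5 billion.
investor, nonprofit = split_proceeds(10_000_000, 1_500_000_000)
print(f"Investor keeps: ${investor:,}")   # $1,000,000,000 (the 100x cap)
print(f"Nonprofit gets: ${nonprofit:,}")  # $500,000,000

Under such a scheme, removing the cap would hand the entire $500 million excess in this example to the investor rather than the nonprofit - which is exactly the kind of transfer described below.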
Over the next few years, OpenAI increasingly deprioritized its focus on safety as it rushed to commercialize products. By 2023, the nonprofit board had grown so suspicious of Altman that it tried to oust him. But he quickly clawed his way back to power, exploiting his relationship with Microsoft, with a new board stacked in his favor. And earlier this year, OpenAI's safety team imploded as staffers lost faith in Altman and quit the company.
Now, Altman has taken the final step in consolidating his power: He's stripped the board of its control entirely. Although it will still exist, it won't have any teeth.
"It seems to me the original nonprofit
has been disempowered and had its mission reinterpreted to
be fully aligned with profit," Wu said.
Profit may be what Altman feels the company desperately needs. Despite a supremely confident blog post published this week, in which he claimed that AI would help with "fixing the climate, establishing a space colony, and the discovery of all of physics," OpenAI is actually in a jam.
It's been struggling to find a clear route to financial success for its models, which cost hundreds of millions - if not billions - to build. Restructuring the business into a for-profit could help attract investors.
But the move has some observers - including Musk himself - asking: How could this possibly be legal...?
If OpenAI does away with the profit cap, it would be redirecting a huge amount of money - prospective billions of dollars in the future - from the nonprofit to investors. Because the nonprofit is there to represent the public, this would effectively mean shifting billions away from people like you and me. As some are noting, it feels a lot like theft.
"If OpenAI were to retroactively remove
profit caps from investments, this would in effect transfer
billions in value from a non-profit to for-profit
investors," Jacob Hilton, a former employee of OpenAI who
joined before it transitioned from a nonprofit to a
capped-profit structure.
"Unless the non-profit were appropriately
compensated, this would be a money grab.
In my view, such a thing would be
incompatible with OpenAI's charter, which states that
OpenAI's primary fiduciary duty is to humanity, and I do not
understand how the law could permit it."
But because OpenAI's structure is so unprecedented, the legality of such a shift might seem confusing to some. And that may be exactly what the company is counting on.
Asked to comment on this, OpenAI referred only to its statement in Bloomberg. There, a company spokesperson said OpenAI remains "focused on building AI that benefits everyone," adding that "the nonprofit is core to our mission and will continue to exist."
The take-home message is clear: Regulate, regulate, regulate
Advocates for AI safety have been arguing that we need to pass regulation that would provide some oversight of big AI companies - like California's SB 1047 bill, which Gov. Gavin Newsom must either sign into law or veto in the next few days. Now, Altman has neatly made their case for them.
"The general public and regulators should
be aware that by default, AI companies will be incentivized
to disregard some of the costs and risks of AI deployment -
and there's a chance those risks will be enormous," Wu said.
Altman is also validating the concerns of his ex-employees who published a proposal demanding that employees at major AI companies be allowed a "right to warn" about advanced AI. Per the proposal: "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."
Obviously, they were right: OpenAI's nonprofit was meant to reign over the for-profit arm, but Altman just flipped that structure upside down.
After years of sweet-talking the press, the public, and policymakers in Congress, assuring all that OpenAI wants regulation and cares more about safety than about money, Altman is not even bothering to play games anymore. He's showing everyone his true colors.
Governor Newsom, are you seeing this...?
Congress, are you seeing this...?
World, are you seeing this...?