We may not have much time until life as we know it is over.
So we need to place a bet on something that can save us.
Since the stakes are so high, we should ante up and go all in on our bet.
from VOX Website
It's no accident...
The intertwining of religion and technology is centuries old...
If I told you all this, would you assume that I was a religious preacher or an AI researcher?
Most secular technologists who are building AI just don't recognize that.
These visions are almost identical to the visions of Christian eschatology, the branch of theology that deals with the "end times" or the final destiny of humanity...
Christian eschatology tells us that we're all headed toward the "four last things": death, judgment, heaven, and hell.
Although everyone who's ever lived so far has died, we'll be resurrected after the second coming of Christ to find out where we'll live for all eternity.
Our souls will face a final judgment, care of God, the perfect decision-maker...
Five years ago, when I began attending conferences in Silicon Valley and first started to notice parallels like these between religion talk and AI talk, I figured there was a simple psychological explanation.
Both were a response to core human anxieties:
Religious thinkers and AI thinkers had simply stumbled upon similar answers to the questions that plague us all.
So I was surprised to learn that the connection goes much deeper.
In fact, historians tracing the influence of religious ideas contend that we can draw a straight line from Christian theologians in the Middle Ages to the father of empiricism in the Renaissance to the futurist Ray Kurzweil to the tech heavyweights he's influenced in Silicon Valley.
Occasionally, someone there still dimly senses the parallels.
Mostly, though, the figures spouting a vision of AGI as a kind of techno-eschatology - from Sam Altman, the CEO of ChatGPT-maker OpenAI, to Elon Musk, who wants to link your brain to computers - express their ideas in secular language.
They're either unaware or unwilling to admit that the vision they're selling derives much of its power from the fact that it's plugging into age-old religious ideas.
Instead, we should understand the history of these ideas - of virtual afterlife as a mode of salvation, say, or of moral progress understood as technological progress - so we can see that they're not immutable or inevitable:
We don't have to fall prey to the danger of the single story.
The idea of AI has always been deeply religious
In the Abrahamic religions that shaped the West, it all goes back to shame.
Remember what happens in the book of Genesis?
When Adam and Eve eat from the tree of knowledge, God expels them from the garden of Eden and condemns them to all the indignities of flesh-and-blood creatures: pain in childbirth, toil, and death.
Humankind is never the same after that fall from grace. Before the sin, we were perfect creatures made in the image of God; now we're miserable meat sacks.
But in the Middle Ages, Christian thinkers developed a radical idea, as the historian David Noble explains in his book The Religion of Technology.
The influential ninth-century philosopher John Scotus Eriugena, for example, insisted that part of what it meant for Adam to be formed in God's image was that he was a creator, a maker.
So if we wanted to restore humanity to the God-like perfection of Adam prior to his fall, we'd have to lean into that aspect of ourselves.
Eriugena wrote that...
This idea took off in medieval monasteries, where the motto "ora et labora" - prayer and work - began to circulate.
Even in the midst of the so-called Dark Ages, some of these monasteries became hotbeds of engineering, producing inventions like the first known tidal-powered water wheel and impact-drilled well.
Catholics became known as innovators; to this day, engineers have four patron saints in the religion.
There's a reason why some say the Catholic Church was the Silicon Valley of the Middle Ages...
This wasn't tech for tech's sake, or for profit's sake.
The hope was that by recovering humanity's original perfection, we could usher in the kingdom of God.
As Noble writes...
The medieval identification of tech progress with moral progress shaped successive generations of Christian thinkers all the way into modernity.
A pair of Bacons illustrates how the same core belief - that tech would accomplish redemption - influenced both religious traditionalists and those who adopted a scientific worldview.
In the 13th century, the alchemist Roger Bacon, taking a cue from biblical prophecies, sought to create an elixir of life that could achieve something like the Resurrection as the apostle Paul described it.
The elixir, Bacon hoped, would give humans not just immortality, but also magical abilities like traveling at the speed of thought.
Then in the 16th century, Francis Bacon (no relation) came along.
Superficially he seemed very different from his predecessor - he critiqued alchemy, considering it unscientific - yet he prophesied that we'd one day use tech to overcome our mortality...
Christians weren't the only ones dreaming along these lines: Jewish folklore tells of the golem, an artificial humanoid fashioned from clay and brought to life by its human makers. In the stories, the golem sometimes offers salvation by saving the Jewish community from persecution. But other times, the golem goes rogue, killing people and using its powers for evil.
If all of this is sounding distinctly familiar - well, it should.
The golem idea has been cited in works on AI risk, like the 1964 book God & Golem, Inc. by mathematician and philosopher Norbert Wiener.
You hear the same anxieties today in the slew of open letters released by technologists, warning that AGI will bring upon us either salvation or doom.
Reading these statements, you might well ask: If technologists believe AGI might spell our doom, why are they so intent on building it?
For an answer to that, come with me on one more romp through history, and we'll start to see how the recent rise of three intertwined movements has molded Silicon Valley's visions for AI.
Enter transhumanism, effective altruism, and longtermism
A lot of people assume that when Charles Darwin published his theory of evolution in 1859, all religious thinkers instantly saw it as a horrifying, heretical threat, one that dethroned humans as God's most godly creations.
But some Christian thinkers embraced it as gorgeous new garb for the old spiritual prophecies.
A prime example was Pierre Teilhard de Chardin, a French Jesuit priest who also studied paleontology in the early 1900s.
He believed that human evolution, nudged along with tech, was actually the vehicle for bringing about the kingdom of God, and that the melding of humans and machines would lead to an explosion of intelligence, which he dubbed the omega point.
Our consciousness would become "a state of super-consciousness" where we merge with the divine and become a new species.
Teilhard influenced his pal Julian Huxley, an evolutionary biologist who was president of both the British Humanist Association and the British Eugenics Society, as author Meghan O'Gieblyn documents in her 2021 book God, Human, Animal, Machine.
It was Huxley who popularized Teilhard's idea that we should use tech to evolve our species, calling it "transhumanism."
That, in turn, influenced the futurist Ray Kurzweil, who made basically the same prediction as Teilhard:
Only instead of calling it the omega point, Kurzweil rebranded it as the "singularity."
Kurzweil has copped to the spiritual parallels, and so have those who've formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt's Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski's short-lived Way of the Future church.
But many others, such as Oxford philosopher Nick Bostrom, insist that unlike religion, transhumanism relies on...
These days, transhumanism has a sibling, another movement that was born in Oxford and caught fire in Silicon Valley: effective altruism.
Effective altruists also say their approach is rooted in secular reason and evidence.
Yet EA actually mirrors religion in many ways...
Most importantly for our purposes, EA's eschatology comes in the form of its most controversial idea, longtermism, which Musk has described as "a close match for my philosophy."
It argues that the best way to help the most people is to focus on ensuring that humanity will survive far into the future (as in, millions of years from now), since many more billions of people could exist in the future than in the present - assuming our species doesn't go extinct first.
And here's where we start to get the answer to our question about why technologists are set on building AGI...
AI progress as moral progress
To effective altruists and longtermists, just sticking with narrow AI is not an option.
Take Will MacAskill, the Oxford philosopher known as the "reluctant prophet" of effective altruism and longtermism.
In his 2022 book What We Owe the Future, he explains why he thinks a plateauing of technological advancement is unacceptable.
He cites his colleague Toby Ord, who estimates that the probability of human extinction through risks like rogue AI and engineered pandemics over the next century is one in six.
Another fellow traveler in EA, Holden Karnofsky, likewise argues that we're living at the "hinge of history" or the "most important century" - a singular time in the story of humanity when we could either flourish like never before or bring about our own extinction.
MacAskill, like Musk, suggests in his book that a good way to avoid extinction is to settle on other planets so we aren't keeping all our eggs in one basket.
As MacAskill's Oxford colleague Bostrom has argued...
The more space, the more happy (digital) humans!
This is where the vast majority of moral value lies: in the far future, among the many billions of people who could yet exist.
When we put all these ideas together and boil them down, we get this basic proposition: the end may be near, and the technology we build now will decide whether humanity goes extinct or flourishes forever.
Any student of religion will immediately recognize this for what it is: apocalyptic logic.
Transhumanists, effective altruists, and longtermists have inherited the view that the end times are nigh and that technological progress is our best shot at moral progress.
For people operating within this logic, it seems natural to pursue AGI.
Even though they view AGI as a top existential risk, they believe we can't afford not to build it given its potential to catapult humanity out of its precarious earthbound adolescence (which will surely end any minute!) and into a flourishing interstellar adulthood (so many happy people, so much moral value!).
But is this rooted in reason and evidence? Or is it rooted in dogma?
The hidden premise here is technological determinism, with a side dash of geopolitics. Even if you and I don't create terrifyingly powerful AGI, the thinking goes, somebody else or some other country will - so why stop ourselves from getting in on the action?
OpenAI's Altman exemplifies the belief that tech will inevitably march forward.
He wrote on his blog in 2017 that...
Why? Have we learned that...?
As AI Impacts lead researcher Katja Grace memorably wrote...
It seems more likely that people tend to pursue innovations when there are very powerful economic, social, or ideological pressures pushing them to.
In the case of the AGI fever that's gripped Silicon Valley, recycled religious ideas - in the garb of transhumanism, effective altruism, and longtermism - have supplied the social and ideological pressures.
As for the economic, profit-making pressure, well, that's always operative in Silicon Valley.
Now, 61 percent of Americans believe AI may threaten human civilization, and that belief is especially strong among evangelical Christians, according to a Reuters/Ipsos poll in May 2023.
To Geraci, the religious studies scholar, that doesn't come as a surprise.
Apocalyptic logic, he noted, is...
...to the point that 4 in 10 US adults currently believe that humanity is living in the end times.
Unfortunately, apocalyptic logic tends to breed dangerous fanaticism.
Today, with talk of AGI doom suffusing the media, true believers drop out of college to go work on AI safety.
In an interview with me last year, MacAskill disavowed extreme gambles.
He told me he imagines that a certain type of Silicon Valley tech bro, thinking there's a 5 percent chance of dying from some AGI catastrophe and a 10 percent chance AGI ushers in a blissful utopia, would be willing to take those odds and rush ahead with building AGI.
When MacAskill told me this, I pictured a Moses figure, looking out over the promised land but knowing he would not reach it.
The longtermist vision seemed to require of him a brutal faith...
We need to decide if this is the type of salvation we want
There's nothing inherently wrong with believing that tech can radically improve humanity's lot. In many ways, it obviously already has.
...Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me.
In fact, Delio is comfortable with the idea that we're already in a new stage of evolution...
She thinks we should be open-minded about proactively evolving our species with tech's help.
"We're already at a point when power is consolidated in a way that doesn't even give us the option to collectively suggest that AGI should not be pursued"...
But she's also clear that we need to be explicit about which values are shaping our tech...
Otherwise...
Geraci agrees.
Part of making deliberate decisions about which values animate tech is also being keenly aware of who gets the power to decide.
According to Schwarz, the architects of artificial intelligence have sold us a vision of AI progress as necessary and inevitable, and have set themselves up as the only experts on it - which makes them enormously powerful, arguably more powerful than our democratically elected officials.
We got to this point in large part because, for the past thousand years, the West has fallen prey to the danger of the single story: the story that equates technological progress with moral progress and salvation.
That narrative has made us inclined to defer to technologists (who, in the past, were also spiritual authorities) on the values and assumptions being baked into their products.
We need to decide what kind of "salvation" we want.
We can, as Noble put it...