Weaponised AI is coming. Are algorithmic forever wars our future?

The US military is creating a more automated form of warfare - one that will greatly increase its capacity to wage war everywhere forever.
The anniversary of the September 11 attacks brought a new milestone: we've been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where we're officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of 2018.
Now, the Pentagon is investing heavily in technologies that will intensify these wars. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare - one that will greatly increase its capacity to wage war everywhere forever.
JEDI, the Joint Enterprise Defense Infrastructure, is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger.
The contract is worth as much as $10bn over 10 years, which is why big tech companies are fighting hard to win it. (Not Google, however, where a pressure campaign by workers forced management to drop out of the running.)
Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With some 3.4 million users and 4 million devices, the defense department's digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.
By pooling the military's data into a modern cloud platform, and using the machine-learning services that such platforms provide to analyze that data, JEDI will help the Pentagon realize its AI ambitions.
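To make "machine-learning services that such platforms provide" concrete, here is a minimal, purely illustrative Python sketch of the kind of managed call a commercial cloud exposes - in this case Amazon's Rekognition image-labelling API, since Amazon is the provider mentioned above. Nothing here reflects JEDI itself; the bucket and file names are hypothetical.

    # Illustrative only: labelling an image with a managed cloud vision service.
    # Assumes AWS credentials are configured; the bucket and object names are made up.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": "example-imagery-bucket",
                            "Name": "frames/frame_000123.jpg"}},
        MaxLabels=10,
        MinConfidence=70.0,
    )

    for label in response["Labels"]:
        print(f'{label["Name"]}: {label["Confidence"]:.1f}%')

The point is not the specific service but how little code stands between data pooled in a cloud platform and automated analysis of that data.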
In June, the Pentagon established the Joint Artificial Intelligence Center (JAIC), which will oversee the roughly 600 AI projects currently under way across the department at a planned cost of $1.7bn.
And in September, the Defense Advanced Research Projects Agency (DARPA), the Pentagon's storied R&D wing, announced it would be investing up to $2bn over the next five years into AI weapons research.
Fully autonomous weapons are indeed a frightening near-future scenario, and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.
But you don't need algorithms pulling the trigger for algorithms to play an extremely dangerous role.
With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and some 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.
But who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries? If War is a Racket, in the words of marine legend Smedley Butler, the forever war is one of the longest cons yet.
It's one thing to look at a map of North Vietnam and pick places to bomb. It's quite another to sift through vast quantities of information from all over the world in order to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive.
This is where AI - or, more precisely, machine learning - comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.
Maven, also known as the Algorithmic Warfare Cross-Functional Team, is the military's "pathfinder" AI project; it made headlines recently for sparking an employee revolt at Google over the company's involvement.
Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.
Maven's software automates that work, then relays its discoveries to a human.
So far, it's been a big success.
The goal is to eventually load the software on to the drones themselves, so they can locate targets in real time.
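What "scanning drone video footage" amounts to in practice is running an object detector over each frame and keeping the detections above some confidence threshold. The sketch below is a hedged approximation using a generic pretrained detector from torchvision; it is not Maven's (classified) software, and the video filename and threshold are invented.

    # Illustrative only: generic per-frame object detection on a video file,
    # not the Pentagon's actual Maven software.
    import cv2
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    cap = cv2.VideoCapture("example_drone_clip.mp4")  # hypothetical file
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            detections = model([to_tensor(rgb)])[0]
        for box, label, score in zip(detections["boxes"],
                                     detections["labels"],
                                     detections["scores"]):
            if score > 0.8:  # arbitrary confidence threshold
                print(frame_idx, int(label), [round(v) for v in box.tolist()])
        frame_idx += 1
    cap.release()

The human analyst in the loop is whoever acts on those detections; pushing the same model onto the drone itself is what locating targets in real time would mean.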
Code for America's Jen Pahlka puts it in terms of "sharp knives" versus "dull knives". In the case of weaponized AI, however, the knives in question aren't particularly sharp.
There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms - algorithms that can't recognize black faces, or that reinforce racial bias in policing and criminal sentencing.
Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?
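The failure mode is easy to demonstrate. In the sketch below - entirely synthetic data, no real system - a classifier trained on labels that already skew against one group ends up with a much higher false-positive rate for that group, which is precisely the dynamic behind the facial-recognition and sentencing examples.

    # Synthetic demonstration of disparate false-positive rates; no real data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)            # a protected attribute, 0 or 1
    signal = rng.normal(size=n)              # a genuinely predictive feature
    # Biased labels: group 1 is flagged more often regardless of the signal.
    y = ((signal + 0.8 * group + rng.normal(size=n)) > 1.0).astype(int)

    X = np.column_stack([signal, group])     # the model is allowed to see group
    clf = LogisticRegression().fit(X, y)
    pred = clf.predict(X)

    for g in (0, 1):
        negatives = (group == g) & (y == 0)
        fpr = pred[negatives].mean()         # false-positive rate for group g
        print(f"group {g}: false-positive rate = {fpr:.3f}")

A model can be perfectly "accurate" against biased labels and still be systematically wrong about who it flags.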
In 2017 alone, the US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these don't suggest a few honest mistakes here and there, but a systemic indifference to "collateral damage".
Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.
The so-called "signature strikes" conducted by the US military and the CIA play similar tricks with the concept of the combatant.
These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain "signatures" - which can be as vague as being a military-aged male in a particular area.
AI promises to find those enemies faster - even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a (classified) machine-learning model associates with hostile activity.
Call it death by big data...
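Computationally, a "signature" reduces to a handful of coarse features fed to a scoring rule. The sketch below is deliberately crude and entirely hypothetical - every feature, weight and threshold is invented - but it illustrates how little separates a pattern of life from a pattern of hostility once the decision is a number crossing a threshold.

    # Deliberately crude, hypothetical illustration of "signature"-style scoring.
    # Every feature, weight and threshold is invented; this reflects no real system.
    from dataclasses import dataclass

    @dataclass
    class ObservedPattern:
        military_aged_male: bool          # the vaguest "signature" of all
        in_flagged_area: bool
        visits_to_flagged_sites: int
        travels_in_convoy: bool

    def hostility_score(p: ObservedPattern) -> float:
        score = 0.0
        score += 0.4 if p.military_aged_male else 0.0
        score += 0.3 if p.in_flagged_area else 0.0
        score += 0.1 * min(p.visits_to_flagged_sites, 3)
        score += 0.2 if p.travels_in_convoy else 0.0
        return score

    THRESHOLD = 0.7  # arbitrary cut-off
    someone = ObservedPattern(True, True, 0, False)  # the "wrong" age in the "wrong" place
    print(hostility_score(someone), hostility_score(someone) >= THRESHOLD)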
But algorithmic warfare will bring big tech deeper into the military-industrial complex, and give billionaires like Jeff Bezos a powerful incentive to ensure the forever war lasts forever.
Enemies will be found. Money will be made...