by Steven Metz
June 10, 2016

from the World Politics Review website
Navy Rear Adm. Mat Winter, left, and Navy Adm. Jonathan Greenert with the Navy-sponsored Shipboard Autonomous Firefighting Robot, Washington, Feb. 4, 2015 (Department of Defense photo).
				"Fifteen years after a drone first fired 
			missiles in combat," journalist Josh Smith
				
				recently wrote from Afghanistan, 
				"the U.S. military's drone program has expanded far beyond 
				specific strikes to become an everyday part of the war machine."
				 
			Important as this is, it is only 
			a first step in a much bigger process.    
			As a report co-authored in 
			January 2014 by Robert Work and Shawn Brimley
			
			put it, 
			 
				
				"a move to an entirely new war-fighting regime in which 
			unmanned and autonomous systems play central roles" has begun.
				 
			Where 
			this ultimately will lead is unclear.
Work, who went on to become the deputy secretary of defense in May 2014, and Brimley represent one school of thought about robotic war. Drawing on a body of ideas about military revolutions from the 1990s, they contend that roboticization is inevitable, largely because it will be driven by advances in the private sector. Hence the United States military must embrace and master it rather than risk having enemies do so and gain an advantage.
On the other side of the issue are activists who want to stop the development of military robots. For instance, the United Nations Human Rights Council has called for a moratorium on lethal autonomous systems. Nongovernmental organizations have created what they call the Campaign to Stop Killer Robots, which is modeled on recent efforts to ban land mines and cluster munitions. Other groups and organizations share this perspective.
Undoubtedly the political battle between advocates and opponents of military robots will continue. However, regardless of the outcome of that battle, developments in the next decade will already set the trajectory for the future and have cascading effects. At several points, autonomous systems will cross a metaphorical Rubicon from which there is no turning back.
One such Rubicon is when some nation deploys a robot that can decide to kill a human based on programmed instructions and an algorithm rather than a direct instruction from an operator. In military parlance, these would be robots without "a human in the loop."

In a sense, this would not be entirely new: booby traps and mines have killed without a human pulling the trigger for millennia. But the idea that a machine would make something akin to a decision, rather than simply killing any human who comes close to it, adds greater ethical complexity than a booby trap or mine, where the human who places it has already made the ethical decision to kill.
					"Creating autonomous military 
			robots 
					
					
					that can act at least as 
			ethically as human soldiers 
					
					
					appears to be a sensible goal."    
In Isaac Asimov's science fiction collection "I, Robot," which was one of the earliest attempts to grapple with the ethics of autonomous systems, an ironclad rule programmed into all such machines was that "a robot may not injure a human being." Clearly that is an unrealistic boundary, but as an important 2008 report sponsored by the U.S. Navy argued, "Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal." Among the challenges to meeting this goal that the report's authors identified, "creating a robot that can properly discriminate among targets is one of the most urgent."

In other words, the key is not the technology for killing, but the programmed instructions and algorithms. But that also makes control extraordinarily difficult, since programmed instructions can be changed remotely and in the blink of an eye, instantly transforming a benign robot into a killer.
 
					
A second Rubicon will be crossed when non-state entities field military robots. Since most of the technology for military robots will arise from the private sector, anyone with the money and expertise to operate them will be able to do so.
Even if efforts to control the use of robots by state militaries in the form of international treaties are successful, there would be little to constrain non-state entities from using them. Nations constrained by treaties could be at a disadvantage when facing non-state enemies that are not.
 
					
A third Rubicon will be crossed when autonomous systems are no longer restricted to being temporary mobile presences that enter a conflict zone, linger for a time, then leave, but become an enduring presence on the ground and in the water, as well as in the air, for the duration of an operation. Pushing this idea even further, some experts believe that military robots will not be large, complex autonomous systems, but swarms of small, simple machines networked for a common purpose. Like an insect swarm, this type of robot could function even if many of its constituent components were destroyed or broke down. Swarming autonomous networks would represent one of the most profound changes in the history of armed conflict.

In his seminal 2009 book "Wired for War," Peter Singer wrote, "Robots may not be poised to revolt, but robotic technologies and the ethical questions they raise are all too real." This makes it vital to understand the points of no return.
Even that is only a start: knowing that the Rubicon has been crossed does not by itself tell us what will come next. When Caesar and his legion crossed the Rubicon River in 49 B.C., everyone knew that some sort of conflict was inevitable. But no one could predict Caesar's victory, much less his later assassination and all that it brought. Although the parameters of choice had been bounded, much remained to be determined.
Similarly, Rubicon crossings by military robots are inevitable, but their long-term outcomes will remain unknown. It is therefore vital for the global strategic community, including governments and militaries as well as scholars, policy experts, ethicists, technologists, nongovernmental organizations and international organizations, to undertake a collaborative campaign of learning and public education.

Political leaders must engage the public on this issue without hysteria or hyperbole, identifying all the alternative scenarios for who might use military robots, where they might use them, and what they might use them for. With such a roadmap, it might be possible for political leaders and military officials to push roboticization in a way that limits the dangers rather than amplifies them.
			  