By: Noah Gibbs (Guest Writer)

Artificial intelligence (AI) is poised to drastically change our lives. Driverless cars are changing how we move from place to place, autonomous delivery drones are changing how we shop, and AI algorithms influence whom we consider dating. The proliferation of AI is likely to touch every aspect of our lives, including how we fight our wars.

In October 2012, a group of NGOs formed the “Campaign to Stop Killer Robots”, whose purpose is to promote an international ban on fully autonomous weapons. As of November 2018, 28 countries supported such a ban; among them, Austria was the only European nation in favor of a comprehensive ban.

While the objective of the Campaign to Stop Killer Robots is noble in that it seeks to prevent human suffering, a ban on autonomous weapons is the wrong way to deal with the challenges posed by the militarization of artificial intelligence. One problem with a ban is that it is nearly impossible to define autonomous weapons in a way that is universally accepted, because weapons can have varying degrees of autonomy. For example, a heat-seeking air-to-air missile will fly towards a heat source within its field of view. Once the missile is fired, the human operator has little control over which heat source it might target; the missile ‘chooses’ what it will destroy. Despite this, few people would call a heat-seeking missile intelligent, even though it exhibits a degree of autonomous decision-making. Robots equipped with sophisticated AI are a different story. With the right AI, a robot may be capable of identifying a target as specific as an individual person’s face and deciding on its own whether to engage. Weapons that display such behavior are said to be fully autonomous.

Unfortunately, there is no clear dividing line between fully autonomous and semi-autonomous weapons. Military drones provide an excellent example of this problem. Today, most military drones are flown remotely by a pilot sitting somewhere on the ground, and decisions to launch an attack from a drone are always made by that human pilot. AI could change this dynamic by allowing a commander to order a drone to attack anything that qualifies as a target within a given area. The drone could then loiter over the area, searching for targets and deciding on its own whether to attack them. Hardware-wise, both types of drones are the same. The only thing that separates a drone flown by a human from a drone that makes decisions itself is software.
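
To see why software, not hardware, is the dividing line, consider a deliberately simplified sketch in Python. Every name here (the Sensor and Model stand-ins, the two policy classes) is a hypothetical placeholder invented for illustration, not a description of any real system. The point is architectural: the patrol loop and the ‘hardware’ stand-in are identical in both configurations, and only the plugged-in decision component changes.

```python
# A toy sketch: the same patrol loop with two interchangeable decision
# components. All names are hypothetical placeholders, not real systems.
import random

class Sensor:
    """Stand-in for onboard sensors; yields abstract 'contacts'."""
    def contacts(self):
        return [f"contact-{i}" for i in range(3)]

class Model:
    """Stand-in for an onboard recognition model."""
    def score(self, contact):
        return random.random()  # placeholder confidence score

class HumanInTheLoopPolicy:
    """Defers every engagement decision to a remote human operator."""
    def decide(self, contact):
        answer = input(f"Operator, engage {contact}? [y/N] ")
        return answer.strip().lower() == "y"

class AutonomousPolicy:
    """Decides on its own, based on the model's confidence score."""
    def __init__(self, model, threshold=0.9):
        self.model, self.threshold = model, threshold
    def decide(self, contact):
        return self.model.score(contact) >= self.threshold

def patrol(sensor, policy):
    """The control loop is identical whichever policy is plugged in."""
    return [c for c in sensor.contacts() if policy.decide(c)]

# Same 'hardware' (Sensor), two different pieces of software:
# patrol(Sensor(), HumanInTheLoopPolicy())
# patrol(Sensor(), AutonomousPolicy(Model()))
```

From the outside, both configurations look the same; only someone reading the code could tell which decision component is installed.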

Software is, by its nature, impossible to observe unless you have access to the computers used to program the autonomous weapon. As such, verifying an autonomous weapons ban is exceedingly difficult, because it requires states to grant unprecedented access to their military facilities and software. It is unlikely that any military would be willing to provide such access, due to the risk of cyber espionage. An adversary that gains access to an autonomous weapon’s software could potentially tamper with its training so that the weapon no longer recognizes enemy targets. Even worse, they could trick the weapon into attacking friendly forces or civilians.
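
The espionage risk is easiest to see at the level of training data. Below is a toy sketch (Python with NumPy and scikit-learn, using entirely synthetic data and invented labels, not drawn from any real system) of how an adversary with write access to a training pipeline could flip labels so that the resulting model systematically fails to recognize one class, which is the kind of manipulation the paragraph above worries about.

```python
# Toy illustration of training-data poisoning on synthetic data.
# Nothing here refers to a real system; the data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic classes the model is supposed to tell apart.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

clean = LogisticRegression().fit(X, y)

# An adversary with access to the training pipeline quietly relabels
# most class-1 examples as class 0 before training.
y_poisoned = y.copy()
y_poisoned[(y == 1) & (rng.random(len(y)) < 0.9)] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

class_one = X[y == 1]
print("clean model flags class 1:   ", clean.predict(class_one).mean())
print("poisoned model flags class 1:", poisoned.predict(class_one).mean())
# The poisoned model almost never flags class 1: the model code is untouched,
# only the data it learned from was manipulated.
```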

The inability to observe a weapon’s software would make any autonomous weapons ban a risky proposition for the states entering into it. States could easily cheat the treaty regime by developing autonomous software for weapon systems that are normally manned. Again, drones exemplify this problem. A state could claim that all its drones are flown by humans while secretly developing an AI that could also fly the drone. It would be impossible for observers to tell who or what was flying the drone.

Given the ease of cheating, an autonomous weapons ban would inevitably give rise to highly secretive autonomous weapons programmes. These programmes would be significantly more dangerous than today’s because their secrecy would mean the weapons are tested less thoroughly than those developed under an unclassified programme. A secret autonomous weapon would therefore be more likely to behave unexpectedly when introduced to an actual battlefield. Rather perversely, an autonomous weapons ban may thus increase the risk of a catastrophic loss of control over an autonomous weapon.

Instead of a ban, states should rigorously and publicly test any autonomous weapon system before deploying it into actual combat. Moreover, states should cooperate with one another to establish general guidelines for how autonomous weapons should interact with each other. These guidelines would help ensure that the interaction of two states’ autonomous weapons does not produce unexpected escalation. Developing such guidelines will not be easy. Yet the rules that military aircraft and ships already follow when they encounter foreign forces set a precedent for such guidelines. With a bit of effort and transparency, we can ensure autonomous weapons never escape human control.