By Samuel Hagood, Fall 2023.
Figure 1: By studying the Vietnam War, John Boyd developed the OODA decision-making loop in the early 1970s. Today, businesses, law firms, and militaries all use the OODA cycle to achieve faster and more effective results.[1]
U.S. Air Force strategist John Boyd pioneered the theory that a combatant is always doing one of four things: observing, orienting, deciding, or acting. We’ve always had our eyes to observe; now we have night-vision goggles and satellites. We’ve always had plans that orient us; now we have rules of engagement and computer simulations. We’ve had swords, cannons, muskets, and missiles with which to act, but we’ve never before created instruments of war that decide when and whom to fight.[1] Fully autonomous weapons systems are the first.
Defined by their ability to operate without any human control, fully autonomous weapons are attractive to modern militaries because they aren’t restrained by human limits like sleep. They cost less to operate than manned systems, and they keep troops off dangerous battlefields. These advantages blind nations to the threat autonomous weapons pose to world peace and human rights. A new arms race lurks in our future, one that will ride the wave of the greater societal shift towards an artificially intelligent world. That arms race should be called off before it ever begins. The United Nations should ban the development and use of fully autonomous weapons systems worldwide, because warfare will expose critical weaknesses in the artificial intelligence powering autonomous weapons, and those weaknesses will erode global order.
Autonomous weapons are not widely used at the moment, but they do exist. Azerbaijan fired hundreds of Israel-supplied Harpy drones in its 2020 war against Armenia. Harpy drones are autonomous weapons: after launch, they loitered in the sky until they identified targets, then slammed their 44 pounds of explosives into the ranks of Armenian fighters.[2] The Russian army is developing a fleet of autonomous ground vehicles ranging in size from an ATV to a battle tank.[1] In 2020, the U.S. spent an estimated $35.1 billion researching autonomy in warfare so as not to fall behind China and Russia.[3]
So how exactly do these systems function? The AI behind an autonomous weapon is known as a neural network, or neural net. Like a human brain, a neural network learns to recognize images through experience. Computer scientists show the neural network millions of images, and the network builds its own understanding of the world through trial and error. Today’s neural networks are extremely powerful, performing as well as or better than humans on object recognition tests, and the technology will only continue to improve.[4] But that may not be a good thing.
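For readers who want to see the idea in code, the sketch below is a minimal, illustrative training loop, not the software inside any real weapons system. It assumes the PyTorch library and uses synthetic stand-in images: a small network is shown labeled examples, guesses, measures its error, and adjusts its internal weights.

```python
# A minimal sketch of how a neural network "learns to recognize images through
# experience": it sees labeled examples, guesses, measures its error, and
# adjusts its weights. Assumes PyTorch/torchvision; the dataset is synthetic.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Synthetic stand-in for "millions of images": 1,000 random 32x32 images in 10 classes.
data = datasets.FakeData(size=1000, image_size=(3, 32, 32), num_classes=10,
                         transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=64, shuffle=True)

# A small convolutional network: layers of simple units loosely inspired by neurons.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),   # 10 output scores, one per class
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# "Trial and error": predict, compare with the true label, nudge the weights.
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()      # compute how each weight contributed to the error
        optimizer.step()     # adjust weights to reduce that error
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```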
Figure 2: A U.S. autonomous vehicle known as “Origin” performs maneuvers in desert terrain on August 25, 2020.[8]
Given these technological risks, autonomous weapons will be an unstable and potentially inflammatory factor in global balances of power and in international crises. In 2017, the U.S. Department of Defense hired a team of experts to research the implications of neural networks for warfare. After an exhaustive study, the group explained that, “[the sheer magnitude of the system]… makes it impossible to really understand exactly how the system does what it does …. [I]t is not clear that the existing AI…is immediately amenable to any sort of … validation and verification.”[5] Fully testing and evaluating weapons before they see action is a reasonable requirement already in place in most militaries, but adequately testing neural networks is unrealistic. Neural networks are “black boxes,” defying human attempts to understand why they come to the conclusions they do. Their successes and failures result from a way of processing the world entirely different from, even alien to, our own.[6] For example, specially altered images can fool neural networks. Known as adversarial images, some appear simply as static, while others look like traditional camouflage. They can all trick neural networks into confidently identifying a minivan as a tank or a white shirt as a suicide vest. One of the most sinister aspects of neural networks is not that they will make mistakes, because they will make very few, but that we won’t be able to learn from the mistakes they do make.[1] Cases of malfunction, of needless death, will go unsolved. Despite these uncertainties, autonomous weapons may be too tempting for countries to ignore without a ban on their development and use.
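To make the adversarial-image problem concrete, the sketch below illustrates the fast gradient sign method, one well-known published technique for building such images; it assumes a trained PyTorch classifier like the one sketched earlier (the name `model` is a placeholder). Each pixel is nudged slightly in the direction that most increases the network’s error, producing a picture that looks unchanged to a person but is confidently misread by the machine.

```python
# A simplified sketch of the fast gradient sign method. Assumes a trained
# PyTorch classifier `model` (e.g., the small network sketched above).
import torch
import torch.nn as nn

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` designed to fool `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image.unsqueeze(0)),
                                 torch.tensor([true_label]))
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixel values valid

# Usage sketch: the perturbed image often draws a different, confident prediction.
# adv = fgsm_attack(model, some_image, some_label)
# print(model(adv.unsqueeze(0)).argmax(dim=1))
```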
This ban becomes all the more necessary as the AI in question becomes more proficient. The faster the AI can act, the less warning human operators will have to correct its mistakes. In 2010, stock trading algorithms caused a free fall across the entire U.S. stock market. From 2:32 PM to 3:00 PM on May 6, in a scare known as the Flash Crash, the Dow lost nearly 10% of its value to microsecond interactions between these algorithms.[1] In war, speed is strength, and autonomous weapons operate at speeds similar to those of their stockbroker cousins. But if autonomous weapons interact incorrectly with any of the countless factors present in the modern warzone, like international law, weather conditions, or the opposing forces’ mistakes, they could initiate “flash wars” in seconds. Unlike humans, these weapons will not hesitate to begin conflicts. This inherent instability will be further compounded if both sides use autonomous weapons.[7] For those interested in peace, autonomous weapons’ speed may not be their greatest strength, but, in fact, their greatest weakness.
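The danger of machine-speed interaction can be illustrated with a toy model of my own, not drawn from the cited sources: two automated systems, each programmed to respond slightly more forcefully to the other’s last move, escalate far beyond their starting point before a human supervisor could even react.

```python
# A toy illustration (not from the cited sources) of a "flash" feedback loop:
# two automated systems each answer the other's last move a little more
# aggressively, and the exchange spirals in milliseconds.

HUMAN_REACTION_MS = 250        # rough human reaction time, in milliseconds
MACHINE_STEP_MS = 1            # each automated response takes ~1 ms

level_a, level_b = 1.0, 1.0    # arbitrary "response intensity" of each system
elapsed_ms = 0

while elapsed_ms < HUMAN_REACTION_MS:
    # Each side's policy: respond slightly above the other's last action.
    level_a = level_b * 1.1
    level_b = level_a * 1.1
    elapsed_ms += MACHINE_STEP_MS

print(f"After {elapsed_ms} ms, still before a human could react,")
print(f"response intensity has grown to roughly {level_a:.2e} times the starting level.")
```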
In free nations, autonomous weapons pose the risk of concentrating the ability to wage war in the hands of a select few, subverting democracy. Fewer human boots on the ground is undeniably an attractive prospect. Fathers could stay home with their children, sons with their mothers, while robots protect the nation. Technology will have triumphed … or not. One of the greatest counterweights to the undertaking of war in democratic states is the public’s fear of its consequences. This hesitation to go to battle heightens the quality and lessens the quantity of the wars nations fight. Ominously, autonomous weapons remove this hurdle, inviting countries to reckless courses of action.[2] And we won’t feel the effects of war until it arrives at our doorsteps. Autonomous weapons will invite disconnects between a country’s foreign policy and the conscience of its people, among other grave threats to world peace.
Given their destructive power and speed, one slip in the decision-making of an autonomous weapon, one typo buried deep in its code, could be disastrous. Autonomous weapons are our creations. They are our Frankensteins, and our pens will write their stories. We should end their horror story before it ever begins. We should sign a ban and put down our pens. As we build the future, artificial intelligence should come in peace, not in war.
Figure 3: An MQ-9 Reaper sits on the flight line as remotely piloted aircraft crews wait for the fog to clear during Combat Hammer Nov. 6 at Duke Field, Fla. Unmanned platforms like the MQ-9 may be among the first to boast full autonomy in combat operations. [9]
[1] Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W.W. Norton & Company, 2018.
[2] Anzarouth, Matthew. “Robots That Kill: The Case for Banning Lethal Autonomous Weapon Systems.” Harvard Political Review, Harvard University, 2 Dec. 2021, https://harvardpolitics.com/robots-that-kill-the-case-for-banning-lethal-autonomous-weapon-systems/.
[3] Konaev, Margarita. “U.S. Military Investments in Autonomy and AI Budgetary Assessment – CSET.” Center for Security and Emerging Technology, Georgetown University, Oct. 2020, https://cset.georgetown.edu/wp-content/uploads/CSET-U.S.-Military-Investments-in-Autonomy-and-AI-A-Budgetary-Assessment.pdf.
[4] He, Kaiming, et al. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
[5] JASON. “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD.” The MITRE Corporation, Jan. 2017.
[6] “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” Evolving Artificial Intelligence National Laboratory, University of Wyoming, http://www.evolvingai.org/fooling.
[7] “Clip of the Month: Ethical Implications of Autonomous Weapons, with Paul Scharre.” YouTube, uploaded by Carnegie Council for Ethics in International Affairs, 29 May 2018, https://www.youtube.com/watch?v=spOSyIjVNyk.
[8 (Figure 2)] Fuentes, Osvaldo. A U.S. Army autonomous weapons system known as “Origin” maneuvers through desert terrain as weapons testing commences during Project Convergence 20. 25 Aug. 2020. Picryl.com, Defense Visual Information Distribution Service, https://picryl.com/media/a-us-army-autonomous-weapons-system-known-as-origin-maneuvers-through-desert-8c4918. Accessed 1 Dec. 2023.
[9 (Figure 3)] Tucker, Patrick. “The Reaper UAV Is Getting Its Own Drone Swarm.” Defense One, Defense One, 7 Mar. 2023, www.defenseone.com/technology/2023/03/reaper-uav-getting-its-own-drone-swarm/383676/.