"Killer AI" has been a popular motif in movies and sci-fi for decades--just think of movies like Wargames, the Terminator series, and even the latest Mission Impossible movie. In some way or other, an artificial intelligence system becomes self-aware, decides it is smarter than its human creators, and then decides to take over. The stunning progress of AI technologies in this past year has only further increased the level of concern.
Is this concern justified, or is it just overhyped paranoia? Doomsday scenarios drive a lot of page views and sell a lot of movie tickets, after all!
I'd argue both--and more boldly and specifically, I predict that within two years, we will see the first instance of an AI-enhanced weapon making a "kill" decision autonomously.
This is a dramatic prediction, to be sure, but the forces pushing us in this direction are already in motion--namely, the war in Ukraine to repel the Russian invasion.
A recent edition of the Economist magazine has a thoughtful and in-depth series of articles about the ongoing war. This article, in particular, is worth reading: https://www.economist.com/special-report/2023/07/03/the-latest-in-the-battle-of-jamming-with-electronic-beams. For those of you who are not Economist subscribers, here is the summary.
Amongst the weaponry supplied to Ukraine was an artillery system called Excalibur. Excalibur is a precision, GPS-guided artillery shell that vastly increases fire accuracy. This increased accuracy, in turn, makes a dramatic difference in destroying enemy tanks and fortifications. Excalibur and similar high-tech Western military gear are credited with significantly helping the Ukrainians defend their country.
However, by March of 2023, the Ukrainians found that the accuracy of Excalibur shells started dropping dramatically--and it wasn't just these artillery shells that were failing. Drones and other weaponry relying on GPS were similarly having trouble. The culprit was GPS jamming. The Russians had deployed electronic warfare systems to the front, including GPS jammers.
Of course, introducing GPS jammers invited a counter-response by Ukraine and its Western supporters. GPS jammers, by definition, emit powerful radio signals, so they become obvious targets in their own right. While many of the drones deployed in Ukraine use civilian-grade GPS, the US military uses a more jam-resistant form of GPS known as "M-code." But M-code has its own set of issues; ultimately, it is not wholly jam-proof either. Thus, simply providing Ukraine with US military-grade GPS systems is not sufficient.
The most logical next step, then, is to give these weapon systems more onboard sensors and intelligence, so the artillery shell or drone itself can make the final guidance corrections to hit its target accurately. Thanks to phenomenal advances in cheap, low-power computing (think Raspberry Pis, for example) and in AI, it is possible to equip a drone or artillery shell with an onboard AI that uses vision recognition and other technologies to guide itself to its target. Similar technology is already widely used in many civilian scenarios--think of the self-driving features in Tesla, Mercedes, and other cars. Many consumer drones now have AI-powered autonomous features, like obstacle avoidance.
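To make the "widely available" point concrete, here is a minimal sketch of generic onboard object recognition using off-the-shelf, open-source tools. It assumes PyTorch and torchvision are installed and that video frames arrive from some camera pipeline; the model choice, function name, and threshold are illustrative only, not a real guidance system.

```python
# Minimal sketch: generic object detection with a small pretrained model,
# the kind of workload a Raspberry Pi-class board can run in near real time.
# Assumes torchvision >= 0.13; nothing here is specific to any weapon system.
import torch
from torchvision.models.detection import (
    ssdlite320_mobilenet_v3_large,
    SSDLite320_MobileNet_V3_Large_Weights,
)

weights = SSDLite320_MobileNet_V3_Large_Weights.DEFAULT
model = ssdlite320_mobilenet_v3_large(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

@torch.no_grad()
def label_frame(frame, score_threshold=0.5):
    """Return (label, score, box) tuples for objects detected in one RGB frame.

    `frame` is a float tensor of shape [3, H, W] with values in [0, 1].
    """
    detections = model([frame])[0]  # the detector accepts a list of image tensors
    return [
        (categories[int(label)], float(score), box.tolist())
        for label, score, box in zip(
            detections["labels"], detections["scores"], detections["boxes"]
        )
        if score >= score_threshold
    ]

# Hypothetical usage with a frame from any camera library:
# for label, score, box in label_frame(frame):
#     print(f"{label}: {score:.2f} at {box}")
```

The point is not this particular model; it is that this level of "what am I looking at?" capability is a free download away, which is exactly why it is so tempting to bolt it onto weapons when GPS fails.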
The bottom line: there are widely available AI technologies today that could be employed to maintain or increase weapon accuracy, even in the presence of GPS jamming. Of course, it's not just guidance systems that could benefit from AI--AI could impact nearly every aspect of warfare, from guidance to sensors to logistics to battlefield coordination: https://www.wired.com/story/helsing-ai-military-defense-tech/
These advances are very logical and eminently feasible, and they could make a big difference in ending the war more quickly. Given the immense human suffering the war has caused, it's hard to argue that Ukrainians should not do everything they can to achieve victory and bring the war to an end.
Therein lies the dilemma.
For the entirety of humanity's history, war has been fought by humans, and importantly, the decision to kill another person, even in warfare, was made by a human. The specific technologies have changed over time, from clubs to bows and arrows to today's GPS-guided artillery shells. But there was always a human in the loop making the decision, and that human is ultimately accountable (think of the war crimes trials at The Hague, for instance).
Introducing AI puts us on a very slippery slope where humans are NOT making those decisions. An AI could very literally be making the final decision of who to kill and who not to kill.
Some of you may be saying at this point: "But wait, Alex, the AI is only guiding the artillery shell or drone. The decision to fire was still made by a human".
On the surface, that analysis is correct. Plenty of other technologies are used in guidance systems besides AI; why is AI special?
The specialness lies in how an AI system makes decisions. A traditional GPS guidance system is fairly straightforward: hold a set course to a specific destination. But an AI system is trying to work out something much more complex: that set of pixels in its video feed is a tank, and that other set of pixels is not a tank, so aim for the tank.
Let's look at this from the perspective of someone trying to defeat an AI-augmented system. Just as the Russians introduced GPS jammers into the Ukraine war, any adversary facing AI-guided weapons is going to ask the question: what can I do to counter or foil the AI? Can I trick it into not realizing my tank is a tank?
Short answer: yes, this is entirely possible. Privacy advocates, for example, have spent years exploring ways to defeat vision and face recognition systems. In 2010, the 'CV Dazzle' project used patterns of makeup to foil the face detection systems of the time: https://adam.harvey.studio/cvdazzle
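The same idea has a purely digital counterpart that is well documented in the machine learning literature: the "adversarial example," a tiny, carefully chosen perturbation that flips a classifier's answer. Here is a minimal sketch of the classic fast gradient sign method (FGSM), assuming PyTorch/torchvision and a generic pretrained classifier; it illustrates the general weakness, not any particular military system.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, which often flips its prediction
# while the change remains essentially invisible to a human.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()  # resize/normalize as the model expects

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that tends to be misclassified.

    `image` is a preprocessed [3, H, W] float tensor; `true_label` its class index.
    """
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    return (x + epsilon * x.grad.sign()).squeeze(0).detach()

# Hypothetical usage: `pil_image` from any source, `true_label` its known class.
# img = preprocess(pil_image)
# adv = fgsm_perturb(img, true_label)
# print(model(adv.unsqueeze(0)).argmax().item())  # frequently differs from true_label
```

Physical-world versions of the same trick--stickers, paint patterns, tarps--are harder to pull off, but researchers have demonstrated them repeatedly.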
Thus, if I know there are AI-guided artillery shells or armed drones aimed in my direction, what if I put a tarp over my tank that makes it look like a school bus? Surely the AI would not select a school bus as a target?
But if that works, what about the reverse? Unfortunately, there are unethical, amoral, and truly evil people in the world. What happens when such a person paints a picture of a tank on top of a school bus? While we would all like to live in a world where such atrocities do not happen, they still happen. How is an AI supposed to decide what is a decoy versus what is real? It gets messy fast--what if that same amoral commander surrounds his military equipment with hostages? This would be a complex situation for any human leader, even without the involvement of AI.
Bringing this all together, here is a hypothetical but likely scenario at some point in the not-too-distant future: through drone footage (interpreted and flagged by an AI), a Ukrainian commander spots a convoy of Russian tanks and GPS jamming equipment moving through a residential area in a city. Confident in the accuracy of his GPS and AI-guided munitions, he orders a strike with AI-guided explosive drones.
As those drones fly toward their targets, the Russians turn on their GPS jamming equipment; the drones instantly switch to AI guidance mode. Now the final guidance computations are being made by the AI.
The AI sees the tanks amidst the everyday cars, trucks, ambulances, and school buses of civilian life. Target acquired.
Or was it?
Who did the AI decide to kill, and why?
I am not going to pretend there are easy answers to this dilemma. Undoubtedly, AI can improve the lethality and effectiveness of warfighting. The pressures and real-life human suffering of the large-scale war in Ukraine will inexorably drive rapid adoption of these technologies.
But much like the decision to use the first nuclear weapons in World War II, making the AI the final decision maker on "who to kill" will open a Pandora's box from which there is no easy return. While I think (and hope!) the Terminator-style "AI takes over the world" scenarios are far-fetched and unlikely, I am far more disturbed by the much nearer prospect of turning killing decisions over to a computer algorithm.
Maybe we can rationalize AI-augmented munition guidance for artillery shells and drones en route to a target. But what's next? Why not a drone that loiters over enemy territory, waiting to spot an otherwise elusive target?
It's a very slippery slope...