Australian Outlook

Will the Weapons of Tomorrow be Nuclear or Digital?

12 Oct 2022
By Thomas McDonough and Jennifer Spindel

Artificial Intelligence is becoming an increasingly significant technology. This brings risks due to the unpredictability and unknowability of future systems.

Artificial intelligence, commonly referred to as AI, is a type of computer program that can automate tasks that would otherwise be performed by humans. While simple in concept, AI programs are capable of highly complex behaviour that often exceeds expectations and produces solutions to problems that a human mind would not consider. That power, combined with their unpredictability, leads many experts to view AI as dangerous, drawing comparisons to technology as deadly as nuclear weapons. Can a computer program really be more dangerous than an atomic bomb? It depends.

AI is more accessible than nuclear technologies

While nuclear explosions are undoubtedly deadly, there are only a handful of nuclear weapons states. Nuclear weapons require access to radioactive elements, which are expensive and closely regulated, and whose acquisition quickly draws the attention of other nations.

As with the nuclear arms race, states and non-state actors are accelerating research into AI applications, especially in the military realm. This could mean tasks as simple as analysing satellite data for changes over time, or much more involved AI programs that conduct autonomous warfare. AI technology is far more accessible than nuclear technologies. AI can be developed on an everyday laptop, a desktop computer, or even a smartphone. Internet access makes the process even simpler, as websites like Reddit and GitHub allow for crowdsourcing code.

AI is often compared to electricity – it is commonplace and far less resource-intensive than nuclear technology, but it enables a host of other tools and actions. How do you regulate electricity? Regulating AI is similarly tricky. While there are nuclear-specific and dual-use regulations, the task seems much more daunting, if not impossible, for AI technologies, as code can spread quickly before states or regulating authorities are even aware that it has been developed.

The unpredictability and unknowability of future AI systems

The true threat of AI is its unpredictability. A powerful machine learning algorithm can act in ways that humans cannot anticipate. This happened with the AI taught to play the game “Go,” when it made a move that humans had never before imagined. What happens when AI systems used in the military realm dream up scenarios or actions that have never occurred to humans?

AI stretching the bounds of human knowledge or capabilities is not as science fiction as it might sound. The algorithms involved are often modeled on the neural networks of the human brain: they process inputs and iteratively adjust the weights assigned to them to improve their accuracy. They adapt to the world and learn from their experiences, often in ways that humans cannot foresee or otherwise predict.
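To make the idea of "weighting inputs" concrete, the sketch below is a minimal, invented example (plain Python and NumPy, not any real military or commercial system). It trains a tiny two-layer network on the XOR pattern, a task no single linear rule can capture, by repeatedly nudging its weights to reduce its prediction error.

```python
import numpy as np

# Toy training data: the XOR pattern, which no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: combine the inputs using the current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the prediction error determines how each weight shifts.
    d2 = (out - y) * out * (1 - out)
    d1 = (d2 @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight to reduce the error slightly.
    W2 -= 0.5 * h.T @ d2
    b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1
    b1 -= 0.5 * d1.sum(axis=0)

print(out.round(2))  # predictions typically end up close to [0, 1, 1, 0]
```

The details matter less than the shape of the process: the program improves by adjusting numbers in response to its own errors, and the final weights rarely correspond to rules a human would have written down.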

Even systems developed for good can be used for ill intent. This was the case with a pharmaceutical company that developed an AI to help it identify and create potential drugs for “orphan diseases” – diseases so rare that drug companies rarely invest funds into curing or treating them. After a few ones and zeros were changed, that very same algorithm became a generator of new kinds of deadly chemical weapons – compounds that excel at killing human beings. Similarly, an AI created to identify pastries was shown to be effective at detecting cancer cells. It is therefore not hard to imagine something like a self-driving car AI being repurposed for drone warfare.
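Accounts of that drug-discovery episode describe the researchers inverting the model's toxicity objective so that predicted harm was rewarded rather than penalised. The toy sketch below, with invented scoring functions and candidates that are just numbers, illustrates how a single coefficient in an otherwise unchanged search loop can redirect it from the most benign candidate to the most harmful one.

```python
import math
import random

# Toy stand-ins for learned property predictors. In the reported case these
# were machine-learned models; the functions below are invented purely for
# illustration, and candidates are just numbers between 0 and 10.
def predicted_benefit(x):
    return math.exp(-(x - 3.0) ** 2)        # therapeutic value, best near x = 3

def predicted_toxicity(x):
    return 2.0 * math.exp(-(x - 8.0) ** 2)  # harm, highest near x = 8

def search(toxicity_weight, steps=20000):
    """Random search for the candidate with the best combined score."""
    best_x, best_score = None, float("-inf")
    for _ in range(steps):
        x = random.uniform(0.0, 10.0)
        # The entire 'repurposing' lives in this one coefficient:
        # -1 penalises predicted toxicity (drug discovery),
        # +1 rewards it (the weaponised variant).
        score = predicted_benefit(x) + toxicity_weight * predicted_toxicity(x)
        if score > best_score:
            best_x, best_score = x, score
    return best_x

print(round(search(-1.0), 1))  # settles near 3, the benign optimum
print(round(search(+1.0), 1))  # settles near 8, the most harmful candidate
```

Nothing about the search itself changes; only the direction in which it is pointed.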

Another form of unpredictability lies in the difficulty humans have in understanding how AI works. We have already seen accidents involving weapons placed in automated mode: in 2003, a US Patriot missile battery operating autonomously shot down a US Navy jet. The Pentagon attributed this, and other incidents, to “too much autonomy” given to the Patriot system. But who is to blame when AI makes a mistake? The developer? The user? Accountability is not a clear-cut decision.

And as the gap between end-user and developer increases, how many people will be able to repair their AI-powered systems in the field? Consider the difference between jumpstarting the battery in a gasoline-powered car and in a Prius or an all-electric vehicle. While many people know how to do the former, the latter is far more complex.

Most military members are not experts in nuclear science, but we at least know what a nuclear weapon is going to do each time it is deployed (explode). AI, based as it is on probabilistic decision-making, adds uncertainty and unknowability to its use. There will be a significant shift in warfare if and when humans become mere users of technology, as opposed to decision-makers.

AI as a force enabler

AI, like electricity, has many uses, with more still to be discovered. Could Thomas Edison or Nikola Tesla have imagined that electricity would help create computers, which could be miniaturised and carried around in our pockets? What might AI of the future look like?

On the one hand, AI and machine learning algorithms can be force enablers and force multipliers that allow militaries and small forces to achieve great effect. This could be in the realm of cyber hacking, using AI to find ways through firewalls and security. It could be using AI in drone technology to create killer robots. It could be applied to underwater vehicles to create autonomous submarines that pose an ever-present threat at sea. The ease of use and general accessibility of AI could be a leveling force, narrowing the gap between the most powerful and the least powerful, or at least allowing challenges to happen in new and interesting ways.

On the other hand, each development in AI will be met by a counter-development. AI is already being applied to undersea detection technologies that can identify stealthy submarines as they patrol the oceans. AI can also be used to modulate and adjust firewalls and other cybersecurity measures to decrease the likelihood of a breach. Predictive AI technologies could, much like in the movie Minority Report, be used to identify and respond to threats before they even happen.

So, what does all this mean for the dangers of artificial intelligence? There are a lot of unknowns, both in terms of how AI might be applied in the future and what types of regulations or prohibitions might be developed. The world reacted strongly to the creation of nuclear weapons. The Nuclear Non-Proliferation Treaty remains one of the most widely ratified and respected arms control treaties. Might there be a similar movement to regulate the development of AI, at least in the military domain?

Efforts are already underway both to understand military applications of AI technologies and to limit some of the more extreme uses of this technology. For now, though, AI stands as a tool rather than a single technology with a clear use. It can be applied across domains and to multiple ends, making it both an exciting and frightening emerging technology.

Thomas McDonough is a graduate student and research assistant at the University of New Hampshire studying sustainable development, human progress, and soft power.

Jennifer Spindel is an assistant professor of political science at the University of New Hampshire working on international security, foreign policy, and alliances. Her Twitter handle is @jenspindel.

This article is published under a Creative Commons License and may be republished with attribution.