Regulating the Third Revolution in Warfare

18 Oct 2018
By Professor Toby Walsh FAAS
Professor Toby Walsh at the 2018 AIIA National Conference on 15 October (Credit: Lauren Skinner, former AIIA intern)

The world will be a much worse place if, in 20 years’ time, lethal autonomous weapons are commonplace and there are no laws about these weapons.

Three years ago, I helped organise an open letter, signed by thousands of my colleagues and other experts in AI, warning of an arms race to develop what the media call “killer robots” but the UN calls “lethal autonomous weapons”, or LAWS. That arms race has now started.

The problem with calling them killer robots is that this conjures up a picture of Terminator and technologies that are a long way away. But it is not Terminator that worries me, or indeed thousands of my colleagues. It is much simpler technologies that are, depending on your perspective, at best or at worst less than a decade away. It is stupid AI that I fear right now: we will be giving machines that are not sufficiently capable the right to make life-or-death decisions.

Take a Predator drone. This is a semi-autonomous weapon. It can fly itself much of the time. However, there is still a soldier in overall control. And importantly, it is still a soldier who makes the final life-or-death decision to fire one of its Hellfire missiles.

But it is a small technical step to replace that soldier with a computer. Indeed, it is technically possible today. And once we build such simple autonomous weapons, there will be an arms race to develop more and more sophisticated versions.

The world will be a much worse place if, in twenty years’ time, such lethal autonomous weapons are commonplace and there are no laws about LAWS. This will be a terrible development in warfare. But it is not inevitable. In fact, we get to choose in the next few years whether we go down this particular road.

The attractions of autonomous weapons to the military are obvious. The weakest link in a Predator drone is the radio link back to base. Indeed, drones have been sabotaged by jamming their radio link. So if you can have the drone fly, track and target itself, you have a much more robust weapon.

A fully autonomous drone also lets you dispense with a lot of expensive drone pilots. The United States Air Force could be renamed the United States Drone Force: it has more drone pilots than pilots of any other type of plane. By 2062, it will have not just more drone pilots than pilots of any other type of plane, but more drone pilots than all other pilots put together. And whilst those drone pilots aren’t risking their lives on combat missions, they suffer post-traumatic stress disorder at similar rates to the Air Force’s other pilots.

Autonomous weapons offer many other operational advantages. They don’t need to be fed or paid. They will fight 24/7. They will have super-human accuracy and reflexes. They will not need evacuating from the battlefield. They will obey every order to the letter. They will not commit atrocities or violate international humanitarian law. They would be perfect soldiers, sailors and pilots.

Strategically, autonomous weapons are also a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds, even thousands, of autonomous weapons. This will industrialise warfare. Autonomous weapons will greatly increase strategic options. They will take humans out of harm’s way, opening up the opportunity to take on the riskiest of missions.

There are, however, many reasons why this military dream will have become a nightmare by 2062. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand over to a machine the decision of whether someone should live or die. Certainly today, machines have no emotions, compassion or empathy. Are machines then fit to decide who lives and who dies?

Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. In my view, one of the strongest reasons for a ban is that they will revolutionise warfare. In fact, they have been called the third revolution in warfare.

The first revolution was the invention of gunpowder by the Chinese. The second was the invention of nuclear weapons by the United States. Lethal autonomous weapons will be the third. Each was a step change in the speed and efficiency with which we could kill.

Autonomous weapons will be weapons of mass destruction. Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them and pay them.

Now just one programmer could control hundreds or even thousands of weapons. Like every weapon of mass destruction before them – chemical weapons, biological weapons and nuclear weapons – lethal autonomous weapons will need to be banned.

Lethal autonomous weapons are more troubling, in some respects, than nuclear weapons. To build a nuclear bomb requires technical sophistication. You need the resources of a nation-state and access to fissile material. You need some skilled physicists and engineers. Nuclear weapons have not, as a result, proliferated greatly. Autonomous weapons will require none of this.

Lethal autonomous weapons will be perfect weapons of terror. Terrorists and rogue states will have no qualms about turning them on civilians. They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.

There are some who claim that robots can be more ethical than human soldiers. It is, in my view, the most interesting and challenging argument for autonomous weapons. But it ignores that we don’t know today how to build autonomous weapons that will follow international humanitarian law.

I expect that we will eventually work out how to build ethical robots. However, we won’t be able to stop such weapons from being hacked to behave in unethical ways. And at the strategic level, lethal autonomous weapons pose new threats that might destabilise current stand-offs like that between North and South Korea. They threaten to upset the current balance of military power. You would no longer need to be an economic super-power to maintain a large and deadly army. It would only take a modest bank balance to have a powerful army of lethal autonomous weapons.

This doesn’t mean that lethal autonomous weapons can’t be banned. Chemical weapons are cheap and easy to produce, yet they have been banned. Nor do we need to develop autonomous weapons as a deterrent against those who might ignore a ban; we don’t develop chemical weapons to deter those who might use chemical weapons. We already have plenty of deterrents – military, economic and diplomatic – to use against those who choose to ignore international treaties.

So how do we begin the process of regulating lethal autonomous weapons? We must first decide that it is unacceptable to use them to kill people.

Professor Toby Walsh FAAS is the Scientia Professor of Artificial Intelligence at UNSW. He was a speaker at the 2018 AIIA National Conference on 15 October.

This is an edited extract from Toby Walsh’s book ‘2062: The World that AI Made’, published by Black Inc. in September 2018.

This article is published under a Creative Commons Licence and may be republished with attribution.