Artificial Intelligence in Modern Warfare: Ethics, Autonomy, and Global Security

Integrating artificial intelligence (AI) and autonomous systems into modern warfare reshapes global security and raises urgent ethical, legal, and strategic questions. As autonomous drones, surveillance technologies, and AI-driven decision-making tools become more prevalent in military operations, the absence of human oversight and the risks of unintended consequences make the case for robust international regulation increasingly compelling.

The Australian Institute of International Affairs (AIIA) hosted a seminar to address these challenges with Dr. Peter Layton, Visiting Fellow at the Griffith Asia Institute and an expert in grand strategy, defence policy, and emerging technologies. Dr. Layton provided an in-depth analysis of the growing use of Autonomous Weapons Systems (AWS), their implications for International Humanitarian Law (IHL), and the urgent need for ethical governance.

Dr. Layton stressed that AWS are no longer hypothetical but a present reality. Over 90 countries—including the United States, China, and Russia—are actively developing or deploying autonomous weapons, including Lethal Autonomous Weapons Systems (LAWS). These systems can identify and strike targets without human input. While they may improve battlefield efficiency and reduce human casualties, they raise profound ethical dilemmas, particularly concerning distinction and proportionality, core principles of IHL. Under IHL, the principle of distinction requires combatants to differentiate between legitimate military targets and civilians. The principle of proportionality prohibits attacks if the expected civilian harm would be excessive compared to the direct military advantage gained.

A key concern is that autonomous systems lack the capacity for nuanced ethical judgment. They cannot reliably determine when the use of force is appropriate in complex, rapidly changing situations. This increases the risk of misidentifying civilians as combatants, an outcome that undermines the basic humanitarian norms of warfare.

Dr. Layton also drew attention to the accountability gap in autonomous warfare. AI systems operate as “black boxes,” making it difficult to trace how and why decisions are made. If an autonomous weapon violates international law, it is often unclear who should be held responsible: the developer, the commander, the programmer, or the state itself. This lack of clarity weakens the foundations of justice and deterrence.

Without clear accountability, states may feel emboldened to develop and deploy AWS without sufficient safeguards. Dr. Layton warned that this could destabilise existing legal frameworks and increase the likelihood of unlawful actions going unchecked.

The issue of algorithmic bias further complicates the ethical landscape. AI systems are only as neutral as the data on which they are trained. Dr. Layton explained that biases embedded in training data can result in discriminatory outcomes, especially in high-stakes conflict zones where AI is expected to make life-or-death decisions. These systemic flaws can erode trust and lead to severe human rights violations.

Another alarming risk is algorithmic escalation. AI systems function based on pre-set rules and lack contextual understanding. Dr. Layton cautioned that such systems might unintentionally misinterpret ambiguous situations and escalate conflicts. In tense environments, autonomous systems could react to perceived threats with disproportionate force, potentially sparking unplanned retaliation or war. This danger is magnified in high-stakes scenarios where rapid decisions are required.

Dr. Layton cited real-world examples of autonomous technologies acting on limited data, selecting targets based solely on patterns. Without human judgment, such decisions can have devastating consequences. The increasing autonomy of these systems introduces an unpredictable element to warfare that challenges the stability of traditional strategic deterrence.

He further warned that AWS might deepen geopolitical inequality. Advanced military AI gives technologically dominant states an asymmetric advantage, potentially marginalising less-equipped countries. This could trigger autonomous arms races, with states competing to develop increasingly advanced AWS in fear of falling behind.

Yet, amid these concerns, Dr. Layton also identified a path forward. He called for legally binding international treaties to limit the development and deployment of AWS. Drawing on diplomatic tools and multilateral engagement, he argued that international cooperation is essential to setting enforceable boundaries and avoiding an unregulated AI arms race.

He suggested that maintaining meaningful human control over all uses of force is a crucial safeguard. Dr. Layton advocated for “human-in-the-loop” and “human-on-the-loop” models, where humans retain oversight and decision-making authority in critical moments. These models can help uphold ethical standards, even in technologically complex scenarios.

The seminar concluded with a call to action for policymakers, technologists, legal experts, and civil society. Dr. Layton underscored that the future of AI in warfare is not inevitable. Choices made now—about regulation, ethical design, and multilateral governance—will determine whether these technologies contribute to peace or undermine it.

He closed on a cautiously optimistic note: while the risks of autonomous warfare are real and growing, they are not insurmountable. With thoughtful governance, robust legal frameworks, and sustained global cooperation, the international community can mitigate the dangers of AWS and ensure that future conflicts remain governed by principles of humanity, accountability, and restraint.

Edited by Deborah Bouchez


Written by Kiseki Fujisawa

Currently in her final year of a Juris Doctor at Griffith University, Kiseki Fujisawa is passionate about international law and global affairs. With a strong interest in the intersection of law and diplomacy, Kiseki aspires to promote justice and peace worldwide through legal and diplomatic efforts.