Australia’s defence debate is no longer centred primarily on ships, aircraft or missiles. The discussion has shifted to the software, data and AI systems embedded within them.
As artificial intelligence, autonomous systems and digital infrastructure move to the centre of modern warfare, the Australian Defence Force faces the challenge of preserving operational independence while the critical digital infrastructure underpinning its operations, including software, cloud services and data pipelines, is built, updated or governed overseas.
For two decades, Australia has been a deeply embedded partner within the United States-led security architecture. This partnership has delivered real advantages: access to advanced military technologies and interoperable systems, intelligence cooperation few other countries can match, and platforms such as the F-35 that no mid-sized power could realistically develop or sustain alone. However, this partnership has also produced a less visible dependency; Australian forces today operate through foreign software, proprietary data systems and digital architectures that often cannot be independently audited, modified or governed within Australia.
From Hardware to Software: The New Meaning of Defence Sovereignty
Software-defined defence, where a platform’s effectiveness depends as much on its code, data feeds and algorithmic systems as on its physical design, reshapes what dependence means. A platform may be Australian-operated, but the data pipelines feeding it and the software updates sustaining its performance may sit outside full national control. This is already the case with the F-35’s Mission Data Files, which are managed by the United States and cannot be independently modified by Australia. Restricted access, delayed updates or shifting priorities inside an allied government or private technology provider could narrow operational flexibility in a crisis, leaving Australia dependent on foreign decisions at a moment when strategic autonomy matters most. Owning the airframe is no longer enough.
The 2024 National Defence Strategy acknowledges this broader challenge by framing the transformation of the ADF into a leaner, more technologically integrated force capable of operating across multiple domains simultaneously as essential to national security. It also emphasises the need to develop sovereign capability alongside integrated targeting and advanced technologies, including AI-enabled systems, cyber capabilities and autonomous platforms. The capacity to govern, audit and adapt the digital systems underpinning national defence is no longer an industrial preference; it is a national security requirement.
The MQ-28A Ghost Bat is a clear expression of this shift. Developed by Boeing Australia with the Royal Australian Air Force (RAAF), it is an Australian-designed uncrewed aircraft intended to support the integration of autonomous systems and AI into human-machine teaming. Boeing describes it as the first military aircraft designed, manufactured and flown in Australia in more than 50 years, while the RAAF calls it a pathfinder for autonomous teaming, where uncrewed aircraft operate alongside human-piloted jets, extending their reach and reducing risk to personnel. Its value lies not only in the aircraft itself, but in what the program represents: a domestic effort to build expertise in autonomy, AI integration and the systems engineering that future air combat will demand.
But platform programs will not deliver sovereignty on their own. Modern military effectiveness depends on control over the full stack: data pipelines, decision-support models, cloud services and the software ecosystems that connect them. The critical question being asked by policymakers and researchers alike is how much of this stack Australia can independently govern. The answer today is uncomfortable: Australia governs far less of this stack than its platform investments suggest, and the gap is widening as systems become more complex and more deeply embedded in allied digital ecosystems.
Governing the Algorithm: Autonomy, AI and What Comes Next
The question of whether Australia can maintain strategic independence matters more as Australian and allied defence research moves into brain-computer interfaces (technologies that allow soldiers to control systems directly through neural signals), autonomous teaming and AI-assisted intelligence analysis. These technologies remain immature, but their direction is clear. A University of Technology Sydney project involving the Australian Army has explored how brain-computer interface technology could help soldiers interact with robotic systems during tactical missions. Separately, the Australian Army’s Robotic and Autonomous Systems Strategy has identified human-machine teaming as a core element of ADF modernisation. These developments point toward a future in which computational and cognitive advantage will count for at least as much as the size of the fleet.
The implications extend beyond military capability into questions of governance and accountability. As AI systems become embedded in targeting decisions, procurement processes and operational planning, Australia needs sovereign oversight not simply to maintain effectiveness, but to ensure these systems remain aligned with Australian law and democratic values. A force that cannot audit or govern its own algorithms is one whose decision-making may ultimately be shaped by foreign priorities.
AUKUS Pillar II adds another layer to this dependency challenge. It concentrates advanced capability cooperation in areas such as artificial intelligence, cyber, quantum technologies and autonomous systems, giving Australia access to deeper collaboration with the United States and the United Kingdom. It also risks deepening long-term dependence on allied technological ecosystems. The right policy frame is not a binary choice between integration and isolation. It is managed interdependence: working closely with allies while retaining the technical capacity to govern, audit and adapt the systems Australia fields.
Australia’s Investment Strategy: National Security
In this environment, sovereign AI capability is best understood not as a niche technical concern but as a foundation of national security, as essential to strategic autonomy as control over physical territory or critical infrastructure. Countries without meaningful control over their data infrastructure, software systems and AI architectures will face constraints on their strategic autonomy that no platform purchase can fix, because the vulnerability lies not in what Australia owns but in who controls the systems that make those platforms effective.
Australia’s investment in sovereign defence technologies is ultimately a national security decision, not an industry policy one. Three arguments have driven this piece: that software-defined warfare has fundamentally changed what dependency means; that platform acquisition alone cannot deliver strategic autonomy when the digital infrastructure underpinning those platforms remains outside Australian control; and that alliance cooperation, however essential, must be structured to preserve, rather than gradually erode, the capacity to govern Australia’s own systems.
In the next generation of warfare, countries that cannot govern the algorithms underpinning their defence systems may ultimately struggle to govern their own strategic choices.
Muhammad Amir is a PhD researcher at Deakin University, specialising in international relations and security studies. His research focuses on peace processes, strategic competition, defence policy, and emerging technologies.
This article is published under a Creative Commons License and may be republished with attribution.