Australia’s productivity debate is stuck on hours and wages. The bigger lever is regulatory clarity for AI—paired with a practical plan to scale adoption across the economy.
This week’s Economic Reform Roundtable is the moment to move from discussion papers to delivery. Recent headlines risk missing the point. A four-day week makes for lively debate, but it is AI diffusion—into factories, clinics, classrooms, and back offices—that will determine whether Australia gets more output from the same hours worked. Aaron Violi MP is right about one thing that should be uncontroversial: businesses need a map. Rules that are predictable, technology-neutral, and risk-based reduce the “policy risk premium” that keeps boards from green-lighting AI investments in skills, data, and compute. The question isn’t whether to act; it’s how fast Canberra can give businesses a usable map.
The problem isn’t a lack of ambition—it’s fragmentation
Australia’s AI activity is real and growing, but it is scattered. Agencies publish their own AI roadmaps, while firms cite uncertainty, talent bottlenecks, and access to compute as the reasons adoption lags. In this environment, even capable companies sit on their hands. A national strategy that aligns regulation, procurement, skills, and infrastructure is the cheapest productivity policy on offer.
Competing instincts in Canberra are understandable: one camp emphasises productivity and proportionality; another stresses worker protections and institutional voice. A good strategy reconciles both. Risk-tiered regulation protects people where harms are plausible and proven, while letting low-risk uses—coding assistants, document classification, demand forecasting—scale quickly. For a start, the strategy should commit to supporting workers through the AI transition: funded reskilling and upskilling for at-risk roles, portable benefits for contractors, and a standing tripartite forum to monitor workplace impacts. That is not “light touch”; it is smart touch, prioritising both productivity and fairness.
Defence innovation matters, but productivity gains come from diffusion in civilian sectors: mining, agriculture, logistics, healthcare, education, and services. If we treat AUKUS as our “tech strategy,” we will over-index on export controls and under-invest in adoption. The roundtable should therefore commit to a civilian AI delivery plan with dates, agencies, and budgets attached—and a public dashboard to track take-up and measured gains.
Governance, adoption and selective engagement
Australia isn’t a first-tier AI power, but it has a distinctive advantage: rules that travel. The 2025 Stanford AI Index places Australia mid-pack on AI—28th of 36 nations on its Global AI Vibrancy weighted index and 30th on economic competitiveness—evidence of underpowered commercialisation. By contrast, Australia performs better on policy and governance. On broader digital foundations, it ranks 15th of 67 in IMD’s 2024 Digital Competitiveness Ranking. Independent analysis of Stanford’s tool suggests Australia sits around the top ten on the Policy & Governance sub-index on a per-capita basis (but lags on economic competitiveness), underscoring that its rules and institutions are running ahead of its investment and diffusion. Australia has also adopted ISO/IEC 42001:2023 and aligned with the NIST AI Risk Management Framework—interoperable without importing the EU’s process load.
The task now is to weld that governance advantage to an economic one: plug compute and talent gaps and accelerate adoption in trade-exposed, high-productivity sectors. This advantage positions Australia to be a standards-setter and convenor in the Indo-Pacific—helping neighbours build capacity, pushing for interoperable rules, and lowering cross-border compliance costs for firms operating here.
Australia cannot outspend the United States or out-scale China. Its edge is credibility: dependable rules, testable safety, high-quality data, and professional services. That edge grows when the country stays open where it is safe and closed where it is not. A “selective engagement” posture—standards cooperation, green-tech optimisation, health analytics—paired with enforceable research security reduces risk while preserving learning channels that keep its capabilities current.
What success looks like
The map, in practice, is simple: risk-tiered rules, government acting as the lead customer, and targeted support for compute, data, and skills. Do that—and investment follows.
This isn’t a theoretical exercise in “future tech.” The global AI race is already reshaping capex, standards, and supply chains. The contrast between US frontier-model dominance (GPT-5-class systems) and China’s push to scale open-source models (such as DeepSeek) and applications is setting de facto rules and procurement preferences across markets we trade with. For Australia, staying vague is the costly choice.
If the roundtable wants productivity, commit to a short, doable program that converts intent into adoption:
- Legislate a risk-tiered AI law—fast
Pass a brief, technology-neutral act that (i) bans a narrow set of clearly high-risk uses, (ii) requires assurance for “material-risk” deployments in health, finance, and safety, and (iii) leaves low-risk uses to guidance and existing law. Name a single national coordinator across privacy, safety, and competition. Map guidance to international standards to avoid duplicate compliance.
- Make government the first at-scale customer
Publish a dozen or so priority use cases (claims, benefits integrity, planning approvals, contact-centre augmentation), run time-boxed pilots with red-teaming and secure data access, then convert the winners into multi-year procurements with clear KPIs and audit trails.
- Build the plumbing: compute, data, skills
Stand up an Australian compute & data facility giving SMEs metered access to secure compute, curated datasets and model evaluation. Pair it with accelerated depreciation for AI-enabling capex, modular micro-credentials through TAFE/universities, and a fast-track visa lane for AI engineers and product managers.
- De-risk SME adoption
Launch an AI adoption voucher scheme via accountants and industry associations for discovery, data-readiness, and pilot build-outs. Provide baseline safety and privacy templates so small firms start compliant.
- Set collaboration guardrails—especially with China
Replace ad-hoc bans with a rules-first framework: a whitelist of low-risk research domains; due diligence for higher-risk work; auditable data handling; standard IP templates; and rapid off-ramps when thresholds are exceeded.
None of this needs a moonshot—or culture wars. It needs clarity, consistency, and craft. Give businesses an actionable map, and the productivity story will write itself through thousands of confident decisions by firms that know where the cliff edges are—and where the track runs.
Dr. Marina Yue Zhang is an associate professor at the Australia-China Relations Institute, University of Technology Sydney (UTS: ACRI). Prior to this position, Marina worked for UNSW in Australia and Tsinghua University in China. Marina holds a bachelor’s degree in biological science from Peking University, an MBA and a PhD from Australian National University. Marina’s research interests cover China’s innovation policy and practice, latecomers’ catch-up, emerging and disruptive technologies, and network effects in digital transformation.
This article is published under a Creative Commons License and may be republished with attribution.