How the Australian Federal Government Is Actually Using AI (Beyond the Press Releases)

The Australian government has been publishing AI strategies, frameworks, and principles for years. But what’s actually deployed? What AI systems are making decisions that affect Australians right now? The answers are more extensive than most people realise.

Services Australia: The Biggest AI User You Interact With

Services Australia, which administers Centrelink, Medicare, and other social services, is the government’s most active AI user. And for good reason: they process millions of transactions daily across dozens of programs.

Their AI deployments include fraud detection systems that flag suspicious claims patterns, document processing AI that extracts information from uploaded documents, and customer routing systems that direct inquiries to appropriate service channels.

The fraud detection is particularly interesting and controversial. After the Robodebt disaster, which used automated income matching (not AI in the modern sense, but an automated decision system), Services Australia has been more careful about automated compliance. Current AI systems flag potential issues for human review rather than automatically issuing debts. That’s the right approach, but it means the AI is only as good as the human review process behind it.
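The flag-for-review pattern described above can be sketched in a few lines. Everything here is hypothetical — the class, thresholds, and field names are illustrative, not Services Australia's actual system — but it shows the key design choice: divergent claims are queued for a human, and nothing in the code issues a debt.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    reported_income: float
    matched_income: float  # third-party figure, e.g. from an employer

def review_queue(claims, tolerance=0.10):
    """Flag claims where reported and matched income diverge beyond a
    tolerance. Flagged claims go to a human reviewer; nothing here
    issues a debt automatically."""
    flagged = []
    for c in claims:
        if c.matched_income == 0:
            continue  # nothing to compare against
        divergence = abs(c.reported_income - c.matched_income) / c.matched_income
        if divergence > tolerance:
            flagged.append((c.claim_id, round(divergence, 2)))
    return flagged
```

The contrast with Robodebt is structural: the output of this function is a work queue for a person, not an automated determination.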

The document processing AI is less controversial and genuinely helpful. Australians uploading identity documents, medical certificates, and other paperwork get faster processing because AI extracts key information and routes documents to the right teams. It’s the kind of unglamorous automation that improves everyone’s experience.

The ATO’s Quiet AI Program

The Australian Taxation Office has been deploying AI for longer than most people assume. Their compliance and risk assessment systems use machine learning to identify tax returns that warrant closer examination.

The sophistication is increasing. Current systems analyse patterns across millions of returns simultaneously, identifying anomalies that would be invisible to human auditors examining returns individually. The ATO reports that AI-assisted compliance activities have identified billions in additional revenue, though the exact attribution to AI versus traditional methods is fuzzy.
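The core idea — anomalies that are invisible return-by-return but obvious across the population — can be illustrated with a toy z-score check. This is a deliberately crude sketch with invented field names; the ATO's actual models are far richer, but the principle of comparing each return against population-level statistics is the same.

```python
import statistics

def flag_anomalies(returns, threshold=3.0):
    """Flag returns whose deduction-to-income ratio is a population
    outlier (simple z-score; production systems use far richer models).
    Output is a list of IDs for a human auditor, not a determination."""
    ratios = {rid: ded / inc for rid, (inc, ded) in returns.items() if inc > 0}
    mean = statistics.mean(ratios.values())
    stdev = statistics.pstdev(ratios.values())
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [rid for rid, r in ratios.items() if abs(r - mean) / stdev > threshold]
```

A human auditor examining any single return sees a plausible-looking deduction; only the comparison against millions of peers makes the outlier visible.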

What’s newer is the ATO’s use of AI for proactive compliance support. Rather than just catching problems after the fact, they’re building systems that identify taxpayers who are likely to make errors and provide targeted guidance before returns are lodged. It’s a genuinely helpful use of AI that benefits both the government and taxpayers.

Defence and Intelligence

The Department of Defence is investing heavily in AI, though specific deployments are predictably opaque. What’s publicly known includes: AI for processing satellite imagery, natural language processing for intelligence analysis, and autonomous systems for surveillance and reconnaissance.

The Defence Innovation Hub has funded dozens of AI projects through Australian companies. Several have progressed from research to operational testing. The AUKUS partnership adds another dimension, with AI collaboration between Australia, the UK, and the US creating opportunities for Australian defence AI companies.

The ethical dimensions are significant. Autonomous weapons systems are not publicly on Australia’s agenda, but the line between AI-assisted decision support and AI-driven decision-making in military contexts is not always clear. The Defence AI Centre is supposed to provide governance, but the details of that governance are not publicly available.

The Digital Transformation Agency’s Role

The DTA has been pushing for AI adoption across government while trying to establish guardrails. Their AI Assurance Framework provides guidance for government agencies considering AI deployment, covering risk assessment, bias testing, and transparency requirements.

The challenge is that DTA guidance is advisory, not mandatory. Individual agencies retain significant autonomy over technology decisions. This means AI governance varies dramatically across the public service. Some agencies have robust assessment processes. Others are deploying AI with minimal oversight.

The forthcoming mandatory AI guardrails for high-risk government AI should address this inconsistency. The question is whether the resources will follow the mandates. Government agencies with tight budgets and competing priorities may struggle to implement comprehensive AI governance without additional funding.

The State Government Picture

State governments are active AI users too, though capabilities vary significantly.

New South Wales has been the most aggressive, with AI deployments across transport (traffic management and public transport optimisation), health (clinical decision support in public hospitals), and policing (controversial predictive policing trials that were scaled back after public concern).

Victoria has focused AI investment on health and emergency services. Their ambulance dispatch AI, which uses historical data to predict demand and pre-position resources, has reportedly improved response times.
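The "predict demand, pre-position resources" idea reduces, at its simplest, to building a demand profile from historical call records. The sketch below is a hypothetical simplification (Victoria's actual system is not public in this detail): it averages calls per weekday-and-hour slot, then surfaces the busiest slots as a crude pre-positioning guide.

```python
from collections import defaultdict

def hourly_demand_profile(call_log, weeks_observed):
    """Average call volume per (weekday, hour) slot over the observation
    window. call_log holds one (weekday, hour) tuple per historical call;
    a real system would fold in location, weather, and event data."""
    counts = defaultdict(int)
    for weekday, hour in call_log:
        counts[(weekday, hour)] += 1
    return {slot: n / weeks_observed for slot, n in counts.items()}

def busiest_slots(profile, k=3):
    """Top-k demand slots: a rough guide for pre-positioning crews."""
    return sorted(profile, key=profile.get, reverse=True)[:k]
```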

Queensland is investing in AI for environmental monitoring, particularly in the Great Barrier Reef Marine Park Authority’s work on reef health assessment. Using computer vision to analyse underwater imagery and monitor coral health at scale is a genuinely innovative application.

The Transparency Problem

Here’s what concerns me. Despite the government’s stated commitment to transparent AI, it’s remarkably difficult to get comprehensive information about which AI systems are deployed, what decisions they influence, and what oversight mechanisms are in place.

There’s no public register of government AI systems. Freedom of information requests for AI-related documents are often heavily redacted. Parliamentary questions about specific AI deployments frequently receive vague answers.

This isn’t good enough. The government should maintain and publish a register of all AI systems used in decisions that affect individuals, including their purpose, the data they use, their known limitations, and the oversight mechanisms in place.
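To make the proposal concrete, here is one hypothetical shape such a register entry could take, carrying exactly the fields argued for above (purpose, data, limitations, oversight). The schema and example values are mine, not any existing government standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIRegisterEntry:
    """One entry in a hypothetical public register of government AI
    systems used in decisions that affect individuals."""
    system_name: str
    agency: str
    purpose: str
    data_sources: list
    known_limitations: list
    oversight: str            # e.g. "human review of all flagged cases"
    affects_individuals: bool = True

def to_public_json(entry):
    """Serialise an entry for publication on a public register."""
    return json.dumps(asdict(entry), indent=2)
```

Even a register this simple would answer the questions that FOI requests and parliamentary questions currently fail to: what is deployed, by whom, on what data, and under what oversight.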

Australians deserve to know when AI is involved in decisions about their tax affairs, social services, policing, and government interactions. Transparency isn’t just an ethical principle. It’s a prerequisite for accountability.