Australia Needs an AI Agent Security Framework Before It's Too Late
We’re deploying AI agents into Australian businesses at an extraordinary pace, and we’re doing it without any meaningful security framework. That should concern everyone who works in technology, policy, or regulation.
AI agent platforms let businesses deploy autonomous software that can read emails, query databases, interact with customers, execute transactions, and integrate with third-party systems. Unlike traditional software, these agents make decisions without direct human oversight for each action. That’s what makes them powerful. It’s also what makes them dangerous when they’re compromised or poorly configured.
The problem is that we’re treating AI agents like we treat standard software deployment, and that’s fundamentally wrong. They need a different regulatory and policy approach, and Australian regulators need to develop it before a major incident forces their hand.
Why Traditional Security Frameworks Don’t Apply
When you deploy a traditional application, you know what it does. The code is deterministic. If you run it with the same input twice, you get the same output. Security assessments focus on identifying vulnerabilities in known code paths, validating inputs, and controlling access to resources.
AI agents don’t work that way. They use large language models or other AI systems to interpret instructions and generate actions. The same prompt can produce different outputs. The range of possible actions is vast, and pre-testing every scenario isn’t feasible. An agent given access to a database might generate SQL queries that weren’t anticipated during security review. An agent connected to Slack might send messages that breach confidentiality.
Traditional application security assumes you can audit what software will do. AI agents require auditing what software might do, which is a fundamentally harder problem. Our current frameworks and regulations weren’t built for that distinction.
The Marketplace Problem
Most AI agent platforms have skill marketplaces where developers publish extensions that add capabilities. OpenClaw has ClawHub. Other platforms have similar ecosystems. These marketplaces are modelled on app stores, but they don’t have comparable vetting processes.
Apple’s App Store, for all its flaws, performs security reviews on submissions. Skills published to AI agent marketplaces often face no meaningful vetting at all. Anyone can publish, and enterprises are installing them without independent review.
Recent research from CSIRO’s Data61 has highlighted the systemic risks in these marketplaces, including malicious code designed to exfiltrate data and establish persistence. But there’s no regulatory requirement for marketplace operators to screen submissions, no standard for security disclosure, and no liability framework for distribution of malicious skills.
If an Australian company installs a malicious skill that causes a data breach, who’s responsible? The marketplace operator? The skill developer? The company that installed it? Australian law isn’t clear, and that ambiguity is a problem.
What Australia Should Learn from Other Jurisdictions
The European Union’s AI Act includes requirements for high-risk AI systems that would apply to many agent platforms used in sensitive domains. Those requirements include documentation, risk assessment, human oversight, and security by design.
The UK’s National Cyber Security Centre has published guidance on securing AI systems, including specific recommendations for generative AI applications that apply to agent platforms. Australia has nothing comparable.
The Tech Council of Australia has called for industry-led standards, which is valuable, but voluntary standards won’t cover the full ecosystem. We need regulatory guidance at minimum, and potentially hard requirements for certain use cases.
Where Australian Regulators Should Focus
Not every AI agent deployment needs heavy regulation. An agent that helps employees schedule meetings doesn’t carry the same risk as an agent that processes financial transactions. The framework needs to be risk-based, and that means defining risk categories.
High-risk deployments should include AI agents that process personal information at scale, access financial systems, make decisions affecting individual rights, or operate in regulated industries like healthcare and finance. These deployments should require security assessments, documentation of controls, and incident response capabilities.
Medium-risk deployments might include agents that access internal business systems but don’t process sensitive personal information. These could operate under lighter-touch requirements like security checklists and self-certification.
Low-risk deployments would be internal tools with limited access and low potential for harm. These might not need specific regulation beyond existing cybersecurity obligations.
The Office of the National Data Commissioner and the Australian Cyber Security Centre should jointly develop this risk framework. They have the technical expertise and regulatory authority to make it credible.
The Skills Marketplace Question
Regulators should decide whether skills marketplaces are distributors of software (with minimal liability) or publishers (with higher liability for what they distribute). That legal distinction matters enormously.
If marketplaces are distributors, they need clear safe harbour provisions that protect them when they act on notice of malicious content. If they’re publishers, they have a duty to vet what they distribute, and they face liability for failures in that vetting.
I’d argue for a hybrid model. Marketplaces should be required to implement basic automated security scanning (static analysis, known vulnerability detection) as a condition of operation. That’s the minimum bar for responsible distribution. Beyond that, they should have safe harbour for user-submitted content if they respond promptly to security reports.
The OWASP Foundation has developed security testing standards for applications that could be adapted to agent skills. Australian regulators should reference these standards and make them mandatory for marketplace operators above a certain scale.
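To make the scanning requirement concrete, here is a minimal sketch of the kind of automated pre-publication check a marketplace operator could run over a submitted skill. The skill layout, manifest file, and pattern list are illustrative assumptions, not any platform's actual format; a production scanner would pair real static analysis with a maintained vulnerability feed.

```python
import re
from pathlib import Path

# Crude indicators of data exfiltration or persistence. This is only a sketch
# of the minimum automated bar described above, not a complete scanner.
SUSPICIOUS_PATTERNS = [
    re.compile(r"requests\.post\(\s*['\"]http"),   # unexplained outbound POSTs
    re.compile(r"subprocess\.(run|Popen)"),        # shelling out from a skill
    re.compile(r"crontab|systemd|LaunchAgents"),   # persistence mechanisms
    re.compile(r"base64\.b64decode"),              # obfuscated payloads
]

def scan_skill(skill_dir: str) -> list[str]:
    """Return findings for a skill package before it is listed."""
    findings = []
    root = Path(skill_dir)
    # Assumed convention: each skill declares its permissions in a manifest.
    if not (root / "manifest.json").exists():
        findings.append("missing manifest.json: no declared permissions")
    for path in root.rglob("*.py"):
        text = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path.name}: matches {pattern.pattern}")
    return findings

if __name__ == "__main__":
    for finding in scan_skill("./example-skill"):
        print("FLAG:", finding)
```

Even a check this basic would catch the crudest malicious submissions, and it gives regulators something auditable: either the marketplace ran the scan before listing the skill, or it didn't.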
Data Residency and Sovereignty
Many AI agent platforms operate on US or European infrastructure. When an Australian business deploys an agent, where does the data live? Where is it processed? Which jurisdiction’s laws apply?
The Privacy Act requires reasonable steps to protect personal information, and the Consumer Data Right framework includes data security requirements. But neither specifically addresses AI agents that process Australian data offshore.
Australian regulators should clarify whether AI agent platforms processing personal information must use Australian-based infrastructure or meet specific cross-border data flow requirements. The ambiguity is causing confusion, and some Australian businesses are making deployment decisions without fully understanding their legal obligations.
What Businesses Should Do Right Now
Don’t wait for regulation. If you’re deploying AI agents in Australia, you should already be:
Conducting risk assessments before deployment, specifically assessing what harm could occur if the agent is compromised or behaves unexpectedly.
Implementing access controls that limit what agents can do and what data they can access, using the principle of least privilege.
Monitoring agent activity with logging and alerting for unusual patterns, just as you would for privileged user accounts. A minimal sketch combining this with least-privilege access follows this list.
Vetting third-party skills before installation, ideally with code review, but at minimum with security scanning and reputation checks.
Documenting your approach so you can demonstrate to regulators that you’ve taken reasonable steps to secure agent deployments.
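To make the access-control and monitoring points concrete, here is a minimal sketch of a least-privilege tool allowlist with audit logging wrapped around an agent's actions. The tool names, the ToolPolicy class, and the dispatch function are illustrative assumptions rather than any vendor's API.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Explicit allowlist: this agent can read customer records and draft emails,
# but cannot delete data or execute payments.
ALLOWED_TOOLS = {"crm.read_record", "email.draft"}

class ToolPolicy:
    """Enforce least privilege and log every tool call an agent attempts."""

    def __init__(self, allowed: set[str]):
        self.allowed = allowed

    def invoke(self, tool_name: str, caller: str, **kwargs):
        timestamp = datetime.now(timezone.utc).isoformat()
        if tool_name not in self.allowed:
            # Denials are logged and surfaced, just as a blocked action on a
            # privileged user account would be.
            audit_log.warning("%s DENY %s tool=%s args=%s",
                              timestamp, caller, tool_name, kwargs)
            raise PermissionError(f"{tool_name} is not permitted for {caller}")
        audit_log.info("%s ALLOW %s tool=%s args=%s",
                       timestamp, caller, tool_name, kwargs)
        return dispatch(tool_name, **kwargs)

def dispatch(tool_name: str, **kwargs):
    # Placeholder for the real tool implementations.
    return {"tool": tool_name, "args": kwargs}

if __name__ == "__main__":
    policy = ToolPolicy(ALLOWED_TOOLS)
    policy.invoke("crm.read_record", caller="support-agent", record_id="A-1042")
    try:
        policy.invoke("payments.execute", caller="support-agent", amount=5000)
    except PermissionError as err:
        print("Blocked:", err)
```

The design point is that every tool call, allowed or denied, leaves an audit trail that can be monitored and alerted on the same way privileged account activity already is.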
These aren’t optional extras. They’re baseline security practices that every organisation should implement, and they’ll likely become regulatory requirements within the next 12-24 months.
The Window Is Closing
Australian regulators have a brief window to develop a sensible AI agent security framework before a major incident forces reactive, heavy-handed regulation. That window is closing fast.
Adoption is accelerating. The security risks are real and documented. The regulatory ambiguity is creating confusion and enabling poor practices. We need action, not more discussion papers.
The Office of the National Data Commissioner, ASIC, ACCC, and the Cyber Security Minister should coordinate on this. The framework should be risk-based, practical, and aligned with international approaches where possible. It should address marketplace liability, data residency, security standards, and incident reporting.
And it should happen in 2026, not 2027. Because if we wait until after a major breach exposes the personal data of millions of Australians, the policy response will be driven by crisis, not by careful consideration of what actually works.
We have the opportunity to get ahead of this. Let’s not waste it.