Opinion: Can We Please Stop Calling Everything AI?
I’m declaring a personal war on the misuse of the term “AI.” Not because I’m a pedant (well, partly), but because the debasement of the term has real consequences for businesses trying to make informed technology decisions.
What AI Actually Is
At its core, AI refers to systems that can learn from data and make decisions or predictions that weren’t explicitly programmed. The key word is “learn.” A system that follows rules written by a programmer isn’t learning. A system that identifies patterns in data and improves its performance based on those patterns is learning.
Machine learning models that predict customer churn based on behaviour patterns? AI. Natural language processing that understands the meaning of text rather than just matching keywords? AI. Computer vision that recognises objects in images? AI.
What AI Actually Isn’t
Here’s my growing list of things I’ve seen marketed as AI that aren’t.
Database queries. “Our AI analyses your sales data” often means “our system runs SQL queries and presents the results in a dashboard.” That’s database analytics. It’s useful. It’s not AI.
Rules engines. “AI-powered workflow automation” that follows if-then rules written by a human isn’t AI. It’s automation. Automation is great. But calling it AI is misleading. (The short sketch after this list shows how the two differ in code.)
Statistical calculations. Calculating an average, running a regression, or computing a correlation isn’t AI in any meaningful modern sense. Yes, linear regression is technically a machine learning technique. But calling a spreadsheet formula “AI” is stretching the definition past breaking point.
Keyword matching. Search engines that match keywords against a database aren’t AI. Even sophisticated keyword matching with synonym expansion and fuzzy matching isn’t AI unless there’s a learning component.
Pre-programmed responses. Chatbots that follow decision trees with pre-written responses aren’t AI. They’re interactive FAQ systems. Useful, sure. But the “intelligence” is the human who wrote the responses, not the system delivering them.
Manual processes with an AI label. The most egregious example: products marketed as AI-powered that actually have humans doing the work behind the interface. This isn’t just inaccurate. It’s fraud.
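To make the distinction concrete, here’s a minimal sketch of the same churn-flagging task done both ways. Everything in it, the scenario, the thresholds, the six training examples, is invented for illustration, and it assumes scikit-learn is installed.

```python
# A minimal sketch of the rules-vs-learning distinction, not a real system.
# The "churn" scenario, thresholds, and training data are all invented.

# 1. A rules engine: every decision path was written by a human.
def flag_churn_risk_rules(days_since_login: int, support_tickets: int) -> bool:
    # These thresholds encode a human's judgement, fixed at write time.
    return days_since_login > 30 or support_tickets > 3

# 2. A learned model: the decision boundary comes from data.
from sklearn.linear_model import LogisticRegression

X = [[5, 0], [40, 1], [2, 4], [60, 5], [10, 0], [35, 2]]  # [days_since_login, support_tickets]
y = [0, 1, 0, 1, 0, 1]                                     # 1 = customer churned

model = LogisticRegression().fit(X, y)

# The rules engine gives the same answer forever; the model's answer
# shifts if you retrain it on new behaviour patterns.
print(flag_churn_risk_rules(45, 0))   # True, because a human wrote "> 30 days"
print(model.predict([[45, 0]]))       # learned from the six examples above
```

The point isn’t that the toy model is better; trained on six examples, it certainly isn’t. The point is that only one of the two can change its answer without a programmer editing the code.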
Why the Distinction Matters
“Who cares what we call it? If it works, it works.”
I hear this argument often. Here’s why it’s wrong.
Pricing. AI commands a premium. Products labelled as “AI-powered” charge more than equivalent products without the label. If the AI isn’t real, customers are paying a markup for marketing, not capability.
Expectations. When a business buys an “AI solution,” they expect the system to improve over time, handle novel situations, and provide intelligent responses. If the underlying technology is rules-based automation, it won’t do any of those things. The expectation gap creates dissatisfaction and distrust.
Investment decisions. Companies and investors allocate capital based on AI claims. If those claims are inflated, capital flows to the wrong places. Genuine AI companies compete for funding against companies whose “AI” is a marketing strategy.
Policy. Government policy on AI, including regulation, funding, and workforce planning, depends on an accurate understanding of what AI is and where it’s deployed. When the definition is so broad that it includes everything from ChatGPT to a spreadsheet macro, policy can’t be targeted effectively.
Trust. Every time an “AI” product disappoints because it’s actually just automation with a fancy label, trust in genuine AI erodes. The boy-who-cried-wolf effect is real. Australian businesses burned by fake AI become sceptical of real AI, slowing adoption of technology that could genuinely help them.
A Practical Test
When someone tells me their product uses AI, I apply a three-question test.
Does the system learn from data without being explicitly programmed for each scenario? In other words, would it handle a situation the developers didn’t specifically anticipate, based on patterns learned from data?
Does the system improve over time as it processes more data? If performance is static regardless of how much data flows through it, there’s no learning happening.
Could the same functionality be achieved with if-then rules written by a human? If yes, it’s automation. If not, because the patterns are too complex to capture in hand-written rules, it’s genuinely AI.
Not every product needs a clean pass on all three questions to earn the AI label. But products that fail all three definitely shouldn’t.
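The second question is the easiest to demonstrate. Below is a hedged sketch, assuming scikit-learn’s SGDClassifier and a simulated stream of data; the batches, features, and hidden pattern are all made up. The learned model’s held-out accuracy moves as data accumulates, which is exactly what a static rules engine cannot do.

```python
# A sketch of the second question: does performance change as more data
# arrives? The data stream and its hidden pattern are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

def make_batch(n=50):
    # Stand-in for production data arriving over time.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the hidden pattern to learn
    return X, y

X_test, y_test = make_batch(1000)
for batch in range(5):
    X, y = make_batch()
    model.partial_fit(X, y, classes=[0, 1])
    # Held-out accuracy typically rises as batches accumulate; score a
    # fixed rules engine the same way and the number never moves.
    print(f"after batch {batch + 1}: {model.score(X_test, y_test):.2f}")
```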
What I’d Like to See
Honest labelling. Call automation “automation.” Call analytics “analytics.” Call AI “AI.” Each is valuable in its own right. None needs to pretend to be something it isn’t.
Technical specificity. Instead of “AI-powered,” tell me what the AI actually does. “Uses a gradient-boosted model trained on historical transaction data to predict fraud likelihood.” That’s specific enough to evaluate and too specific to fake.
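Here is what that sentence might look like reduced to code, as a minimal sketch rather than a real fraud system: it assumes scikit-learn’s GradientBoostingClassifier, and the features and labels are random stand-ins, not actual transaction data.

```python
# A minimal sketch of the specific claim above. The data is random noise
# standing in for "historical transaction data"; nothing here is real.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Three numeric features per transaction (e.g. amount, hour of day,
# count of recent transactions) -- invented for illustration.
X = rng.normal(size=(500, 3))
y = (rng.random(500) < 0.05).astype(int)  # ~5% labelled fraudulent

model = GradientBoostingClassifier().fit(X, y)

# "Predicts fraud likelihood": a probability per transaction,
# not a yes/no from a hand-written threshold.
new_transactions = rng.normal(size=(3, 3))
print(model.predict_proba(new_transactions)[:, 1])
```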
Regulatory standards. As AI regulation develops in Australia, the definition of AI should be specific enough that companies can’t slap the label on anything with a computer chip.
The AI opportunity for Australian businesses is real and significant. But that opportunity is best served by clarity, not confusion. When everything is AI, nothing is AI. And that helps nobody.