How Australian Companies Are Actually Measuring AI ROI (Spoiler: Most Aren't)


I conducted an informal survey over the past two months, asking thirty Australian companies of various sizes how they measure the return on their AI investments. The results confirm something I’ve suspected for a while: most Australian companies have no rigorous way of measuring whether their AI spending is actually generating value.

The Findings

Of the thirty companies surveyed, spanning financial services, healthcare, retail, mining, and professional services, here’s what I found.

Seven had formal ROI measurement frameworks for their AI investments with quantified metrics, baseline comparisons, and regular reporting.

Eleven had informal assessments: they could point to improvements they attributed to AI but couldn’t quantify them precisely or isolate the AI contribution from other changes.

Twelve had no ROI measurement at all. They were spending money on AI, had a general sense that it was helpful, but couldn’t demonstrate that it was worth the investment.

For a technology category that’s absorbing significant corporate budgets, that’s a sobering finding.

Why Measurement Is Hard

There are legitimate reasons why AI ROI is difficult to measure.

Attribution complexity. If you implement an AI-powered demand forecasting tool and your inventory costs drop, how much of that improvement is due to the AI versus the process changes that accompanied its implementation? Isolating the AI contribution from confounding factors is genuinely difficult.

Long time horizons. Some AI investments pay off over years, not months. Building a recommendation engine that gradually increases customer lifetime value takes time to demonstrate ROI. But boards and CFOs want quarterly metrics.

Intangible benefits. Better customer experience, faster decision-making, improved employee satisfaction from reduced tedious work. These are real benefits but difficult to quantify in dollar terms.

Baseline problems. If you didn’t measure your current performance rigorously before implementing AI, you can’t demonstrate improvement afterwards. Many companies implemented AI without establishing proper baselines, making after-the-fact ROI calculation nearly impossible.

What Good Measurement Looks Like

The seven companies with formal measurement frameworks shared some common practices.

They measured before they implemented. Baseline metrics were established before the AI system was deployed. Processing time per unit. Error rates. Cost per transaction. Customer satisfaction scores. These baselines made before-and-after comparison possible.

They isolated variables. The best frameworks attempted to control for confounding factors. A/B testing where possible, with some customers or processes using AI and others not. Where A/B testing wasn’t feasible, they used statistical methods to estimate the AI contribution.
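The A/B comparison described above can be sketched with a simple two-sample test on whatever metric was baselined. A minimal illustration, using hypothetical response-time samples (in hours) for a control group and an AI-assisted group; the figures and the metric are invented for the example:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for the difference in means of two
    independent samples (does not assume equal variances)."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

# Hypothetical average inquiry response times, in hours
control = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # no AI assistance
ai_group = [1.2, 1.0, 1.4, 1.1, 0.9, 1.3]  # AI-assisted

t = welch_t(ai_group, control)
print(f"mean improvement: {mean(control) - mean(ai_group):.2f} hours, t = {t:.1f}")
```

A strongly negative t here indicates the AI group's response times are lower than the control group's by more than sampling noise would explain. In practice you'd also want a proper p-value and enough samples per group, but the principle is the same: compare like with like, and let the data, not anecdote, attribute the improvement.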

They tracked total cost of ownership. Not just licensing fees but implementation costs, integration effort, ongoing maintenance, training, and the opportunity cost of time spent on the AI project rather than alternatives. Several companies discovered that their AI investments were profitable on a marginal basis but not when total cost was included.
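The marginal-versus-total distinction is easy to see in numbers. A sketch with entirely hypothetical annual figures (AUD), showing how a project that looks profitable against the licence fee alone can be underwater once the full cost stack is counted:

```python
# Hypothetical annual cost components for one AI project, in AUD
costs = {
    "licensing": 60_000,
    "implementation": 45_000,
    "integration": 30_000,
    "maintenance": 20_000,
    "training": 10_000,
}
annual_benefit = 120_000  # hypothetical quantified improvement value

# Marginal view: benefit against the licence fee only
marginal_profit = annual_benefit - costs["licensing"]

# Total cost of ownership view: benefit against everything
total_cost = sum(costs.values())
true_profit = annual_benefit - total_cost

print(f"marginal: {marginal_profit:+,} AUD, against TCO: {true_profit:+,} AUD")
```

With these numbers the project clears its licence fee comfortably but loses money overall, which is exactly the pattern several surveyed companies found once they counted everything.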

They measured over appropriate time horizons. Customer-facing AI measured over customer lifecycle periods. Operational AI measured over full business cycles that captured seasonal variation. Strategic AI given multi-year evaluation windows with interim milestones.

They reported honestly. The best companies reported both successes and failures. AI projects that didn’t deliver expected ROI were documented and analysed, generating learning that improved future investments.

The Quick-Start Measurement Approach

For companies that haven’t been measuring and want to start, here’s a pragmatic approach.

Step one: Identify the specific metric the AI is supposed to improve. If you can’t name it, you can’t measure it. “Improve efficiency” isn’t a metric. “Reduce average customer inquiry response time from 4 hours to 1 hour” is.

Step two: Measure current performance for at least four weeks before any AI deployment. This is your baseline. If you’ve already deployed AI, try to reconstruct baselines from historical data, though this is less reliable.

Step three: Calculate total cost. Add up everything: subscription fees, implementation costs, internal team time, training, ongoing support. Include the cost of any process changes that accompanied the AI deployment.

Step four: Measure performance after deployment for at least twelve weeks. Track the same metrics you baselined. Also track adoption: are people actually using the AI system?

Step five: Calculate ROI. Improvement value minus total cost, divided by total cost. If the number is negative, the AI investment isn’t paying for itself and you need to either improve the implementation or redirect the investment.
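The five steps reduce to one small calculation at the end. A minimal sketch of the step-five formula, with hypothetical improvement and cost figures plugged in:

```python
def roi(improvement_value: float, total_cost: float) -> float:
    """ROI as defined in step five: (improvement value - total cost)
    divided by total cost. Negative means the investment isn't
    paying for itself."""
    return (improvement_value - total_cost) / total_cost

# Hypothetical figures: quantified annual improvement vs. total annual cost (AUD)
print(roi(150_000, 120_000))  # positive: paying for itself
print(roi(90_000, 120_000))   # negative: improve or redirect
```

The hard work is in steps one through four; if the baseline, the total cost, and the improvement value are honest, this final division is trivial.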

The Uncomfortable Implication

If most Australian companies can’t demonstrate AI ROI, at least one of two things is true. Either the AI investments are generating value that isn’t being captured, or some of those investments aren’t generating value at all.

My suspicion, based on these conversations, is that it’s both. Some companies are undervaluing genuine AI benefits because they’re not measuring properly. And some companies are spending money on AI that isn’t delivering returns and would benefit from redirecting that investment.

The distinction matters. If you’re genuinely getting value but not measuring it, you need better measurement to justify continued investment and identify where to invest more. If you’re not getting value, you need to know that before you spend more.

Either way, measurement is the answer. And most Australian companies need to get significantly better at it.