AI Bias in Australian Lending: The Problem Nobody Wants to Talk About
I’ve been investigating AI-driven lending decisions in Australia for six months. What I’ve found is concerning enough that I think it needs a proper public airing.
Several Australian lenders, including major banks and online lending platforms, are using AI systems for credit assessment. These systems influence who gets approved, who gets declined, and what interest rate they pay. And there’s evidence that some of these systems produce outcomes that disadvantage specific demographic groups.
What the Data Shows
I obtained aggregated lending outcome data from three sources: publicly available APRA reports, consumer advocacy organisations, and an anonymous source within a major lender’s risk team.
The patterns are troubling. Approval rates for applicants in postcodes with lower socioeconomic profiles are declining faster than credit risk data alone would justify. Certain industries associated with particular cultural communities show higher decline rates even when financial metrics are comparable. And interest rate spreads between demographic groups have widened since AI-assisted decisioning was introduced at several lenders.
Correlation isn’t causation. These patterns could reflect genuine risk differences that I can’t observe in the data available to me. But the consistency of the patterns across multiple lenders and the timing of their emergence, coinciding with AI adoption, warrant investigation.
How AI Bias Creeps In
AI bias in lending doesn’t require anyone to have discriminatory intent. It happens through data and design.
Historical data reflects historical discrimination. If past lending decisions were influenced by bias (and they were, demonstrably), then AI models trained on that historical data will learn those biases. An AI system that learns from twenty years of lending data is learning from twenty years of human decisions, including the biased ones.
Proxy variables. Even when lenders remove protected attributes (gender, ethnicity, religion) from their AI models, other variables can serve as proxies. Postcode correlates with ethnicity. Employer name correlates with cultural background. Transaction patterns correlate with socioeconomic status. An AI model sophisticated enough to predict credit risk is sophisticated enough to find these correlations and use them.
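To make the proxy problem concrete, here is a minimal sketch of the kind of leakage check a lender or auditor could run, assuming an application dataset with supposedly neutral inputs and a protected attribute held separately for monitoring. The file and column names are hypothetical. If the “neutral” inputs predict the protected attribute well above chance, the credit model can rediscover that attribute indirectly.

```python
# Hypothetical proxy-leakage check: can "neutral" application features
# predict a protected attribute that was never given to the credit model?
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

applications = pd.read_csv("applications.csv")  # hypothetical dataset

# Features the credit model is allowed to see.
neutral = pd.get_dummies(
    applications[["postcode", "employer_industry", "avg_monthly_spend"]],
    columns=["postcode", "employer_industry"],
)

# Protected attribute held only for testing, never used in scoring.
protected = applications["ethnicity_group"]

# AUC near 0.5 means the neutral features carry little information about
# the protected attribute; well above 0.5 means they act as proxies.
auc = cross_val_score(
    LogisticRegression(max_iter=1000),
    neutral,
    protected,
    cv=5,
    scoring="roc_auc_ovr",
).mean()
print(f"Proxy-leakage AUC: {auc:.2f}")
```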
Objective function design. AI models optimise for whatever they’re told to optimise for. A model optimised purely for default prediction will learn to avoid any characteristics associated with higher default rates, even if those associations reflect systemic disadvantage rather than individual creditworthiness.
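The objective is a design choice, and it can be adjusted. As one hedged illustration rather than a recommendation of any particular method, the sketch below reweights training examples so that no single group-and-outcome combination dominates the loss, which blunts the model’s incentive to treat group membership itself as a default signal. The column names and the weighting scheme are illustrative only.

```python
# Hypothetical illustration of adjusting the training objective:
# reweight rows so rare (group, outcome) combinations are not drowned out.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("training_data.csv")  # hypothetical dataset
X = train[["income", "existing_debt", "repayment_history_score"]]
y = train["defaulted"]

# Size of each (group, outcome) cell, aligned back to every row.
cell_size = train.groupby(["postcode_band", "defaulted"])["defaulted"].transform("size")

# Inverse-frequency weights: rows from over-represented cells count less,
# so the objective no longer rewards blanket avoidance of any one group.
weights = len(train) / cell_size

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
```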
What Australian Law Says
Australian discrimination law prohibits lending decisions based on protected attributes including race, sex, disability, and age. The question is whether AI systems that use proxy variables to achieve effectively discriminatory outcomes violate these laws even when protected attributes aren’t direct inputs.
The legal position is still developing, but several legal experts I’ve spoken with believe that outcome-based discrimination, where the effect is discriminatory regardless of the mechanism, is actionable under both federal and state anti-discrimination legislation.
APRA’s AI guidance requires regulated entities to test for discriminatory outcomes. But compliance with this guidance varies, and the testing methodologies aren’t standardised. A lender could conduct superficial bias testing that misses subtle patterns.
The ACCC has also flagged algorithmic pricing and decision-making as a priority area. Consumer lending decisions that systematically disadvantage particular groups could attract scrutiny under Australian Consumer Law as well.
What Lenders Should Be Doing
Responsible lenders should be taking three steps.
Comprehensive bias testing. Not just testing whether protected attributes are direct model inputs, but testing whether model outputs differ systematically across demographic groups. This means analysing approval rates, interest rates, and credit limits by postcode, age group, gender, and other dimensions, controlling for legitimate risk factors.
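As a rough illustration of what outcome testing can look like in practice, the sketch below compares approval rates across one demographic dimension, both raw and within bands of the model’s own risk score, so that any remaining gap cannot be attributed to the risk estimate itself. The decision-log fields are hypothetical, and a real testing programme would cover many more dimensions with proper statistical controls.

```python
# Hypothetical outcome test: approval rates by group, overall and
# within risk-score bands derived from the model's own estimates.
import pandas as pd

decisions = pd.read_csv("decision_log.csv")  # hypothetical decision log

# Raw approval rate by group.
print(decisions.groupby("age_band")["approved"].mean())

# The same comparison within risk bands: a gap that persists here is not
# explained by the model's risk score and warrants closer investigation.
decisions["risk_band"] = pd.qcut(decisions["risk_score"], q=5, labels=False)
print(
    decisions.pivot_table(
        index="risk_band", columns="age_band", values="approved", aggfunc="mean"
    )
)
```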
Explainability infrastructure. For every lending decision, the lender should be able to explain which factors drove the AI’s recommendation. If the explanation doesn’t make sense without reference to demographic characteristics, the model needs revision.
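For simple scorecard-style models, a per-decision explanation can be as basic as the sketch below, which decomposes a linear model’s score into named contributions for one applicant. The function and variable names are hypothetical, and more complex models need dedicated attribution tooling, but the principle is the same: every decision should decompose into named, defensible factors.

```python
# Hypothetical per-decision explanation for a linear scorecard model:
# contribution of each input = coefficient * feature value.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def explain_decision(model: LogisticRegression, applicant: pd.Series) -> pd.Series:
    contributions = pd.Series(model.coef_[0] * applicant.values, index=applicant.index)
    # Strongest drivers of the score first.
    return contributions.sort_values(key=np.abs, ascending=False)

# Usage, given a fitted model and one applicant's feature row:
# print(explain_decision(credit_model, applicant_features))
```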
Regular external audits. Internal testing has inherent conflicts of interest. External audits by independent specialists who understand both AI and discrimination law provide accountability that internal processes can’t.
What Consumers Can Do
If you’ve been declined for credit or offered unfavourable terms and you suspect AI bias played a role, several options exist.
Request an explanation of the decision. Under the Australian Privacy Principles, you have a right to know what personal information was used in decisions that affect you. If the lender can’t explain why you were declined beyond “the model scored you below the threshold,” that’s a red flag.
Contact the Australian Financial Complaints Authority (AFCA). AFCA handles disputes about lending decisions and can investigate whether a decision was fair and reasonable.
Report concerns to the Australian Human Rights Commission if you believe the decision was discriminatory.
The Bigger Picture
AI in lending has enormous potential to improve fairness by removing human prejudice from credit decisions. That’s the promise. But it only works if the systems are designed and monitored to actually achieve fairness rather than to automate and scale existing biases.
Australian lenders adopting AI for credit decisions have an obligation to ensure those systems don’t discriminate. The technology to test for bias exists. The regulatory expectations are clear. The missing ingredient is institutional commitment to doing the work.
This is a story I’ll continue to follow. If you work in lending and have information about AI bias in credit decisions, I’d like to hear from you.