
Australian companies tout AI responsibility, but global scrutiny, including from the Vatican, demands more than slogans.
A growing number of Australian businesses are weaving ‘ethical AI’ into their mission statements, policies, and press releases. Yet despite the rhetoric, most are falling short on delivery. According to the Australian Responsible AI Index 2024, while 82 per cent of organisations say they’re using AI responsibly, only 24 per cent have concrete governance structures to back that up.
A KPMG report found that, among workers surveyed, 48 per cent admit to using AI in ways that contravene company policies, 57 per cent rely on AI output without evaluating its accuracy, and 50 per cent say AI has led to mistakes in their work.
This credibility gap isn’t going unnoticed. At a recent Vatican summit, Pope Leo XIV urged world leaders and tech giants to prioritise AI that “serves humanity, not markets.” The Pontiff’s remarks landed just as ethical AI became the new litmus test for corporate integrity, from Sydney boardrooms to Silicon Valley.
Local Frameworks, Limited Adoption
Australia is not without direction. The government has introduced voluntary AI Safety Standards and endorsed the AI Ethics Principles, aiming to steer developers and users toward fair, transparent, and accountable systems.
Still, many small and mid-sized businesses are struggling to implement these ideals. A recent National AI Centre report noted that SMEs often lack the resources or expertise to properly integrate ethical considerations into AI deployment. Meanwhile, larger enterprises are facing increasing stakeholder pressure to move beyond performative compliance.
ASIC’s 2024 review of financial firms flagged a “governance gap” in AI oversight, warning that under-regulated use of machine learning models could expose consumers to bias and misinformation. These findings align with the Responsible AI Index, which shows companies typically adopt just 12 out of 38 recommended practices.
Telstra, the Vatican, and a Global Wake-Up Call
In a notable example of leadership, Telstra recently joined a United Nations panel on ethical AI. The move came amid the telco’s own internal shake-ups, including dropping carbon offsets and raising broadband fees, changes that point to a pivot toward more measurable, transparent accountability.
Globally, Pope Leo XIV’s Vatican gathering added urgency. “Technology without conscience is power without direction,” the Pope warned. His call for a shared ethical standard across nations resonated far beyond the church, with tech leaders and regulators in attendance pledging renewed scrutiny of corporate claims.
For Aussie companies, the message is clear: ethical AI can no longer be a vague ambition. It must be operationalised through cross-functional oversight, risk modelling, and robust stakeholder engagement.
Practical Tools and Industry Momentum
Consultancies like Melotti AI Ethics are emerging to help Australian firms navigate this complex landscape. These groups offer frameworks aligned with national guidance and international best practice, helping companies bridge the gap between ideals and implementation.
The Australian Government’s AI Ethics Principles, when adopted meaningfully, provide a strong foundation. Yet they must be paired with internal accountability: ethics committees, red-teaming for bias, transparent reporting, and consumer feedback loops.
Some sectors are moving faster than others. In health and fintech, the consequences of unethical AI, such as patient misdiagnosis or predatory lending, are spurring action. Elsewhere, progress remains uneven.
From Buzzword to Business Mandate
Whether in the Vatican or Victoria, the momentum around ethical AI is accelerating. But the credibility of this movement hinges on business leaders doing more than talk.
Australian firms must treat ethics not as a compliance checkbox but as a strategic differentiator. That means investing in talent, setting clear benchmarks, and being honest about the limitations of current systems.
As Pope Leo XIV reminded the world: “The future of AI must be human. Not because machines threaten us, but because we risk forgetting ourselves.”