AI Governance: What Businesses Need to Know in the Age of Intelligent Chaos

Aubrey Blanche argues that the real revolution in AI isn't about speed or scale; it's about building the maturity, governance, and ethical discipline to use intelligence responsibly before it turns into chaos.
When Aubrey Blanche walks into a room, she does so with purpose. Fresh from the hairdresser, her hair streaked in rainbow tones, she’s immediately memorable — a sharp, fast-talking technologist whose confidence comes from years spent navigating the intersection of technology, culture, and ethics.
Blanche has a robust CV. She is the founder of The Mathpath, an advisory firm that helps businesses build equitable organisations, and has recently taken on a Director role at the independent not-for-profit The Ethics Centre. She has worked at both Atlassian and Culture Amp and is now completing a master's in AI ethics at Cambridge, building a career that sits at the intersection of data, design, and human behaviour. She isn't interested in AI as a shiny toy. She's interested in what happens when it actually lands inside organisations, and what happens when it doesn't.
Blanche believes that as organisations rush to deploy generative tools, the real risk isn't that AI will outthink us; it's that we'll implement it thoughtlessly. "People are going, 'Oh, use this tool. Use this tool,'" she laughs. "Not, 'We've identified this problem and we believe AI is the best tool to solve that problem.'"
This goes right to the heart of a new issue: companies are embracing AI for the optics, not the outcomes. Leaders are told by boards or competitors that they must “use AI,” but few can articulate why or to what end. The result is a flurry of pilot projects, chatbots, and agents that look innovative but lack strategic purpose.
It is, she argues, like giving a team a hammer and telling them to start building, without deciding whether they’re constructing a house, a bridge, or a coffee table. Mature organisations start with the problem, not the technology. They treat AI adoption like any other transformation: define the goal, assign ownership, and measure success beyond novelty.
The second challenge is governance, or rather, the lack of it. "Executives are saying, 'Use AI.' We have a bunch of people who are using it, either authorised or unauthorised," Blanche explains. "Do you actually want everyone in your business building their own agents? In what other business are you saying, 'Let's invent 120 versions of the same process'?"
Across industries, AI adoption is happening faster than oversight can catch up. Marketing teams build custom GPTs; operations experiment with automated workflows; HR tries generative analytics, often without IT or compliance visibility. While enthusiasm fuels innovation, it also multiplies risk.
Blanche calls this “bottom-up chaos.” Without a central framework for what’s allowed and what’s risky, organisations can’t manage exposure. Sensitive data seeps into public models, unvetted outputs shape customer interactions, and duplication wastes resources.
Her message is pragmatic. Governance isn’t bureaucracy; it’s an efficiency measure. Clear policies, permissions, and accountability channels protect both creativity and compliance. “If you wouldn’t launch into a new market without strategy,” she says, “why launch into AI without one?”
The biggest threat to data privacy isn't malicious; it's accidental. "Someone's like, 'I'm thinking about exporting our survey results and putting them in ChatGPT to get answers,'" Blanche recalls. "They don't realise they may be exposing non-anonymised data."
Employees eager to save time often paste proprietary or personal information into public tools, unaware of how those systems retain or reuse content. For an enterprise, that's not just a data breach; it's a reputational event.
Blanche emphasises that this kind of misuse comes from good intentions. “Nobody knows what I’ve worked with because most of them are in sales. They don’t do AI. They’re just trying to use it.”
The answer, she says, lies in education, not punishment. Every employee should understand the basics of how AI models handle data. Organisations should provide enterprise-grade tools and clear, simple rules. “Make it easy for people to do the right thing,” she says, “and impossible to do the wrong one.”
Blanche believes the next phase of AI maturity will depend on "responsibility by design": embedding ethical and compliance safeguards into the systems themselves. "Start from the beginning and prevent them from doing bad things," she says.
Rather than relying on user judgment, products and platforms should include built-in consent controls, enterprise privacy settings, and audit transparency. For smaller businesses that lack internal legal teams, the safest route is to partner with vendors who hard-code governance into their offerings.
Responsible AI isn’t about slowing innovation; it’s about ensuring it scales sustainably. “You don’t hope your users behave ethically,” Blanche says. “You design so they can’t do anything else.”
Then there's the cost problem, one that few leaders are yet accounting for. Platforms like Claude Code, Lovable and Cursor have raised prices multiple times in 2025 in a bid to reach profitability and live up to huge valuations. "I know one startup that bought in at $10,000 for a particular number of licences," explains Blanche, "and then they changed the terms on him. Suddenly he's got a $30,000 bill, and for an SME that's fatal."
Generative AI may save labour hours, but it introduces new and volatile operating costs. Usage-based pricing, compute spikes, and shifting terms can wipe out projected savings. Businesses that fail to model these expenses will be blindsided.
Blanche predicts steep cost inflation over the next three years as demand, regulation, and energy consumption rise. “If we don’t see a five-times increase in cost,” she says, “I’d be astonished.”
The lesson: AI should be treated like cloud infrastructure, as a continuous investment that must be tracked, justified, and optimised. Efficiency doesn't mean cheap; it means sustainable.

"Everybody is going to need to be an AI ethicist, like an armchair ethicist," Blanche says. "What I want to hand you is a framework to think through issues."
In her view, ethics isn't a philosophical side note; it's a core business skill. Every employee, from engineer to executive, should be capable of evaluating the moral and practical implications of automation. Ethical literacy helps companies avoid risk, but it also improves decisions. Teams that can ask "should we?" before "can we?" will move faster and with more confidence.
"The real competitive advantage in the age of intelligent systems," Blanche says, "isn't intelligence. It's judgment. I'm not saying AI is good or bad… I'm asking under what conditions it's really good."
AI will reshape every industry, but whether it creates value or chaos depends on the choices made now. The businesses that win the next decade won’t be the ones that adopt AI first. They’ll be the ones that adopt it wisely.
Because the true revolution ahead isn’t artificial intelligence. It’s responsible intelligence.