APRA calls for step-change in AI risk management across financial sector


The Australian Prudential Regulation Authority (APRA) has urged banks, insurers and superannuation trustees to strengthen their management of artificial intelligence-related risks, warning that current governance and operational practices are not keeping pace with rapid AI adoption.

In a letter to industry published today, APRA said governance, risk management, assurance and operational resilience practices were lagging behind the “scale, speed, and complexity” of AI deployment. The regulator’s comments follow a targeted supervisory review conducted late last year across APRA-regulated industries to examine how AI is being deployed and governed.

APRA said the expanded use of advanced AI is introducing new financial and operational vulnerabilities, and that information security practices are “struggling to keep up with the pace of change”. The letter also flagged frontier AI models such as Anthropic’s Claude, warning they could increase the probability, speed and scale of cyberattacks by enabling bad actors to discover vulnerabilities faster.

Among APRA’s observations:

- AI use is moving from experimentation to customer-facing applications faster than governance arrangements are maturing;
- boards are interested in AI’s potential benefits, but many lack the technical literacy to effectively challenge management on AI-related risks; and
- some entities face heightened concentration risk through dependence on a single provider for multiple AI use cases, coupled with gaps in contingency planning.

The regulator also warned that AI capabilities are increasingly embedded in broader software platforms and developer tools, reducing transparency over how models are trained, updated or constrained, and limiting entities’ ability to assess and manage risk. It said AI risks often span multiple domains—including operational resilience, cyber and information security, privacy and procurement—while existing change and assurance approaches can be fragmented.

APRA Member Therese McCarthy Hockey said regulated entities needed to continually adjust cyber practices to lift resilience in a fast-moving threat environment.

“The AI revolution presents tremendous opportunities for banks, insurers and superannuation trustees to deliver improved efficiency and enhanced customer services. We are already beginning to see these benefits materialise. But we cannot be blind to the risks of such powerful technology – whether in our own hands or the hands of those with malign intent.

“What we’ve observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren’t keeping up. Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI-accelerated threat.

“The findings outlined in today’s letter emphasise our expectations for how entities should be managing these risks in alignment with our prudential standards in areas such as information security, operational risk management, governance and data risk.

“While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it.

“In the meantime, APRA will continue engaging with government agencies, entities and peer regulators, domestically and overseas, to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system.”

Separate industry commentary responding to the APRA warning argued that the main risk is the shrinking time between vulnerability discovery and exploitation, particularly in highly interconnected banking environments.

Raghu Nandakumara, vice president of industry strategy at Illumio, said that when attackers can identify and exploit vulnerabilities “in a matter of moments”, organisations may not be able to respond quickly enough—especially in complex, legacy-heavy environments where response actions depend on human intervention.

“Ultimately, the impact of an AI-driven attack is not determined by whether a breach occurs, but by how far an attacker can move within the system once inside,” Nandakumara said. He argued that resilience depends on limiting an attacker’s reach through architectural changes and controls that restrict access to vulnerable workloads.

APRA’s letter to industry is available on its website: APRA Letter to Industry on Artificial Intelligence (AI).
