Quick summary
  • Artificial intelligence is now widely used across Australian financial services, but governance and risk frameworks are not keeping pace with how quickly AI risks emerge and spread. 
  • APRA has not changed obligations, but it has highlighted that AI increases the speed, scale and impact of familiar risks, exposing weaknesses in slow, siloed and assumption‑based controls. 
  • To close the gap, boards and executives need to revisit governance, oversight and risk practices to ensure they are fit for purpose in an AI‑enabled environment.
Artificial intelligence is now firmly embedded across Australian financial services. What was once experimental is becoming operational, customer‑facing and increasingly central to core decision making.

This shift is forcing boards and executives to confront a harder question: whether existing governance, risk management and oversight approaches are genuinely keeping pace with how risk now emerges and evolves.

APRA’s recent letter to industry on artificial intelligence is an important reference point in that discussion. It does not introduce new requirements, but it does highlight a growing disconnect between the power and speed of AI adoption and the maturity of the frameworks used to govern it. The letter reflects an issue many organisations are now grappling with – technology capability is advancing faster than the mechanisms used to monitor, challenge and control the associated risks.

The obligations have not changed, but the risk profile has

AI does not introduce new categories of risk, nor does it fundamentally change the prudential obligations that apply to regulated entities. Those obligations already exist across risk management, operational resilience, information security and privacy, including under CPS 220, CPS 230, CPS 234 and the Privacy Act.

What has changed is the way those risks now manifest in practice.

AI increases both the likelihood and the impact of adverse outcomes by operating at speed, at scale and with reduced human friction. Risks that were once constrained by manual processes, sampling or secondary review can now propagate quickly across customers, products and systems. This has direct implications for inherent risk assessments, control effectiveness and escalation thresholds.

In other words, the risks themselves are familiar. Their behaviour is not.

Outdated governance and control practices under pressure from AI

Across the sector, the most common challenge is not a lack of intent or policy coverage, but a misalignment between governance assumptions and operational reality.

Firstly, many governance and risk management processes rely on periodic, detective activities such as annual risk assessments, quarterly attestations and scheduled assurance programs. In an AI‑enabled environment, the assumption that material changes in risk can be identified retrospectively no longer holds. Organisations can safely assume that an annual risk assessment will miss material, technology‑driven changes in likelihood and impact.

Secondly, AI is reshaping the external threat environment at the same time as it is being embedded internally. Fraud, scams, social engineering and vulnerability discovery are increasingly AI‑enabled. This compresses the time between a weakness emerging and being exploited, while also amplifying potential harm. Where risk frameworks have not been recalibrated to reflect this shift, they can materially understate exposure.

Thirdly, at many organisations the management of risks impacted by AI is already siloed. Cyber security, fraud, third‑party risk management, data governance and operational resilience are typically owned, controlled and assured by different functions, each operating with its own processes and priorities. While that fragmentation may have existed for some time, AI exposes it – leaving organisations without a coherent view of overall risk posture and appetite.

These are not failures of intent or policy. They are signs that existing governance and control practices are being put under sustained pressure.

Governance, oversight and what needs to change

APRA’s observations also speak directly to the role of boards and oversight functions in a faster‑moving environment. While boards are strongly engaged with the potential benefits of AI, many are still developing the technical literacy required to provide effective challenge on AI‑related risks. This is not a question of becoming experts in the technology. It is a question of whether governance arrangements are capable of keeping pace with its impact.

For risk, compliance and internal audit functions, the message is not that continuous or real‑time monitoring is required. It is that AI materially changes the environment existing frameworks are meant to manage. If risk assessments, control design and governance processes have not been revisited to reflect changes in likelihood, impact and speed, now is the time. The focus should be on whether existing controls, escalation mechanisms and oversight processes are still fit for purpose.

Internal audit has a critical role in testing whether governance arrangements are operating as intended, whether accountability is clear, and whether assumptions about timing, detection and response still hold under AI‑enabled conditions.

Closing the gap

APRA is not proposing a new rulebook. The obligations remain the same. What is changing is how those obligations need to be implemented, measured and monitored in practice.

AI is exposing the limits of slow, fragmented and assumption‑driven governance. The gap between AI capability and the ability to govern it is already widening. Unless boards and executives intervene deliberately, that gap will continue to grow, leaving governance and control practices progressively further behind the risks they are meant to manage.

Please reach out to our team if you’d like to discuss APRA’s recent letter to industry on artificial intelligence and how it could impact your business.

This article is Part 1 of a three-part series on closing the risk management gap as AI becomes embedded across financial services. Part 2 will explore how the risk management gap is playing out in practice, including specific use cases, third‑party model reliance and what boards and executives should do next. Part 3 will explain what organisations need to do to close the gap.

Learn more about how our Cyber resilience services can help you
Visit our Cyber resilience page