Insight

AI‑driven vulnerability discovery is changing core cyber risk assumptions

Quick summary
  • Advances in AI‑driven vulnerability discovery are significantly reducing the time between weaknesses emerging and being exploited. This challenges traditional probability assumptions embedded in cyber risk models, control testing and reporting.
  • Effective cyber governance now depends less on isolated control maturity and more on understanding underlying system architecture, legacy platforms and critical software and third‑party dependencies. 
  • As incident avoidance becomes less realistic, security teams are increasingly judged on their ability to explain exposure, prioritisation and residual risk in clear business terms. 
Recent commentary on AI and cyber risk, including analysis of Anthropic’s Project Glasswing and broader legal and governance discussions, points to a structural shift in how cyber risk materialises.

The most important implication is not that organisations should react hastily, but that several longstanding assumptions are no longer reliable. Boards and security leaders need to adjust accordingly. 

For boards, three adjustments are particularly important 

1. Treat likelihood differently, not just impact 

AI materially shortens the time between vulnerability discovery and exploitation. This does not automatically mean every organisation is at imminent risk, but it does mean that probability assessments embedded in risk models, control testing, and board reporting may now be understated. Boards should expect this to be reflected explicitly in risk appetite statements and scenario analysis. 

2. Expect architectural clarity, not just control maturity 

Major incidents are increasingly the result of accumulated design and dependency decisions, not isolated control failures. Boards should be asking whether critical systems, including legacy platforms, are architecturally resilient in an environment where continuous vulnerability discovery is becoming the norm. 

3. Re‑frame third‑party and software dependency risk 

The practical attack surface for most organisations sits well beyond systems they directly operate. AI increases the speed at which weaknesses in software supply chains and service providers are identified and exploited. Oversight needs to focus on material dependencies, not exhaustive supplier lists. 

For CISOs and security teams, this shift requires a change in mindset 

Security teams can no longer assume that most systems are largely safe most of the time. Vulnerability discovery should be treated as continuous and expected. Security functions need to spend less effort defending historic patch cycles and more time explaining exposure, prioritisation, and residual risk in business terms. 

Security leadership will increasingly be assessed not by the absence of incidents, but by the quality of insight provided to executives and boards about what can realistically be defended, redesigned, or consciously accepted. 

For regulated entities, the implication is clear but not alarmist 

This is not a call for rushed technology change. It is a call to ensure that governance frameworks, operational resilience arrangements, and risk assessments reflect current realities, particularly where regulatory expectations assume timely identification and management of material risk. 

AI has not created a new category of cyber risk; its impact lies in compressed timelines and the invalidation of comfortable assumptions. Boards and executives who adjust their oversight models deliberately, rather than reactively, will be better positioned than those who wait for the first unexpected incident. 

Learn more about how our Cyber resilience services can help you
Visit our Cyber resilience page