AI’s effectiveness depends on the quality and volume of training data it learns from. AI-powered attacks scale and automate at a level that overwhelms traditional defensive models. This is why our clients must establish and strengthen foundational security building blocks, particularly zero trust, proactive security, strong risk governance and rapid detection and response. These fundamentals remain the priority, even as the threat landscape accelerates. Resilience requires a shift in how organizations design, govern and operate their security capabilities.
Anthropic, an AI safety and research company, recently disclosed that a state-sponsored threat group used an AI platform to coordinate simultaneous cyber espionage intrusions across multiple global companies and government agencies.¹ The AI system assisted with vulnerability identification, live exploitation, escalation, lateral movement and the creation of new attack paths. It also attempted to solve technical problems during active operations, reducing the time and expertise traditionally required. Similar disclosures from OpenAI and Microsoft documenting state-affiliated misuse of large language models (LLMs),², ³ along with assessments and guidance from the UK National Cyber Security Centre and the Cybersecurity and Infrastructure Security Agency (CISA),⁴ reinforce this trajectory.
Although some outputs needed manual validation, the broader pattern was clear: AI increased the speed of the intrusion cycle and enabled constant iteration. Notably, the attack relied on existing exploits and was only partially automated; it was not a fully agentic operation. This follows a familiar pattern: a new attack technique emerges, triggering an iterative arms race between attackers and defenders, with successful intrusions validating techniques that become more automated over time. This is why organizations need to continually advance core cyber practices, closing critical gaps before the next wave of automation matures.
This acceleration mirrors the broader environment described in the 2025 EY Global Risk Transformation Study, in which risks evolve in ways that are nonlinear, accelerated, volatile and interconnected, a dynamic the study captures in its NAVI framework. AI-enabled cyber threats reflect this reality by escalating quickly, crossing organizational boundaries and challenging static controls.