- As AI-linked cyber incidents rise, the share of cybersecurity budgets allocated to AI solutions is expected to significantly increase
- Leaders expect agentic AI to become more central in cyber defense over the next two years, with AI governance emerging as a key driver of cyber resilience
NEW YORK – Cybersecurity leaders are at a critical crossroads as artificial intelligence (AI) redefines both the threat landscape and defensive capabilities, according to findings from the new EY Cybersecurity Roadmap Study of 500 senior corporate security leaders.
The study found that a staggering 96% of senior security leaders say AI-enabled cybersecurity attacks are a significant threat to their organization, with about half (48%) estimating that at least a quarter of the cybersecurity incidents their organization experienced in the past year were enabled by AI. At the same time, fewer than half of senior security leaders are strongly confident in their organization’s ability to defend against a major AI-enabled security breach.
“Security leaders have been rapidly bolting on AI solutions to stay ahead of AI-driven cyber threats, but their lack of confidence in defenses signals a need for reimagining security architecture with AI at the core,” says Ganesh Devarajan, Americas Consulting Cyber Risk Practice Leader. “Cyber leaders can’t just automate yesterday’s defenses; they must move toward an AI-native posture that embeds cyber as a foundational layer of trust across enterprise AI.”
Investments in cyber and AI defenses set for growth
Security leaders are responding to the increased threats by moving aggressively toward autonomous defenses, with virtually every respondent confident that the strategic use of AI will transform their organization’s proactive (99%) and defensive (99%) cybersecurity strategies. Yet a large majority (85%) of senior security leaders who use AI in cybersecurity say their current cybersecurity budget is insufficient to meet AI-enabled threats.
That is set for a dramatic shift as respondents say their AI defense spending will rapidly increase. The number of organizations dedicating at least a quarter of their total cybersecurity budget to AI solutions for cybersecurity is set to roughly quintuple, jumping from only 9% of organizations spending at that level today to 48% in two years.
“Budget increases create the opportunity for cyber leaders to strategically invest to move from automating simple tasks to advanced agentic AI systems that can undertake complex, multi-step actions across products and ecosystems simulating human responses to attacks,” says Devarajan.
Agentic AI set to take center stage
Nearly all senior security leaders (97%) agree that their organization’s competitive advantage in the next two years will be directly tied to the maturity of their agentic AI cybersecurity defenses. In fact, the number of senior security leaders saying the following areas will be largely run with agentic AI is set to roughly double in two years:
- Advanced persistent threat (APT) detection: from 30% currently to 62% in two years
- Real-time fraud detection: from 32% currently to 58%
- Identity and access management (IAM): from 23% currently to 51%
- Third-party risk management: from 25% currently to 50%
- Data privacy and compliance: from 27% currently to 48%
- Deepfake and impersonation defense: from 23% currently to 42%
AI governance maturity gap
Structured governance has emerged as the essential bridge between risk and resilience as organizations race to adopt AI and autonomous systems. Virtually all senior security leaders report having an AI cybersecurity governance framework in place; among those, 98% agree the framework has proven essential for ensuring the responsible use of AI.
But a significant gap remains between having a policy and fully adopting and enforcing it: only 20% of organizations have successfully optimized these frameworks and embedded them into their organizational culture. The majority report less mature adoption: 51% of senior security leaders say their AI cybersecurity governance framework is defined, implemented and embedded in key processes, and 26% report their framework is fully rolled out and integrated across relevant business units.
“The proliferation of AI cyber threats is an operational reality that puts the limitations of legacy frameworks on full display,” says Devarajan. “Organizations must move beyond standalone cyber defenses and risk management toward a system of architecting trust across governance, compliance and ethics that turns AI from a risk into a competitive advantage.”
For more about cybersecurity and the Trust Layer of EY.ai Value Blueprints, visit https://www.ey.com/en_us/services/ai/value-blueprints
About the EY Cybersecurity Roadmap Study
The research was conducted via an online survey of n=500 senior security leaders defined as full-time US employees at the director level and above (including 216 C-suite and 284 leaders below the C-suite level) who manage their organization’s information security, including data and systems, at organizations with at least $500m in annual revenue across 12 industries. The survey was fielded between December 19, 2025, and January 8, 2026. The margin of error (MOE) for the total sample is +/- 4 percentage points at the 95% confidence interval.
Industries surveyed include Banking and Capital Markets (n=50), Wealth and Asset Management (n=50), Oil and Gas (n=30), Consumer Products (n=50), Technology (n=30), Industrial Products (n=30), Life Sciences (n=50), Private Equity (n=30), Retail (n=50), Media and Entertainment (n=30), Healthcare (n=50), Insurance (n=50). Weighting was applied to distribute the sample evenly across all industries.
About EY
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.
Ernst & Young LLP is a client-serving member firm of Ernst & Young Global Limited operating in the US.
Contact: Lizzie McWilliams