
How can reimagining your cyber guardrails accelerate AI value?


In a nonlinear, accelerated, volatile and interconnected cybersphere, enterprise-wide AI adoption is safer and faster with cybersecurity guardrails.


In brief

    • CISOs can manage and invest with confidence when they understand the interwoven characteristics that define today’s complex cybersecurity landscape.
    • Cybersecurity functions should develop clearly defined yet adaptable guardrails to help the business deploy and accelerate adoption of AI with confidence.

    Half of all organizations reported they have been negatively impacted by cybersecurity vulnerabilities introduced by Artificial Intelligence (AI) systems, the October 2025 EY Global Responsible AI Pulse survey found. The cost is high: average losses top US$4.4m for organizations that experienced AI-related incidents.


    Cyber adversaries are compromising organizations by targeting AI systems through new attack vectors, including data poisoning, prompt injection and model theft. AI is also part of the cyber adversary’s toolkit, increasing their speed and reach, and potentially giving them unanticipated attack methods in the near future.

    In this increasingly complex cyberspace, how can CISOs help secure the rollout of AI and, in the process, increase the value the cybersecurity function contributes to the enterprise?

    Drawing on EY research, interviews and secondary data, this article first frames the cybersecurity landscape within an AI-accelerated, interconnected environment, and then defines a set of cybersecurity “guardrails” CISOs can use to help confidently facilitate AI adoption across the enterprise.


    Chapter 1

    A new cybersecurity threat landscape, defined by NAVI

    The cybersecurity landscape is becoming more nonlinear, accelerated, volatile and interconnected.

    Organizations are still investing heavily to bolster their cybersecurity functions. Among companies with more than US$1b in revenue, 72% spend US$10m or more on cybersecurity, with over a quarter spending US$100m or more, according to EY research, and Gartner estimates total cybersecurity spend will increase by 10% in 2025.1 But cybersecurity functions are not just spending away their problems; they are being innovative and strategic with their investments, developing AI-powered threat detection and response capabilities and better integrating security into new enterprise-wide initiatives.

     

    This is all an attempt to stay ahead of the threat. Cybercriminals are also well-resourced but are not bound by corporate governance rules, allowing them to experiment and innovate rapidly and relentlessly probe for weaknesses in their opponents’ defenses.

     

    But trying to keep score in the cybersecurity arms race is futile. What good is a company’s perfect cybersecurity track record if it suffers a severe data breach tomorrow?

     

    Instead, CISOs can benefit by better understanding the interwoven characteristics that define today’s complex cybersecurity landscape.

    In the NAVI world, change is increasingly:

    • Nonlinear, triggering sudden tipping points that can catch companies by surprise
    • Accelerated, demanding increased speed of response
    • Volatile, with frequent changes in direction that test companies’ agility
    • Interconnected, setting off cascades of downstream impacts

    By understanding the new NAVI operating environment, CISOs will be able to identify the root causes and structural trends driving change, and make better-informed decisions.

    1. Nonlinear

    In the same way “vibe coding” with AI has made code writing possible for non-coders, “vibe hacking” has the potential to bring cybercrime to the masses. The latest AI advances represent a tipping point for cybercrime, increasing both the number of viable threat actors and the number of victims that can be simultaneously targeted.

    In August 2025, Anthropic revealed that a cybercriminal used its AI coding assistant, Claude Code, to carry out a data extortion operation against 17 organizations across multiple countries, including a defense contractor, health care providers and a financial institution. At each step of the attack, the cybercriminal used Claude Code to consult and operate, supporting reconnaissance, exploitation, lateral movement and data exfiltration.

    “AI lowers the bar required for cybercriminals to carry out sophisticated attacks,” said Rick Hemsley, EY UK&I Cybersecurity Leader. “Cyberattacking skills that used to take time and experience to develop are now more easily accessible, for free, for a greater number of cybercriminals than ever before.”

    The increasing number of viable actors presents a challenge not just for organizations but also for their regulators. Whereas before, regulators and government agencies could focus some efforts on known groups of hackers or advanced persistent threats, AI might further decentralize the threat landscape by quickly arming new groups across new geographies or by providing “lone wolf” actors with the skills previously held by multiple actors working as a group.

    Cybercriminals are also using AI to target more victims at once. Social engineering techniques like phishing, voice phishing (vishing) and deepfake vishing are most effective when they are most convincing. In the past, it took time to create a convincing lure for a victim. Now, cybercriminals can launch personalized phishing and vishing scam campaigns to many victims at once with generative AI tools. CrowdStrike detected a 442% increase in vishing intrusions in the second half of 2024, a trend expected to continue through 2025.2
     


    Future quantum computing breakthroughs might represent tipping points that trigger nonlinear change for cybersecurity. A sufficiently powerful quantum computer could break widely used encryption algorithms, rendering current data protection methods obsolete.

    2. Accelerated

    The average e-crime breakout time — the time needed for an attacker to start moving laterally across a victim’s network — was 48 minutes in 2024, down from 62 minutes in 2023 and 79 minutes in 2022, according to CrowdStrike.


    Accelerating breakout times are dangerous. Once attackers establish themselves in a network, they can gain deeper control and become harder to dislodge. In a September 2025 cyberattack that canceled and delayed flights for days across Europe, a compromised software provider rebuilt and relaunched its systems, only to discover the hackers were still inside.

    Beyond breakout times, other aspects of the cybersecurity landscape are accelerating. The software-as-a-service (SaaS) market has boomed in recent years. Worldwide revenue for enterprise applications will reach US$385.2b in 2026, as estimated by IDC — a nearly 40% increase from 2022, with most of this growth attributed to investments in public cloud software. Building applications in the cloud has helped SaaS providers’ customers boost innovation and efficiency, expand rapidly and better serve customers. But accelerated product and feature rollouts to keep pace with fierce competition can come at the expense of security. Cyberattacks on smaller, fast-moving SaaS providers often impact their customers, due to data sharing and tight technology integration.

    Similarly, organizations are accelerating internal AI initiatives. While doing so, leaders recognize that speed comes with risk: only 14% of CEOs believe AI data protection is strongly safeguarded in their organizations, according to a recent EY Responsible AI Pulse survey.


    “As businesses accelerate AI and technology adoption, they should consider cybersecurity implications from the outset,” Ayan Roy, EY Americas Cybersecurity Leader, said. “Done right, cybersecurity should not slow adoption but should encourage safer, faster innovation across the business.”

    3. Volatile

    Increased geopolitical and regulatory volatility is impacting cybersecurity. Almost 60% of organizations said geopolitical tensions affected their cybersecurity strategy in 2025, according to the World Economic Forum.3 That isn’t surprising — recent years of increased geopolitical volatility have had many knock-on effects in the cybersphere for both businesses and governments.


    Leaders don’t expect this volatility to abate in the near future. More than half (57%) expect geopolitical and economic uncertainty to last longer than a year, with nearly a quarter (24%) forecasting longer than three years, according to the September 2025 EY-Parthenon CEO Outlook Survey.

    Critical infrastructure — for utilities, transportation, communications and energy — can be impacted by geopolitical volatility when targeted by state-sponsored cyberattacks. These attacks ramp up tensions but don’t usually lead to conventional warfare, making them a popular method to prod a foe without declaring war. For businesses, critical infrastructure outages can lead to factory downtime, supply chain and transportation disruptions, physical asset damage and more.

    These same pieces of public infrastructure can also be second-order victims of cyberattacks when a third-party supplier is targeted. Cybercriminals might be incentivized to target businesses that support high-profile pieces of infrastructure — like airports or train systems — to build public pressure for a quick fix that may come from a ransom payment.

    Regulatory volatility also impacts cybersecurity for organizations. “Politics are realigning and growing more polarized, increasing the likelihood of significant swings in policy from one election to the next,” said Catherine Friday, EY Global Government & Infrastructure Industry Leader. 

    When it comes to regulation, cyberspace is not borderless, so the picture is especially complex for multinational companies. This is currently in focus with AI regulation, which is at different stages in different parts of the world, resulting in an ever-changing patchwork of policies to comply with.

    “Multinational companies face complex cybersecurity, AI, data and other technology regulations from multiple jurisdictions,” Piotr Ciepiela, EY Global Government and Infrastructure Cyber Leader, said. “The smartest companies design compliance into their technology, so they can respond to regulatory volatility with adjustments, not overhauls.”

    4. Interconnected

    Organizations thrive when they form strong partnerships with suppliers. Cybercrime thrives on large attack surfaces, like those formed by an interconnected ecosystem of third parties with varying levels of cybersecurity maturity.

    As organizations build internal AI functions, most rely on third parties for large language models (LLMs), since building LLMs from scratch is expensive and requires massive compute resources.

    This hybrid approach to AI development — rapid development of internal tools using external resources — is no different from how other internal technologies are developed. But the tradeoff is increased cybersecurity risk. According to the 2025 EY Global Third-Party Risk Management Survey, third-party risk management (TPRM) programs scan for cybersecurity risk more often than any other risk.

    Organizational complexity is also increasing. “In a world where organizations are becoming more complex and interconnected, within a cyber landscape that is ever-changing, the stakes for CISOs are raised. They not only need to ensure that enterprise-wide AI initiatives are secure, but they also need to secure their ecosystem in collaboration with third parties,” Rudrani Djwalapersad, EY Global Cyber Risk and Cyber Resilience Lead, said.

    Just within the cybersecurity function, organizations use an average of 47 tools, according to EY research. On an even more granular level, employees recognize risks in their AI experimentation: EY research (via ey.com US) found that 39% of them are not confident in using AI responsibly.



    Chapter 2

    Cybersecurity guardrails to improve enterprise-wide AI adoption

    A NAVI world poses vexing cybersecurity challenges for AI adoption. Done right, cybersecurity can increase both the speed and the security of adoption.

    Nearly every business function makes a case to be involved “from the outset” of AI initiatives, each for good reason. For the cybersecurity function, “shifting left” — performing security testing earlier in the software development lifecycle — is both a compelling mantra and an effective policy to safeguard new technology. But for technologists who want to “move fast and break things” and for business leaders who want to beat the competition to market, it can seem unwieldy.

     

    Simply shifting left also isn’t an effective strategy for minimizing cybersecurity risks in the NAVI world. Shorter technology development cycles and highly adaptable cybercriminals demand a more resilient approach to cybersecurity.

     

    It is more effective to focus on a set of clear cybersecurity “guardrails” that help increase both speed and security of AI adoption. Guardrails are a clearer, more adaptable way to embed security into AI initiatives — one that integrates into existing systems, accelerates adoption and gives stakeholders confidence that key risks are being managed from day one. Guardrails are also more compatible with responsible AI initiatives. Both programs aim to build trust and manage risk, and their integration strengthens each while amplifying visibility and support across the enterprise.

    Cybersecurity guardrails help leading CISOs integrate into key strategic decisions earlier. And early integration leads to larger value creation from the cybersecurity function, as our 2025 EY Global Cybersecurity Leadership Insights Study found.

    Here are five guardrails that leading CISOs are using to create value in their organizations and mitigate cybersecurity risks in the NAVI world:

    1. Safeguard the human risk factor

    Leading CISOs are minimizing human risk factors by protecting the human-AI interface and reducing opportunities for employees to be exploited as the weakest link. “Technology alone can’t secure an organization. Companies that invest in reducing human risk through awareness, culture and accountability will be far more resilient against modern cyber threats than those that rely solely on tools,” said Bill Fryberger, EY Americas Cybersecurity Advisory Leader.

    Organizations are implementing stronger identity and access controls, modernizing insider threat programs and rolling out tailored, risk-based awareness training to curb human error, prevent social engineering campaigns and avoid misconfigurations that could lead to breaches.

    Risks mitigated:

    • Human error: Opportunities to exploit human error may increase as employees independently “toy around” with AI tools in their day-to-day tasks. According to the October 2025 EY Responsible AI Pulse survey, 68% of organizations allow “citizen developers” (employees independently developing or deploying AI agents). However, only six in 10 provide formal guidance to their employees.
    • Targeted AI vishing, phishing and social engineering campaigns: Already an effective method to gain unauthorized access in a cyberattack, these campaigns are on the rise — AI-generated phishing emails rose by 67% in 2025.4
    • Inadvertent misconfigurations that cause data breaches.

    2. Secure data used in AI initiatives

    Data is the foundation of any AI system, so leading CISOs are working to secure every type, whether it is user input data to help tailor results for organizational contexts, training and fine-tuning data to help build foundational models, or labeled data for validating model outputs. They focus on safeguarding the confidentiality, integrity and availability of data — deploying defenses against data poisoning and injection attacks, tightening controls on third-party and sensitive data, and applying strong encryption to reduce exposure of confidential information and ensure AI systems are trained on trusted sources.

    Risks mitigated:

    • Data poisoning: Data poisoning can reduce model accuracy by up to 27% in image recognition and 22% in fraud detection, making it a high priority for CISOs to address.5
    • Use of confidential, sensitive or personally identifiable information (PII) to train AI systems and agents: See the case study below to learn how Microsoft 365 Copilot maintains strict data standards.
    • Data leaks from AI outputs: AI can reveal sensitive information by accident or on purpose. For instance, in a recent prompt injection challenge, 88% of participants were able to trick GenAI into giving away sensitive information.6
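    The data controls above can be made concrete with a small input-sanitization sketch. This is a minimal, illustrative example (the patterns and function name are hypothetical, not drawn from the article); real deployments would layer a dedicated PII-detection service and encryption controls on top of simple pattern matching.

```python
import re

# Hypothetical patterns for illustration only; a production system would
# pair these with a dedicated PII-detection service, not regexes alone.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with labeled placeholders before the text
    is sent to an external AI service or used for fine-tuning."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

    Running a redaction step like this at the boundary between internal data stores and external model APIs helps keep confidential inputs out of third-party training and logging pipelines.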

    3. Re-engineer AI threat detection and response

    Leading CISOs are re-engineering AI threat detection and response by unifying visibility and defense across the entire AI attack surface. Three-quarters of organizations are currently working to automate their cybersecurity detection processes, according to EY research. They are applying AI-driven monitoring, automated response and enhanced threat intelligence to block prompt injections, sanitize outputs, redact sensitive data, mitigate denial-of-service attempts and contain overprivileged agents. This integrated approach helps organizations quickly detect, respond to and adapt to malicious activity targeting AI systems and their supply chains. The banking sector is especially advanced with agentic AI, with more than half of executives saying agentic AI systems are highly capable of improving cybersecurity, according to an MIT Technology Review study in association with EY.


    Risks mitigated:

    • Cyberattacks on AI systems and AI supply chains: CISOs are increasingly aware of higher cybersecurity risk with AI systems. For instance, 76% of organizations that use AI in their audits perceive higher cyber risk.8
    • Excessive agency or inappropriate output from AI agents
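    As a simple illustration of the detection layer described above, the sketch below screens user input for common instruction-override phrases before it reaches a model. The phrase list and function name are hypothetical; production defenses combine such heuristics with classifier models, output sanitization and allow-listing, since a static list alone is easy to evade.

```python
# Illustrative deny-list of instruction-override phrases associated
# with prompt-injection attempts.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard the system prompt",
    "reveal your system prompt",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True when the input contains a known override phrase,
    so it can be blocked or routed for human review."""
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```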

    4. Mitigate AI supply chain threats

    The interconnected nature of AI development requires organizations to build strong third-party risk management and attack surface visibility. CISOs are mitigating AI supply chain threats by implementing transparency, visibility and minimum security standards across third-party providers and AI components. They are strengthening asset management, applying rigorous third-party risk controls and using cryptographic verification of models to reduce hidden dependencies and vulnerabilities introduced through external AI software.

    Risks mitigated:

    • Complexity, hidden dependencies and additional vulnerabilities: The threat is real — 61% of companies experienced a third-party breach in the past year.10
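    The cryptographic verification mentioned above can be sketched as a checksum gate: before loading a third-party model artifact, compare its hash against the value published by the provider and refuse to load on mismatch. The function name is illustrative; many model registries publish checksums that can serve as the expected value.

```python
import hashlib
from pathlib import Path

def verify_model_checksum(model_path: Path, expected_sha256: str) -> bool:
    """Hash a downloaded model artifact in chunks and compare it to the
    provider-published checksum; on mismatch, the artifact should not load."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```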

    5. Harden AI systems

    Organizations are working with their CISOs to harden AI systems by embedding security throughout the development and deployment lifecycle — from design to operation. They recognize the criticality of this step: 83% of leaders say AI adoption would be faster if they had stronger data infrastructure in place, according to EY research (via ey.com US). “Integrating the right security controls into an AI deployment and hardening AI systems helps the cybersecurity team set the tone for the entire organization, establishing the team as a role model for implementing responsible AI,” said Dan Mellen, EY Global Cyber Chief Technology Officer.

    As a result, organizations are integrating secure coding and model governance into machine learning operations (MLOps), applying adversarial testing and red teaming, and enforcing strong configuration, segmentation and vulnerability management standards. This approach reduces errors and misconfigurations, protects against infrastructure-level weaknesses, and helps ensure AI models and agents are deployed on resilient foundations.


    Risks mitigated:

    • Errors, misconfigurations and vulnerable code from fast-paced development and lack of expertise: Cloud and AI misconfigurations are exceptionally common, with 98.6% of organizations reporting critical cloud misconfigurations.
    • Underlying infrastructure vulnerabilities: Weak infrastructure is both a risk and a hindrance to the AI rollout — according to EY research (via ey.com US), 67% of leaders say that inadequate infrastructure is actively holding back their AI efforts.
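    A hardening program like the one above can be partially automated with configuration audits. The sketch below checks a deployment description for three common weaknesses; the keys and function name are hypothetical and would map onto an organization’s actual cloud or MLOps configuration schema.

```python
# Hypothetical configuration keys for illustration; adapt the checks to
# your real cloud or MLOps configuration schema.
def audit_ai_deployment(config: dict) -> list[str]:
    """Return a list of findings for common AI-deployment misconfigurations."""
    findings = []
    if config.get("endpoint_public", False):
        findings.append("Model endpoint is publicly accessible")
    if not config.get("encryption_at_rest", False):
        findings.append("Training data store lacks encryption at rest")
    if config.get("service_account_role") == "admin":
        findings.append("Service account is overprivileged")
    return findings
```

    Wiring checks like these into the deployment pipeline turns configuration standards into an enforced gate rather than a written policy.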

    Taken together, this set of cybersecurity guardrails will help CISOs both secure the AI rollout across the enterprise and generate more value from the cybersecurity function. By focusing guardrail investments on clear value-driving areas, CISOs can promote their function within the organization and rapidly enhance their cybersecurity capabilities to keep pace with the NAVI world.

    AnnMarie Pino, Associate Director, Ernst & Young LLP; William Reid, Assistant Director, Ernst & Young LLP; and Joe Morecroft, Associate Director, EYGS LLP, contributed to this article.


    Summary

    While organizations develop internal AI programs and partner with AI providers, the cybersphere is becoming increasingly nonlinear, accelerated, volatile and interconnected. CISOs should develop cybersecurity guardrails to help secure AI adoption across the enterprise.

