How to turn AI into a catalyst for innovation in cybersecurity

As cybercriminals leverage AI to attack enterprise infrastructure, cybersecurity teams need to step up and harness AI to strengthen their defenses.


In brief

  • While AI offers significant advantages in enhancing operational efficiency and threat detection in cybersecurity, it also presents new vulnerabilities.
  • Organizations must integrate AI into their cybersecurity frameworks responsibly, building in strong human oversight and a robust governance structure.
  • Cybersecurity teams should strive to become model citizens as they embrace AI, using it as a springboard to transform the enterprise.

Convincing the Chief Information Security Officer (CISO) of XYZ company to approve a major artificial intelligence (AI) deployment involving a new agentic AI application was a challenge for the Chief AI Officer (CAIO). For six months, the new project delivered significant cost savings and efficiency improvements. Then, as the CAIO was preparing a presentation for the board to showcase the benefits, the CISO alerted him to a major data breach caused by inadequate boundaries on the entitlements granted to the AI agents. The CAIO shelved the presentation and pivoted to a new one covering how to notify major clients about the breach and the steps he and the CISO would take to rebuild the agentic AI project with stronger cybersecurity controls.

Rapid advances in AI deployments are revolutionizing the way companies design and architect enterprise systems to enhance operational efficiency, generate new revenue streams and foster innovation across the enterprise. But these same advances have also created new attack vectors for cybercriminals, who are proving increasingly adept at leveraging AI to automate attacks, create sophisticated phishing schemes and bypass traditional security protocols.

What's more, the expansion of agentic AI solutions, which autonomously make decisions and take actions based on learned experience and honed context, raises the possibility that cybercriminals could infiltrate these agents, which often have improperly governed access to key enterprise systems and assets. Such attacks are typically harder to detect and often require extensive, costly remediation.
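The core mitigation for this exposure is least-privilege governance of what an agent is entitled to do. The sketch below is purely illustrative and assumes a hypothetical agent framework in which every tool call is checked against an explicit allowlist before it executes; the tool and resource names are invented for the example.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Entitlement:
      tool: str            # e.g., "read_ticket", "query_logs"
      resource_scope: str  # e.g., "servicedesk/*", "siem/alerts/*"

  # The agent is granted only what its task requires (least privilege).
  AGENT_ENTITLEMENTS = {
      Entitlement("read_ticket", "servicedesk/*"),
      Entitlement("query_logs", "siem/alerts/*"),
  }

  def is_authorized(tool: str, resource: str) -> bool:
      """Check a proposed tool call against the agent's granted entitlements."""
      for ent in AGENT_ENTITLEMENTS:
          prefix = ent.resource_scope.rstrip("*")
          if ent.tool == tool and resource.startswith(prefix):
              return True
      return False

  def execute_tool_call(tool: str, resource: str) -> None:
      if not is_authorized(tool, resource):
          # Deny by default and leave an audit trail for the security team.
          print(f"DENIED and logged: {tool} on {resource}")
          return
      print(f"ALLOWED: {tool} on {resource}")

  execute_tool_call("query_logs", "siem/alerts/2024-10-01")   # within the granted scope
  execute_tool_call("update_payroll", "hr/payroll/batch-42")  # never granted, so blocked

The design point is that the deny path is the default: an agent cannot acquire new capabilities simply by reasoning its way to them, and every refused call is logged for human review.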

The new cybersecurity threat landscape has made it even more imperative that organizations stay vigilant and adapt their cybersecurity strategies accordingly. Fortunately, just as AI has amplified the threat, it also provides organizations with a tool to find innovative ways to prevent or mitigate the impact of future attacks. Cybersecurity teams that embrace and implement AI will position themselves as leaders in advancing technology and also serve as a model for other industries and parts of their own organizations. Many CISOs are confident that their teams are ready to assume this leadership role. A recent EY survey found that 90% of CISOs were optimistic that AI can positively transform their organization’s cybersecurity strategy and preparedness.

Cybersecurity landscape in 2025

At the same time, CISOs were notably more likely than other members of the C-suite to express concern that their organization is underestimating the dangers of cybersecurity threats (66% compared with 56%). For many CISOs, who are more aware of the cybersecurity risks related to AI adoption, it's a matter of when and how, rather than if, their organization will experience a cybersecurity incident.

And the threat is real. Cybercriminals have not wasted any time in using generative AI (GenAI) to scale and personalize malicious activities. In just the past year, phishing attacks powered by GenAI have increased by more than 1,200%¹ and are becoming increasingly difficult to detect. Research found that in 2023, phishing emails written by AI were 31% less effective than those written by humans.² By 2025, phishing emails generated by GenAI were 24% more effective than those written by humans, increasing the likelihood that recipients would be tricked into opening them and clicking on a link.

And it's not hard to see why. Sophisticated phishing attempts now include deepfake-enabled emails, voicemails and even interactive live meetings purporting to be from the CEO, in which employees are instructed to transfer funds to a specific bank account. Cybercriminals have also developed adaptive malware that targets food production, supply chains and consumer data.

The rapidly evolving nature of the AI-powered threat has prompted some organizations to consider avoiding AI in their cyber defenses altogether, out of concern that the technology itself could open new paths for bad actors into their systems. In addition, AI-based cybersecurity tools rely on high-quality, up-to-date training data, which is hard to obtain when new threats emerge almost daily. Biased or outdated data can produce false positives or missed detections that fail to catch fast-evolving threats. Moreover, AI decision-making can lack transparency, which makes it difficult to understand why certain actions were taken.

Importance of AI as a driver of innovation

Still, as our survey found, CISOs as well as other members of the C-suite recognize that AI needs to play a critical role in their cybersecurity strategy. Many organizations are tapping AI’s potential to streamline security procedures and slash response times. For instance, some companies are leveraging machine learning to detect anomalies in data and transactions in real time. Once threats are identified, companies can use AI to autonomously respond to new threats, block malicious traffic, isolate compromised devices and automatically send alerts throughout the enterprise.
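As a minimal sketch of the anomaly-detection pattern described above, the example below trains an unsupervised model on baseline network-flow features and flags deviations. The feature set, the contamination rate and the follow-up actions are assumptions made for illustration, not a production design.

  import numpy as np
  from sklearn.ensemble import IsolationForest

  rng = np.random.default_rng(0)

  # Hypothetical baseline traffic features: [bytes_sent, bytes_received, connections_per_minute]
  baseline = rng.normal(loc=[500, 800, 20], scale=[50, 80, 3], size=(5_000, 3))

  # Unsupervised model learns what "normal" looks like; ~1% of flows assumed anomalous
  model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

  def triage(flow: np.ndarray) -> str:
      """Score a new flow; IsolationForest returns -1 for anomalies."""
      if model.predict(flow.reshape(1, -1))[0] == -1:
          # In a real deployment this verdict would feed automated containment:
          # block the source, isolate the device and alert the SOC.
          return "anomalous - escalate"
      return "normal"

  print(triage(np.array([510, 790, 21])))      # resembles typical traffic
  print(triage(np.array([50_000, 120, 400])))  # exfiltration-like spike

In practice the model's verdict would be one signal among many, feeding the automated response and alerting workflows described above rather than acting alone.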

In addition to improving response time, organizations can also use AI to learn from past incidents and detect new and evolving threats. At the same time, AI can also help organizations remove the element of human error that could compromise routine security tasks, such as monitoring network traffic for suspicious spikes in activity. Moreover, by automating many of the more repetitive tasks necessary for protecting the enterprise, AI allows cybersecurity professionals to focus on more complex strategic tasks, augmenting their problem-solving capabilities and compensating for staff shortages in the cybersecurity profession.

Integrating the right security controls into an AI deployment enables the cybersecurity team to set the tone for the entire organization, establishing themselves as a role model for implementing responsible AI.

Competitive advantages through secure AI adoption (responsible AI)

Given the stakes and the increasing threat posed by cybercriminals adapting AI to amplify their attacks, companies face an urgent need to demonstrate that they can deploy secure and responsible AI countermeasures. Moreover, while AI can enhance detection and improve response times, its vulnerabilities, its complexity and the rapid advance of AI-powered attack techniques mean it cannot serve as the sole line of defense. To be blunt, AI needs to be paired with strong human oversight and embedded in broader security strategies that harden the enterprise against attack.

That requires a smart, tactical approach to countering cyberthreats, not simply adopting the latest AI solutions. As a first step, many organizations need to tackle the proliferation of disconnected cybersecurity tools, often acquired reactively to address a specific problem. Consolidating these tools into unified, AI-enabled platforms can streamline operations, reduce redundancy and improve threat detection.

An effective AI response to cyberthreats must be integrated into a secure, scalable and responsible AI program — one that emphasizes a major skills uplift for the cyber team and the entire organization. Any sustainable AI program should be based on the following foundational elements:

  1. Establish a cross-functional AI governance framework: Align AI initiatives with business goals, regulatory requirements and ethical standards.
  2. Secure the data supply chain: Confirm the integrity, privacy and provenance of data used for AI training and inference (a simple integrity check is sketched after this list).
  3. Build a secure AI development and deployment environment: Protect models, infrastructure and interfaces from cyberthreats.
  4. Leverage AI for cybersecurity operations: Enhance threat detection, response and resilience using AI capabilities.
  5. Start with high-impact, low-risk use cases: Accelerate adoption of agentic AI for cybersecurity by identifying use cases that balance risk, speed and value, and by triaging risks across three categories: low, high and extreme.
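To make point 2 concrete, here is a minimal, purely illustrative sketch of one way to pin the integrity and provenance of AI training data: hash every file into an approved manifest and fail the pipeline if anything drifts. The directory layout, manifest format and file paths are assumptions for the example, not a prescribed standard.

  import hashlib
  import json
  from pathlib import Path

  def sha256_of(path: Path) -> str:
      """Stream a file through SHA-256 so large training sets are handled safely."""
      digest = hashlib.sha256()
      with path.open("rb") as fh:
          for chunk in iter(lambda: fh.read(1 << 20), b""):
              digest.update(chunk)
      return digest.hexdigest()

  def build_manifest(data_dir: Path) -> dict:
      """Record a hash per file; store alongside dataset version, source and sign-off."""
      return {str(p): sha256_of(p) for p in sorted(data_dir.rglob("*")) if p.is_file()}

  def verify_manifest(data_dir: Path, manifest_path: Path) -> list:
      """Return the files whose contents no longer match the approved manifest."""
      approved = json.loads(manifest_path.read_text())
      current = build_manifest(data_dir)
      return [f for f, h in approved.items() if current.get(f) != h]

  # Usage sketch (paths are hypothetical): fail the training run on any drift or tampering.
  # tampered = verify_manifest(Path("datasets/threat_intel_v3"), Path("manifests/threat_intel_v3.json"))
  # if tampered:
  #     raise RuntimeError(f"Training data integrity check failed: {tampered}")

The same manifest can double as a provenance record when it is versioned and signed off alongside the model that was trained on it.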

Conclusion

Even though cybercriminals are rapidly developing new AI techniques, organizations still have time to respond and win the arms race. They can do this by adopting a responsible AI approach as the cornerstone of an enterprise-wide cyber prevention strategy. This approach requires human oversight to be integrated into the entire workstream, providing critical checkpoints for course correction and confirming that systems protect the integrity of customer data.

By taking these steps, organizations can build sustainable trust with customers and stakeholders and demonstrate that they protect the confidentiality, integrity and availability of data used in AI systems. This is a clear call to use AI in cybersecurity as a way to seize and maintain a competitive edge. Organizations that embrace this approach could go a long way toward making the cyber team the model citizen for adopting responsible AI in a way that transforms the enterprise.

Summary

As organizations adapt to a transformed cybersecurity threat landscape, they are turning to AI as a tool for preventing, detecting and mitigating the impact of future attacks. Cybersecurity teams that successfully embrace and implement AI take a huge step toward positioning themselves as model citizens for how the enterprise can deploy AI safely and responsibly.
