The changing nature of cyber threats
Cyber threats are undergoing a significant shift. Earlier attacks relied heavily on the skill of human operators, manual reconnaissance and phased exploitation, and intrusions took time to plan and execute.
Today’s attackers use AI inside live environments. AI tools help them craft malicious code, refine payloads, generate synthetic communications and troubleshoot technical barriers. Public threat intelligence has already confirmed that attackers use AI tools to support decisions once they are inside a network, and malware has been observed calling out to large language models (LLMs) to generate evasive commands in real time.
Tomorrow’s threat landscape is likely to involve semi-autonomous intrusion ecosystems that operate continuously across cloud, identity, data and application layers. These systems may test defenses, shift tactics based on detection and remain persistent without continuous human oversight. AI compresses the intrusion lifecycle into a rapid, adaptive sequence, reducing the distinction between reconnaissance, exploitation and persistence.
In a NAVI environment, this pace and interdependence require new approaches to cyber resilience. The traditional assumption that threats move predictably and linearly is no longer valid.
Threats to AI and threats through AI
As organizations adopt AI to streamline operations and improve decision-making, they face two categories of risk that influence both security posture and enterprise resilience:
- Threats to AI arise when attackers target the data, models and infrastructure that power AI. Compromised training data can distort model behavior. Theft of proprietary models undermines competitive advantage and may reveal sensitive information. Attacks on the underlying environment, including data pipelines and machine learning operations (MLOps) workflows, can degrade performance or introduce malicious logic. Even subtle manipulations can change how a model interprets inputs, influencing downstream business decisions.
- Threats through AI occur when attackers exploit AI systems as tools or enablers. AI with elevated privileges can be manipulated into taking actions that users did not intend. Attackers can use AI to analyze environments, identify weaknesses, generate exploit paths and refine attacks while inside an organization’s network. Synthetic audio, video and text make social engineering more effective. Public-facing AI applications may reveal sensitive information if probed in certain ways. As attackers automate these capabilities, they reduce the time and skill needed to run sophisticated campaigns.
The two threat categories are interdependent. Weak controls around AI create opportunities for misuse, while attackers who use AI increase the pressure on organizations to secure their own models, data and workflows. Resilience requires a holistic view of the entire AI ecosystem. These risks are amplified when baseline security practices are inconsistent. Applying zero trust principles to development processes, data environments and production systems limits the impact of a compromise and reduces the opportunities for attackers to manipulate or misuse AI assets.