Artificial intelligence (AI) is rapidly becoming the defining force in federal cybersecurity. In agencies across government, leaders are racing to harness its potential — more than 70% plan to expand AI use in the next one to three years. Yet as algorithms grow more powerful, so do the adversaries who exploit them. Threat actors are now using generative AI (GenAI) to craft convincing phishing campaigns and deepfakes, weaponizing automation to accelerate ransomware-as-a-service, and manipulating machine learning models through data poisoning and adversarial attacks. At the same time, federal systems face mounting risks from compromised machine identities and third-party AI components woven through the software supply chain.
The convergence of AI and cybersecurity is no longer theoretical; it is the new battleground for protecting national data, critical infrastructure, and public trust. But while the potential of AI is vast, the readiness gap is widening. Nearly two-thirds of agencies already use AI or machine learning (ML) tools in some cybersecurity capacity, yet many remain in pilot phases. Only one in four federal leaders is confident in their organization’s ability to manage AI-related cyber risks, and half cite a lack of internal technical expertise as a top barrier to progress. This white paper explores how agencies can navigate this new frontier — using AI to defend their missions while securing the intelligent systems that now underpin them.