
The dual imperative: AI and cybersecurity

Leaders who prioritize workforce capability, Responsible AI, and cross-agency collaboration will define the next era of secure innovation.


Artificial intelligence (AI) is rapidly becoming the defining force in federal cybersecurity. Across government, leaders are racing to harness its potential: more than 70% of agencies plan to expand AI use within the next one to three years. Yet as algorithms grow more powerful, so do the adversaries who exploit them. Threat actors are using generative AI (GenAI) to craft convincing phishing campaigns and deepfakes, weaponizing automation to accelerate ransomware-as-a-service, and manipulating machine learning models through data poisoning and adversarial attacks. At the same time, federal systems face mounting risks from compromised machine identities and third-party AI components woven through the software supply chain.


The convergence of AI and cybersecurity is no longer theoretical; it is the new battleground for protecting national data, critical infrastructure, and public trust. But while the potential of AI is vast, the readiness gap is widening. Nearly two-thirds of agencies already use AI or machine learning (ML) tools in some cybersecurity capacity, yet many remain stuck in pilot phases. Only one in four federal leaders is confident in their organization's ability to manage AI-related cyber risks, and half cite a lack of internal technical expertise as a top barrier to progress. This white paper explores how agencies can navigate this new frontier: using AI to defend their missions while securing the intelligent systems that now underpin them.

Summary

AI must be both secured and secure; both imperatives are essential to protecting federal missions, data, and public trust. Yet neither can succeed without responsible human expertise. The survey data makes the challenge clear: the tools are advancing faster than the workforce. To close this gap, agencies must treat expertise as the foundation of readiness. Technology can amplify capacity, but only skilled professionals can ensure AI is used ethically, securely, and effectively. Federal leaders who prioritize workforce capability, Responsible AI, and cross-agency collaboration will not only protect against AI-driven threats; they will define the next era of secure innovation.
