
When deepfake dangers cause real crises

Organizations can manage AI risks by developing robust governance frameworks — and using AI to detect signs of trouble.


In brief
  • As AI threats continue to evolve, organizations must proactively address deepfakes and other risks.
  • Crisis management is critical; implementing rapid-response strategies and training employees can help companies navigate AI-related challenges effectively.
  • Investing in advanced detection tools and fostering a culture of verification are key to staying ahead of emerging digital threats.

As employees arrive at ABC Company headquarters one morning, a shocking video begins to circulate. It appears to show the CEO, in a live-streamed interview, announcing that the organization is cutting ties with a key supplier and making defamatory claims that the supplier’s practices are unethical.

None of it is true.

A crisis management team gathers and scrambles to issue statements, respond to calls for boycotts and address investor concerns over a falling stock price. Social media narratives compound the damage to the company’s reputation. Although the video is confirmed to be a deepfake, public perception has already shifted: some now treat every communication from ABC Company as suspect, and restoring trust will require sustained communication.

The incident highlights the urgent need for organizations to adapt to the evolving threat landscape posed by artificial intelligence (AI). “Organizations are facing a new dimension of technology-enabled threats, including deepfakes, impersonations and other AI-driven risks. We have rapidly seen an evolution from these being perpetrated by one-off individuals with ill intent to highly organized and well-resourced criminal organizations,” said Brian Wolfe, Managing Director, Forensic & Integrity Services, Ernst & Young LLP (EY US).

Technology has advanced to the point where videos, images and audio created by AI look and sound incredibly real. Threat actors can also use AI for voice cloning and social engineering to impersonate trusted individuals and infiltrate an environment through phishing. Beyond damage to IT systems, intellectual property and reputation, leaders are concerned about threats to plant security, operational and industrial controls, information privacy, finances and potential business interruption.

 

The rapid evolution of technology requires companies to anticipate potential crises and develop strategies to address them before they escalate.

 

Organizations should adopt a risk-based approach that incorporates multiple layers of defense. For example, several emerging technologies use AI in real time to validate the authenticity of an individual on a video conference call: participants are pre-enrolled by capturing key biometrics, which are then compared in real time to the image on screen, and an alert is raised if a potential deepfake is detected. While it is impractical to subject every routine internal video call to such checks, they make good sense when the discussion is sensitive, such as decision-making on M&A-related matters.
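
For illustration only, the sketch below shows the basic shape of such a real-time check, assuming face embeddings have already been produced by a recognition model for both the enrollment capture and the sampled call frames; the vector size, threshold and function names are hypothetical.

```python
import numpy as np

# Illustrative similarity threshold; real deployments tune this empirically.
ALERT_THRESHOLD = 0.75

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def frame_raises_alert(enrolled: np.ndarray, live_frame: np.ndarray) -> bool:
    """Compare a live-frame embedding against the pre-enrolled biometric.

    Returns True when similarity drops below the threshold, signaling a
    possible impersonation or deepfake that warrants an on-screen alert.
    """
    return cosine_similarity(enrolled, live_frame) < ALERT_THRESHOLD

# Usage sketch: in practice, the embeddings would come from a face-recognition
# model applied to the enrollment capture and to frames sampled from the call.
rng = np.random.default_rng(0)
enrolled_embedding = rng.standard_normal(512)    # stand-in for the enrolled biometric
call_frame_embedding = rng.standard_normal(512)  # stand-in for a sampled call frame
if frame_raises_alert(enrolled_embedding, call_frame_embedding):
    print("ALERT: participant identity could not be confirmed on this frame")
```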

 

Employees should also know the warning signs and understand that the organization supports their efforts to verify suspicious requests. Organizations should foster a culture that encourages questioning or verifying requests that violate typical policies and protocols, no matter how urgent the situation appears to be: for example, a deepfake of the CIO instructing an IT staffer to send the company’s most sensitive algorithms to a suspicious email address.

 

“We are seeing AI-enabled deepfake efforts becoming more surgical and continuous. These are coordinated efforts stretching over weeks and months, with the victim becoming increasingly trusting of the deepfake and giving up sensitive information. With this in mind, AI can in fact help to measure risk. Organizations can now deploy agentic AI capabilities to trigger associated follow-up actions, including verifications and notification procedures,” said Jeremy Osinski, Forensic & Integrity Services EY US AI Leader.
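
As a rough illustration of the kind of automated follow-up described above, the rules-based sketch below scores an inbound request and maps the score to verification and notification steps. The signals, weights and thresholds are hypothetical stand-ins; an actual agentic deployment would draw on far richer context than this simple scoring rule.

```python
from dataclasses import dataclass

@dataclass
class RequestSignals:
    """Signals about an inbound request (fields are illustrative)."""
    sender_verified: bool          # passed out-of-band identity verification?
    involves_sensitive_data: bool  # e.g., algorithms, credentials, wire details
    urgency_pressure: bool         # "do this immediately" style pressure
    unusual_channel: bool          # e.g., personal email instead of corporate

def risk_score(s: RequestSignals) -> int:
    """Simple additive score; a real deployment would use richer models."""
    score = 0
    score += 0 if s.sender_verified else 3
    score += 2 if s.involves_sensitive_data else 0
    score += 1 if s.urgency_pressure else 0
    score += 1 if s.unusual_channel else 0
    return score

def follow_up_actions(s: RequestSignals) -> list[str]:
    """Map the score to follow-up actions such as verification or notification."""
    score = risk_score(s)
    actions: list[str] = []
    if score >= 3:
        actions.append("require out-of-band verification before acting")
    if score >= 5:
        actions.append("notify the security team and pause the request")
    return actions

# A request from an unverified sender asking for sensitive data under time
# pressure triggers both verification and notification.
print(follow_up_actions(RequestSignals(False, True, True, False)))
```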

Preparedness is key

Traditional crisis management protocols are often inadequate for the pace of AI-related threats: the window to act has shrunk from days and hours to moments. Legal, HR and PR decisions must be made ahead of time; without preestablished protocols and strategy, confusion only adds to the chaos.

“You have to keep on top of the narrative now or you have lost it,” said Wolfe.

AI can also streamline threat modeling and contingency planning, strengthening incident response. Together, these steps help companies get ahead of the narrative rather than play catch-up in the public eye.

Leading organizations are also investing in AI tools for threat detection and response and learning to use AI as an ally in fighting cybercrime. For example, some organizations are leveraging AI to help verify online content and detect sophisticated fraud schemes.
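
Purely as a sketch, the triage logic below shows how a manipulation-detection score might be combined with a provenance check to decide next steps; the thresholds, labels and inputs are hypothetical, and the detector itself is assumed to be a separate trained model or third-party service.

```python
def assess_content(detector_score: float, has_provenance: bool) -> str:
    """Triage a piece of online content.

    detector_score: output of a separate manipulation-detection model,
        scaled 0-1, where higher means more likely synthetic (assumed input).
    has_provenance: whether signed content credentials were found.
    Thresholds and labels are illustrative only.
    """
    if has_provenance and detector_score < 0.3:
        return "likely authentic"
    if detector_score > 0.7:
        return "likely manipulated: escalate to the crisis team"
    return "inconclusive: verify through official channels"

# Example: a clip with no provenance data and a high manipulation score.
print(assess_content(detector_score=0.85, has_provenance=False))
```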

Comprehensive crisis management

Proactive threat intelligence is vital for early risk identification and prevention, and it’s crucial to have a security program that integrates multiple facets.

A well-designed program includes several key components: threat intelligence, physical and travel security, cybersecurity training, digital footprint analysis, event planning, social media monitoring, personnel vetting and crisis management. These elements work together to protect people and operations and to align with law enforcement engagement and the organization’s overall risk strategy.

Key actions to take

If your organization becomes the victim of a deepfake attack, are you prepared for what happens next? Have you run through a tabletop scenario to discuss key decisions, actions and communications?

To prepare your organization for an effective response, revisit the strategies discussed throughout this article: invest in advanced detection tools, establish rapid-response protocols, train employees to recognize warning signs and foster a culture of verification.

Summary 

As the threat landscape rapidly evolves with the rise of AI technologies, organizations must act decisively to develop robust crisis management frameworks. By investing in advanced detection tools, establishing rapid-response protocols and fostering a culture of awareness and preparedness, companies can better navigate the complexities of AI-related crises. The integration of AI into crisis management enhances detection capabilities while empowering organizations to respond swiftly and effectively, ultimately safeguarding their reputation and strengthening operational resilience. As we move forward, embracing these strategies will be crucial in mitigating risks and maintaining trust in an increasingly digital world.
