Technology has advanced to the point where videos, images and audio created by AI look and sound incredibly real. Threat actors can also use AI for voice cloning and social engineering to impersonate trusted individuals and infiltrate an environment through phishing. Beyond damage to IT systems, intellectual property and reputation, leaders are concerned about threats to plant security, operational and industrial controls, information privacy, finances and potential business interruption.
The rapid evolution of technology requires companies to anticipate potential crises and develop strategies to address them before they escalate.
Organizations should adopt a risk-based approach that incorporates multiple layers of defense. For example, several emerging technologies use AI in real time to validate the authenticity of an individual on a video conference call. These tools pre-enroll individuals by capturing their key biometrics, then compare those biometrics in real time against the image on screen and raise an alert when a potential deepfake is detected. While it is impractical to subject every routine internal video call to such procedures, they make great sense when the context of the discussion is sensitive, such as decision-making on M&A-related matters.
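For illustration only, the sketch below shows one way such a real-time check might work: comparing an embedding of a live video frame against a pre-enrolled biometric embedding and alerting below a similarity threshold. The threshold, the 128-dimension embeddings and the function names are all hypothetical stand-ins; commercial tools use proprietary detection pipelines.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.80  # illustrative; real deployments tune this

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def validate_participant(enrolled: np.ndarray, live: np.ndarray) -> None:
    """Compare a live-frame embedding against the pre-enrolled biometric."""
    score = cosine_similarity(enrolled, live)
    if score < SIMILARITY_THRESHOLD:
        print(f"ALERT: potential deepfake (similarity {score:.2f})")
    else:
        print(f"Participant validated (similarity {score:.2f})")

# In practice these embeddings would come from a face-embedding model
# applied to an enrollment capture and to frames sampled from the live
# call; random vectors stand in for them here.
rng = np.random.default_rng(0)
enrolled = rng.standard_normal(128)
genuine_frame = enrolled + 0.1 * rng.standard_normal(128)  # close match
spoofed_frame = rng.standard_normal(128)                   # unrelated face

validate_participant(enrolled, genuine_frame)  # validated
validate_participant(enrolled, spoofed_frame)  # alert
```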
Employees should also know the warning signs and understand that the organization supports their efforts to verify suspicious requests. Organizations should foster a culture that encourages questioning or verifying requests that violate typical policies and protocols, no matter how urgent the situation seems. For example, an IT staffer who receives what appears to be an instruction from the CIO to send the company's most sensitive algorithms to an unfamiliar email address should feel empowered to pause and verify before acting.
“We are seeing AI-enabled deepfake efforts becoming more surgical and continuous. These are coordinated efforts stretching over weeks and months, with the victim becoming increasingly trusting of the deepfake and giving up sensitive information. With this in mind, AI can in fact help to measure risk. Organizations can now deploy agentic AI capabilities to trigger associated follow-up actions, including verifications and notification procedures,” said Jeremy Osinski, AI Leader, EY US Forensic & Integrity Services.
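As a purely illustrative sketch of the kind of risk-triggered follow-up described above, the snippet below routes a suspicious interaction to out-of-band verification or a security-team notification instead of letting the request proceed. Every name, threshold and score here is hypothetical, not a description of any specific product.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    requester: str     # who the request appears to come from
    channel: str       # e.g., "video_call", "email", "voice"
    risk_score: float  # 0.0 to 1.0, e.g., from a deepfake detector
    request: str

# Illustrative thresholds; real deployments calibrate these.
VERIFY_AT = 0.4
ESCALATE_AT = 0.7

def notify_security_team(event: Interaction) -> None:
    print(f"[SOC] High-risk {event.channel} from "
          f"'{event.requester}': {event.request}")

def request_out_of_band_verification(event: Interaction) -> None:
    # e.g., call back on a known number, or require a second approver
    print(f"[VERIFY] Confirm '{event.requester}' via a trusted channel.")

def follow_up(event: Interaction) -> str:
    """Trigger follow-up actions based on the measured risk."""
    if event.risk_score >= ESCALATE_AT:
        notify_security_team(event)
        return "blocked_and_escalated"
    if event.risk_score >= VERIFY_AT:
        request_out_of_band_verification(event)
        return "pending_verification"
    return "allowed"

print(follow_up(Interaction("CFO", "video_call", 0.82, "wire $2M today")))
```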