How EY can help
- The EU AI Act will be adopted shortly, with far-reaching extraterritorial impact. As compliance is often more costly and complex once AI systems are in operation than during development, we recommend that firms start preparing now with a professional AI Act readiness assessment and early adaptation.
- Our analytics, compliance and legal professionals can help your business achieve trustworthy, safe and compliant use of generative AI.
- Our legal, compliance and analytics professionals can help you ensure trust across your AI solution lifecycle with critical governance and control elements.
Enabling innovation built on trustworthy AI is an ongoing journey rather than a one-time initiative. A well-defined AI risk management governance framework serves as the foundation for establishing clear priorities for a sustainable and accountable rollout of AI technologies. By identifying risks at an early stage and aligning them with appropriate controls, organizations can bolster stakeholders’ trust, proactively address evolving regulatory requirements and ensure the resilience of their AI capabilities over time.
During the development of our AI Lifecycle Blueprint, presented above, we identified specific gaps in key activities and roles that relate directly to the use of AI and therefore require extensions to the existing governance framework. At the same time, our analysis revealed that most of the required steering and control mechanisms can build on processes already in place. While the building blocks serve as an integrative reference model, the AI Lifecycle Blueprint uncovers phase-specific requirements across the entire lifecycle, enabling organizations to pinpoint residual risks and identify where action is needed.
To address the AI-specific gaps in key activities, roles and corresponding control mechanisms, organizations should draw on publicly available reference frameworks. Key sources include the NIST AI Risk Management Framework, ISO/IEC 23894 (AI risk management), ISO/IEC 42001 (AI management system), ISO/IEC 42005 (AI system impact assessment) and the AI principles issued by the OECD and IOSCO. These frameworks translate abstract risk categories into operational controls that can be integrated into existing governance structures.
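To illustrate how such a reference framework can be operationalized, the sketch below maps the four core functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage) to example controls and flags gaps against an organization's existing control inventory. The control names and the inventory are hypothetical assumptions for demonstration only, not part of any standard.

```python
# Illustrative sketch: mapping the four NIST AI RMF core functions to
# hypothetical example controls and flagging coverage gaps against an
# existing control inventory. All control names are assumptions.

AI_RMF_CONTROLS = {
    "Govern": ["ai-policy", "accountability-roles", "model-inventory"],
    "Map": ["use-case-classification", "impact-assessment"],
    "Measure": ["bias-testing", "performance-monitoring"],
    "Manage": ["incident-response", "model-retirement"],
}

def find_gaps(existing_controls):
    """Return, per RMF function, the reference controls not yet covered."""
    existing = set(existing_controls)
    return {
        function: [c for c in controls if c not in existing]
        for function, controls in AI_RMF_CONTROLS.items()
        if any(c not in existing for c in controls)
    }

# Example: an organization whose existing ISO/IEC 27001-aligned processes
# already cover some controls (hypothetical inventory).
inventory = ["ai-policy", "incident-response", "performance-monitoring"]
print(find_gaps(inventory))
```

A gap report of this kind is one simple way to turn an abstract framework into a concrete action list: each uncovered control becomes a candidate extension to the existing governance structure.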
Many Swiss institutions already hold ISO/IEC 27001 certification and have aligned their cybersecurity practices with the NIST Cybersecurity Framework. They are therefore well positioned to take the next step: the essential control instruments are largely in place and simply require enhancement with AI-specific elements to ensure comprehensive risk management.