
How can businesses build AI-driven innovation based on trust?


Uplift current core processes to enable innovation driven by responsible artificial intelligence.


In brief

  • Close the gaps: quantify residual AI risks, test whether existing controls cover them and assess whether your organization is ready to scale AI.
  • Trust over speed – uplift, don’t rebuild. Expand current governance, risk and control frameworks with AI-specific enhancements instead of creating parallel structures.
  • Anchor controls in recognized standards: align with NIST AI Risk Management Framework, ISO/IEC 23894, ISO/IEC 42001, ISO/IEC 42005 and OECD/IOSCO principles.

Artificial intelligence is advancing from pilot projects to full-scale productive use faster than any previous technology. Boards are eager to integrate AI into products, services and back-office operations, believing that early adopters will capture significant value. However, speed alone will not guarantee a return on investment. The true differentiator lies in responsible AI – scaling models while maintaining trust, safeguarding stakeholders and complying with increasingly stringent regulations.


Chapter 1

Introduction

How do organizations keep their AI both cutting-edge and responsible?

Many organizations say they endorse responsible AI, but platitudes don’t replace effective controls. Three critical gaps often persist:

  1. AI risks – Do we fully understand our residual risks and their business impact? Have we quantified them in our existing risk framework?
  2. AI controls – Which current business and process controls effectively address AI-related risks? Where can we make targeted enhancements instead of building new frameworks from scratch?
  3. Organizational readiness – Is AI embedded in our organizational DNA? Are our people, structures and budgets ready for today’s AI and scalable for tomorrow without adding unnecessary layers of complexity?

Addressing these gaps is essential for sustainable, AI-driven growth. The journey starts with uplifting the current governance framework to cover the mandatory elements and enriching risk management with AI-specific aspects – building fully on existing risk categories and taxonomies, and addressing the risks through existing, proven core processes and controls.

With this balanced approach, we believe businesses can now build their innovative strength on trustworthy AI – without any organization-wide heavy lifting or the creation of net-new elements.

In 58% of organizations, leaders only partially support the implementation and integration of AI technologies and initiatives.
(Source: EY European AI Barometer 2025).



Chapter 2

Sustainable AI: the building blocks of responsible integration

How can organizations embed responsible AI practices in their existing governance framework?

To embed AI sustainably and responsibly, organizations must enhance their existing governance structures with a set of focused building blocks for responsible AI. We have distilled these into a practical guide designed to elevate corporate AI governance frameworks while prioritizing sustainability and responsibility. These building blocks are organized into three mutually reinforcing layers: strategy, organization and processes.

1. Start with strategy: set the north star and the guard rails

AI strategy

Successful AI integration begins with firmly anchoring each initiative to the organization’s strategic objectives, core values and risk appetite. This foundational step involves articulating a clear vision that defines the purpose and direction of all AI efforts, ensuring that they align with the organization’s broader goals. By establishing this clarity, organizations can effectively guide their AI initiatives toward meaningful and impactful outcomes.

Drivers of the AI journey / thought leadership group

To cultivate a thriving AI landscape, organizations must harness strategic foresight that anticipates emerging trends and potential risks. This forward-looking perspective not only fuels innovation but also ensures that senior leadership remains aligned on the direction and responsible adoption of AI initiatives. By fostering a shared executive vision, organizations can transform complex implementation challenges into coordinated, value-driven opportunities that maximize the benefits of AI.

AI governance model

Clear decision rights and ownership form the backbone of responsible AI governance. By proactively mapping roles, escalation paths and governance forums, organizations establish a transparent framework that guides all stakeholders. This structured approach fosters seamless cross-functional collaboration, enhances accountability and builds greater trust in each AI initiative – from conception to production. As a result, organizations can navigate the complexities of AI implementation with confidence and integrity.

Governance and oversight

a. Oversight and accountability (RACI, boards, committees)
Effective AI governance begins with clearly defined ownership. Establish who is responsible, accountable, consulted and informed (RACI) at every stage of the model lifecycle, and embed these assignments within the charters of dedicated AI boards and committees. By creating crystal-clear escalation paths, organizations ensure that human oversight is both traceable and defensible, reinforcing accountability throughout the process.
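To make this concrete, here is a minimal sketch of how such RACI assignments could be captured as structured data, so that ownership can be queried and checked automatically; the roles, lifecycle stages and field names are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

# Hypothetical illustration: RACI assignments per AI lifecycle stage.
# Roles and stage names are examples, not a mandated taxonomy.
@dataclass(frozen=True)
class RaciEntry:
    stage: str             # lifecycle stage, e.g., "model development"
    responsible: str       # who does the work
    accountable: str       # who signs off (exactly one owner)
    consulted: list[str]   # two-way input before decisions
    informed: list[str]    # one-way updates after decisions

RACI_MATRIX = [
    RaciEntry("use-case intake", "Business owner", "AI board",
              ["Risk & compliance"], ["Data protection officer"]),
    RaciEntry("model development", "Data science lead", "Head of AI",
              ["IT security"], ["AI board"]),
    RaciEntry("production deployment", "IT operations", "CIO",
              ["Risk & compliance", "IT security"], ["AI ethics board"]),
]

def accountable_for(stage: str) -> str:
    """Return the single accountable owner for a lifecycle stage."""
    for entry in RACI_MATRIX:
        if entry.stage == stage:
            return entry.accountable
    raise KeyError(f"No RACI entry for stage: {stage}")

assert accountable_for("model development") == "Head of AI"
```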

b. AI Ethics Board / external advisory group
Introduce an independent layer of scrutiny by forming an AI Ethics Board or engaging external advisory groups. These specialists provide valuable insights on high-risk or socially sensitive use cases, helping organizations navigate ethical grey areas and enhance public trust in their AI initiatives.

c. Compliance and ethics including regulatory alignment
It is essential to align all AI activities with relevant regulations, internal policies and societal expectations. Organizations should systematically address issues such as bias, transparency, sustainability and auditability, ensuring that every deployment can stand up to regulatory review and stakeholder scrutiny.

d. AI policy including code of conduct
Establish a comprehensive AI policy that aligns with the corporate code of conduct, clearly outlining the “rules of the road.” The policy should specify permissible data use, mandatory controls and individual responsibilities. A well-communicated policy not only deters misuse but also fosters a culture of accountability across the organization, promoting responsible AI practices at all levels.

Benefit realization

Maximizing the return on AI requires more than just optimistic business cases; it necessitates a disciplined ROI governance framework. Begin by securing upfront buy-in from key stakeholders and then track a balanced set of key performance indicators (KPIs) that reflect both value creation – such as revenue growth, cost efficiency and customer experience scores – and risk exposure, including model drift incidents and compliance findings.

Regularly review these metrics at each stage of the AI lifecycle – during the ideation, pilot, scaling and operational phases – to make informed decisions about redirecting funding or recalibrating models as necessary. This proactive, data-driven approach keeps investment aligned with strategic goals, enhances accountability and ensures that AI initiatives deliver sustainable growth rather than devolving into unchecked experimentation.
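As a simplified illustration of such ROI governance, the sketch below pairs value KPIs with risk KPIs on one scorecard and flags when a stage-gate review is needed; the metric names and thresholds are hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical balanced scorecard: value-creation KPIs alongside
# risk-exposure KPIs, checked at each lifecycle stage gate.
@dataclass
class Kpi:
    name: str
    value: float
    threshold: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold

scorecard = [
    Kpi("cost efficiency gain (%)", 12.0, 10.0),             # value creation
    Kpi("customer experience score", 4.1, 4.0),              # value creation
    Kpi("model drift incidents per quarter", 3, 2, False),   # risk exposure
    Kpi("open compliance findings", 0, 0, False),            # risk exposure
]

# Stage-gate decision: redirect funding or recalibrate if any KPI is off track.
off_track = [k.name for k in scorecard if not k.on_track()]
print("Proceed" if not off_track else f"Review required: {off_track}")
```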

2. Equip the organization: people, lines of defense and tooling

To fully unlock AI’s potential, organizations must go beyond simply deploying advanced algorithms; they must empower their teams with the right tools, skills and safeguards. This involves integrating robust technical controls with a culture that prioritizes ethics and fosters stakeholder trust throughout every phase of implementation.

Risk and security management

a. AI risk management
Effective AI risk management involves the identification, assessment, prioritization and mitigation of AI-specific risks. By systematically addressing these risks, organizations can enhance the reliability and fairness of their AI systems, ultimately fostering greater trust among stakeholders.

b. AI security and threat protection
To protect AI systems from both internal and external threats, organizations must implement robust security measures. This includes deploying technical controls, enhancing cybersecurity protocols and ensuring preparedness for potential incidents. A comprehensive security strategy not only safeguards AI assets but also reinforces the integrity of the entire organization.

c. Third-party risk and vendor management
Organizations must apply consistent risk management and ethical standards in their external AI engagements, spanning procurement to operational use. By establishing clear guidelines and monitoring the practices of third-party vendors, organizations can mitigate the risks associated with outsourcing AI functions and ensure that ethical considerations are upheld throughout the engagement process.

Organizational enablement

a. Transforming AI from a pilot initiative to a pervasive force within an organization requires a culture that is literate, inclusive and ethically grounded. To enhance AI literacy, organizations should implement role-specific learning paths. Embedding change management is equally crucial. Organizations should integrate ethical checkpoints, incorporate diversity, equity and inclusion (DE&I) principles into model design and involve employees and customers in governance forums. This way, responsible AI norms become part of the daily workflows. Finally, measuring the cultural shift is essential. Organizations should track indicators such as training completion rates, ethics escalation trends and stakeholder trust scores.

b. Effective IT change management is crucial to maintain the integrity of AI systems. Teams must assess, authorize, implement and review changes in a controlled and coordinated manner. This structured approach minimizes disruption and ensures alignment with risk management, compliance requirements and business objectives.

c. In tandem, robust data governance and quality management are essential for safeguarding AI data throughout its lifecycle. Organizations should ensure that data is secure, private, traceable and suitable for use. By prioritizing these practices, organizations can enhance the reliability of their AI systems and build trust among stakeholders, ultimately driving better business outcomes.

3. Operationalize through replicable processes

In today's rapidly evolving technological landscape, effective operational execution and lifecycle management are essential for the successful deployment and maintenance of AI systems.

Operational execution / lifecycle management

a. AI inventory
Organizations must maintain a comprehensive and up-to-date inventory of all AI systems, detailing their purpose, risk classification and ownership. For high-risk systems, this inventory should also include associated conformity assessments, documentation and logging requirements.
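As a minimal sketch, such an inventory record could be represented as below, assuming a four-tier risk classification loosely inspired by the EU AI Act; the schema and field names are illustrative, not a mandated format.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from enum import Enum

# Illustrative risk tiers; actual classifications depend on the
# applicable regulation and the internal risk taxonomy.
class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AiSystemRecord:
    system_id: str
    purpose: str
    owner: str
    risk_class: RiskClass
    # Extra obligations tracked for high-risk systems only.
    conformity_assessment: str | None = None
    documentation_refs: list[str] = field(default_factory=list)
    logging_enabled: bool = False

    def inventory_complete(self) -> bool:
        """High-risk systems need an assessment, documentation and logging."""
        if self.risk_class is not RiskClass.HIGH:
            return True
        return (self.conformity_assessment is not None
                and bool(self.documentation_refs)
                and self.logging_enabled)

record = AiSystemRecord("cr-001", "credit scoring", "Retail banking",
                        RiskClass.HIGH, logging_enabled=True)
assert not record.inventory_complete()  # assessment and docs still missing
```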

b. AI planning and design
Governance in AI planning is essential, guided by thorough impact and risk assessments, risk-based design principles, stakeholder alignment and structured evaluations of use cases. This proactive approach ensures that AI initiatives are aligned with organizational goals and risk tolerance.

c. AI development and testing
During the development and testing phases, organizations should focus on building and validating AI systems with documented designs that prioritize fairness, privacy and risk mitigation before deployment. This diligence helps to ensure that AI systems are robust and compliant with ethical standards.

d. AI deployment and controls
When it comes to deployment, it is vital to ensure a secure, transparent and controlled process. This includes oversight, necessary sign-offs and post-deployment safeguards to monitor the system’s performance and compliance.

e. Monitoring and performance tracking
Continuous monitoring is crucial for maintaining the integrity of AI systems. Organizations should utilize key performance indicators (KPIs), user feedback and risk indicators to detect issues such as bias, drift and performance degradation. Maintaining explainability and interpretability throughout this process enables responsible decision-making regarding system decommissioning when necessary.
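One common input-drift indicator such monitoring could use is the population stability index (PSI); the sketch below computes it for a single feature against its training-time baseline. The 0.2 alert threshold is a widely used rule of thumb, not a universal standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's live distribution against its baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
live = rng.normal(0.4, 1.2, 10_000)      # shifted production data

psi = population_stability_index(baseline, live)
if psi > 0.2:  # common rule-of-thumb alert level
    print(f"PSI = {psi:.3f}: significant drift - escalate for review")
```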

f. AI incident management and business continuity management
Proactive AI incident management and business continuity planning are essential for organizational resilience. How an organization prepares for, responds to and recovers from AI-related incidents and crises can determine whether it maintains trust and operational continuity.
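As a simplified sketch of how such preparedness can be codified, the example below maps hypothetical incident types to response tiers with predefined fallback actions; the categories, tiers and actions are illustrative assumptions, not a prescribed playbook.

```python
from enum import Enum

class Severity(Enum):
    LOW = 1     # log and review at the next governance forum
    MEDIUM = 2  # notify the model owner, tighten monitoring
    HIGH = 3    # switch to fallback (e.g., human review), escalate

def triage(incident_type: str, customer_facing: bool) -> Severity:
    """Map an AI incident to a response tier (illustrative rules only)."""
    if incident_type in {"harmful_output", "data_leak"}:
        return Severity.HIGH
    if customer_facing and incident_type == "bias_alert":
        return Severity.HIGH
    if customer_facing and incident_type == "model_drift":
        return Severity.MEDIUM
    return Severity.LOW

assert triage("model_drift", customer_facing=True) is Severity.MEDIUM
assert triage("harmful_output", customer_facing=False) is Severity.HIGH
```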

g. AI controls assurance and risk and control self-assessment (RCSA)
To uphold the effectiveness of AI systems, organizations should periodically validate control effectiveness, fairness and bias mitigation through assessments and self-checks. This ongoing assurance process helps to identify areas for improvement.

h. Improvement loop
Finally, it is vital to institutionalize learning and improvement mechanisms for AI systems, models and controls over time. This improvement loop fosters a culture of continuous enhancement, ensuring that AI initiatives evolve in line with best practices and organizational objectives.


Chapter 3

Building trust in AI: a comprehensive lifecycle blueprint

How can organizations effectively navigate the AI lifecycle to ensure trustworthy and sustainable innovation?

AI is not yet ingrained in the DNA of every organization. Before advancing further along the innovation curve, leaders must ask: Is our current governance structure equipped to handle the level of AI we are using today, and can it scale effectively without producing unnecessary complexity?

We have developed an AI Lifecycle Blueprint that provides clarity on this critical question. It outlines each phase of an AI initiative, defining objectives, activities and key stakeholders, so that organizations can establish the right level of governance to support today’s use cases while allowing for future growth and innovation.

Our AI Lifecycle Blueprint reveals the new role and activity gaps that AI introduces, while affirming that most controls can leverage existing frameworks. The building blocks provide a comprehensive overview; the Blueprint identifies phase-specific risks and the actions they require.

The AI Lifecycle Blueprint: objectives, activities and key stakeholders

1. Plan

During the planning phase, the primary objective is to establish a strong foundation for the project. This involves setting clear objectives, engaging key stakeholders and assessing both technical and data feasibility. Additionally, it is essential to identify potential risks and outline corresponding mitigation strategies, as well as to develop AI-specific governance and communication plans.

To achieve this, collaboration among business functions, risk and compliance experts and data/AI teams should begin from the outset.

2. Design

In this phase, teams work to define the target architecture, map the workflows, prepare test data, select and document the AI model and develop risk mitigation measures. Privacy, security and explainability-by-design principles are integrated from the very beginning, while UX guidelines inform the creation of intuitive user interfaces. Success in this phase relies on close collaboration among IT change management, security and compliance, data and AI teams and relevant business functions.

3. Build

The building phase focuses on implementing and configuring the necessary infrastructure and system components. This includes developing system functionalities and checks, setting up and training the selected AI model and integrating security measures and risk mitigation strategies to ensure the model’s robustness. Additionally, this phase establishes continuous integration/continuous deployment (CI/CD) pipelines to facilitate efficient development and deployment processes. In this phase, key stakeholders include IT change management, security and compliance teams, data and AI specialists and relevant business functions.

4. Test

During the testing phase, it is essential to validate the system requirements and the security of the AI model. This validation process also extends to assessing the model’s performance and fairness. These steps ensure that the model meets standards for explainability, security and privacy. Additionally, the findings of the validation and assessment must be thoroughly documented, as a basis for informed go/no-go decisions. IT change management, data and AI teams, risk and compliance experts as well as the relevant business functions are involved in this phase.
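To illustrate one such fairness assessment, the sketch below computes the demographic parity difference on toy validation data as part of a go/no-go check; the metric choice and the 0.1 tolerance are illustrative assumptions that depend on the use case and applicable regulation.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-outcome rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy validation data: model decisions plus a protected attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(preds, groups)
TOLERANCE = 0.1  # illustrative threshold agreed at the test gate
verdict = "go" if gap <= TOLERANCE else "no-go: document and remediate"
print(f"Parity gap: {gap:.2f} -> {verdict}")
```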

5. Deploy

In this phase, secure deployment of the system is achieved by implementing robust monitoring and oversight measures. This includes establishing a formalized deployment plan, developing fallback strategies for AI and ensuring that monitoring, traceability and logging mechanisms are in place. Additionally, compliance measures must be implemented, and a final security review is essential to confirm that all protocols are met and that the AI model operates securely and effectively. Here, IT operations, data and AI teams, security and compliance experts as well as the relevant business functions are involved.
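A minimal sketch of the traceability and logging idea: emit one structured audit record per model decision so that outcomes can be reconstructed later. The JSON schema and field names are illustrative assumptions, not a prescribed format.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai.audit")

def log_prediction(model_id: str, model_version: str,
                   inputs: dict, output, confidence: float) -> str:
    """Emit one audit record per prediction; returns a trace ID."""
    trace_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,  # or a hash, if the inputs are sensitive
        "output": output,
        "confidence": confidence,
    }))
    return trace_id

log_prediction("credit-scoring", "2.3.1",
               {"income": 85_000, "tenure_months": 42}, "approve", 0.91)
```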

6. Monitor, improve and maintain

The final phase focuses on the continuous monitoring and enhancement of the AI system’s performance, stability and drift. In the event of an incident, the organization must respond promptly and implement the necessary improvement measures. Additionally, compliance must be maintained through regular reviews, and user support and training should be provided to ensure effective use of the system. Moreover, fostering adoption of AI is crucial, and conducting bias audits will help identify and mitigate any unintended biases in the system. The key stakeholders in this last phase are IT operations, data and AI teams, security and compliance experts, the relevant business functions and AI governance and ethics teams.


Chapter 4

Seamless integration of trustworthy AI: embedding responsible AI into existing processes and controls

How can organizations seamlessly weave AI-specific governance into today’s processes – without slowing innovation?

Enabling innovation built on trustworthy AI is an ongoing journey rather than a one-time initiative. A well-defined AI risk management governance framework serves as the foundation for establishing clear priorities for a sustainable and accountable rollout of AI technologies. By identifying risks at an early stage and aligning them with appropriate controls, organizations can bolster stakeholders’ trust, proactively address evolving regulatory requirements and ensure the resilience of their AI capabilities over time.
 

During the development of our AI Lifecycle Blueprint, presented above, we identified specific gaps in key activities and roles that relate directly to the use of AI and therefore require extensions to the existing AI governance framework. At the same time, our analysis revealed that most of the steering and control mechanisms required can build on processes already in place. While the building blocks serve as an integrative reference model, the AI Lifecycle Blueprint uncovers phase-specific requirements across the entire lifecycle, enabling organizations to precisely locate residual risks and identify any related need for action.
 

To address the AI-specific gaps in key activities, roles and corresponding control mechanisms, organizations should leverage public reference papers. Key sources include the NIST AI Risk Management Framework, ISO/IEC 23894 (AI Risk Management), ISO/IEC 42001 (AI Management System), ISO/IEC 42005 (AI Impact Assessment) and the AI principles issued by the OECD and IOSCO. These frameworks effectively translate abstract risk categories into operational controls that can be seamlessly integrated into existing governance structures.
 

Many Swiss institutions already hold ISO/IEC 27001 certification and have aligned their cybersecurity practices with the NIST Cybersecurity Framework. They are therefore well positioned to take the next step, as the essential control instruments are largely in place – they simply require enhancement with AI-specific elements to ensure comprehensive risk management.

Summary

Trust – not speed – drives lasting AI value. After all, sustainable returns come from responsible AI governance that ensures user protection and regulatory compliance. Organizations must identify gaps in relation to AI risks, controls and readiness, and enhance existing frameworks rather than rebuilding them. By applying responsible AI building blocks across strategic, organizational and process layers and using a phased AI Lifecycle Blueprint, organizations can integrate AI-specific risk management to enable faster and safer scaling while protecting stakeholder trust and delivering sustainable ROI.

Acknowledgement

Many thanks to Melvin Carmona, Robin Bechtiger and Maximilian Zihlmann for their valuable contributions to this article.





