The path forward: governing AI with insight and integrity

This article is authored by:

Yvonne Zhu, Partner, Assurance, Technology Risk, EY Canada
Karelyn Murray, Senior Manager, EY Canada
Abhishek Chowdhury, Senior Manager, EY Canada

Canadian boards can drive trust and innovation by prioritizing responsible AI governance and aligning it with strategic goals.

In brief

  • Responsible AI governance helps Canadian boards manage risk while unlocking innovation and competitive advantage.
  • Boards must align AI use with strategy, build trust, and foster a culture of ethical, transparent AI adoption.
  • Prioritizing responsible AI enables sustainable growth and prepares organizations for evolving regulatory landscapes.

Canadian boards that address AI as both a strategic asset and systemic risk can empower organizations to build trust, transparency and a competitive advantage.

Boards can help organizations enable responsible AI, tapping into immense possibility

The EY AI Sentiment Index Study shows 82% of people are already using AI to improve how they live and work. Some of the most promising AI applications align with areas where businesses are actively developing solutions.

From AI-driven financial wellness to symptom diagnosis in health care to easier access to customer support, opportunities abound.

Still, only 57% of survey respondents say they’re comfortable with AI. Much of that discomfort comes down to trust. How so?

Trust is foundational for humans, influencing decisions, behaviours and actions. In the AI context, unlocking the technology’s true potential requires systems to earn and maintain people’s trust through principles such as transparency, fairness, accountability and reliability. In essence, responsible AI isn’t just about building intelligent systems; it also means designing, developing and deploying AI in a way that earns and sustains human trust, in alignment with human values and societal norms.

Absent these factors, meaningful gaps can emerge, undermining AI adoption, trust, engagement and the transformative value this technology has come to represent for Canadian businesses.

By addressing fears around misinformation, bias and privacy, organizations can go beyond mitigating risk to employing AI as a catalyst for human ingenuity, imagination and progress, not to mention sustainable AI value creation. This becomes a licence to lead and improves workplace culture, which the EY 2024 Work Reimagined Survey shows accounts for 40% of an organization’s health score.

For instance, using AI to automate manual tasks frees people up to focus on engaging, higher-value work. This strengthens culture and opens new ways for people to learn, future-proof their careers and drive innovative business growth.

Inaction not only mutes those opportunities; it can generate tremendous risk, chip away at stakeholder trust and lead to costly outcomes. What’s more, boards tempted to wait and see what transpires on the regulatory front could compound those risks.

Although Canada was in many senses a first mover on AI regulation, including the proposed Artificial Intelligence and Data Act tabled in 2022, regulatory momentum on the domestic front has slowed while international players have forged ahead. All the while, AI continues to shape-shift, converging with other emerging technologies such as quantum computing, blockchain and the Internet of Things. This evolution continues to present boards with new AI-related priorities beyond regulatory compliance. Left ungoverned, AI erodes value and creates a host of reputational, operational, cybersecurity, privacy, legal and ethical risks.

As corporate stewards, boards that embrace the dual lens of opportunity and oversight can play an integral role in building enduring trust, fulfilling a fiduciary duty that inaction would breach and creating a competitive advantage in an AI-driven economy.

Boards must lead with vision, vigilance and a sense of responsibility

Closing the trust gap and unleashing responsible AI’s transformative value requires organizations to lead with vision, vigilance and a sense of responsibility. That means treating responsible AI as far more than an item on a compliance checklist.

Aligning AI use with organizational strategy and user expectations is foundational to making the most of this opportunity. An effective approach to responsible AI enables businesses to roll out this evolving technology at scale, increase return on investment, support business results and create value.

Four leading practices to initiate richer AI discussions and prioritize responsible AI at the board level

Boards should plan for the continued evolution of technology, helping refocus the business on the right strategic priorities with ROI and scale in mind. Now is the time for boards to ask C-suite executives meaningful questions about AI and to critically examine what’s been done to build AI trust and confidence, internally and externally, and what must happen next.

Keeping these four leading practices in mind can help boards elevate responsible AI on the agenda and make progress now:

1. Prioritize AI as an organization-wide, cross-functional boardroom imperative. 

Giving AI permanent space on the board’s agenda is critical. This reinforces that AI is not a standalone, one-and-done, back-office concern. Rather, it’s a strategic value driver that must be deployed, used and monitored as part of a connected plan to mitigate risk and realize value. 

This includes establishing a governance framework that integrates AI considerations into decision-making processes at the board level. The conversation should focus on how to create cross-functional teams that can assess AI’s impact on various business areas so AI investments support the organization’s long-term goals and risk management strategies to drive sustainable value. 

2. Understand the current and desired future state of AI adoption.

Boards need a clear understanding of where and how AI is being deployed now, as well as the desired future state. They must also feel confident stress-testing potential embedded or shadow AI usage. Through this process, the board can help define the organization’s risk appetite, setting the tone and rigour for AI governance overall and developing a roadmap for responsible AI.

The EY.ai Maturity Model is designed to help an organization visualize its current AI maturity across seven dimensions and to provide recommended actions to get to the next level. An AI governance maturity assessment and AI data readiness assessment can assist organizations in identifying gaps, scaling and optimizing the AI governance journey to align with the organization’s AI adoption maturity level. With this insight, boards can not only benchmark against standards and frameworks, but also establish that the AI governance approach in place is fit for purpose. 

3. Foster a culture of responsible AI.

Tone from the top and grassroots awareness are equally important. At the board level, directors should be educated on emerging trends and become AI literate. Boards play critical leadership and oversight roles, challenging beliefs about what rigour looks like in the digital age. Regular tabletop reviews and simulations can help the board and C-suite prepare for AI incidents and crisis response. Beyond the boardroom, people across the organization will need upskilling opportunities to build and improve AI literacy and risk awareness. This can be achieved through broad-based training programs that cover AI fundamentals, applications and associated risks, enabling employees to engage confidently with AI technologies and navigate their complexities responsibly and effectively.

4. Explore additional applications, like agentic AI, as part of a continuous improvement and innovation journey.

Additional AI applications, such as agentic AI, are gaining momentum and beginning to appear within open-source tools. The interplay between humans and technology becomes more consequential as the level of AI autonomy increases. As AI technologies continue to create new opportunities and introduce additional risks, existing regulations will change and new ones will emerge. This dynamic environment means your responsible AI approach and broader technology strategy must stay flexible and adaptable to keep pace with rapid change and the organization’s business strategy.

Incorporating benchmarking into your plans helps identify weak spots. More important, maintaining an ongoing dialogue about responsible AI will keep it a priority as the organization navigates its transformation journey. 

Summary

Boards must actively engage in the oversight of responsible AI initiatives to fully realize AI’s value for the organization. This includes promoting a culture of responsible and ethical AI adoption and use, aligning AI strategy with organizational objectives and maintaining transparent communication with stakeholders. By doing so, boards can not only manage risks but also harness AI as a significant driver of innovation and competitive advantage, ultimately fostering sustainable growth in an increasingly AI-driven economy.
