How can you make the AI of today ready for the regulation of tomorrow?

Authors
Harvey Lewis

Partner, Client Technology & Innovation, Ernst & Young LLP

Chief Data Scientist for Tax, designing and developing AI systems for tax and law professionals. Honorary Senior Visiting Fellow at the Bayes Business School, City, University of London.

Sofia Ihsan

EY Global Responsible AI UK&I Consulting Leader

Trusted advisor. Passionate problem solver for EY clients. Constant learner. Mother of three amazing children. Music lover.

Mira Pijselman

Senior Consultant, Technology Risk, Ernst & Young LLP

Digital ethicist, researcher, consultant. Unlocking technology’s value through responsible innovation. Dedicated to empowering the next generation of socio-technical talent.

Laura Henchoz

EY UK Client Technology Markets Leader; Director, Consulting, Ernst & Young LLP

Commercially minded product marketing leader. Driven by challenge. Fitness enthusiast. Torchbearer for the PyeongChang Olympics 2018.

13 minute read · 22 Aug 2023

Balancing generative AI’s potential with its risk and regulatory complexities requires a flexible and principles-based approach.

In brief
  • For businesses navigating the complexities of generative AI and regulation, a principles-based approach offers a flexible way to manage risk and foster trust.
  • Embracing a trusted AI framework empowers businesses to harness the power of AI, benefiting stakeholders and shaping a more equitable and sustainable future.

For over a decade, companies have drawn on the capabilities of artificial intelligence (AI) in myriad narrow use cases, including customer service chatbots, financial fraud detection and personalised e-commerce recommendations. As AI systems continue to engage with employees, customers and the public across various sectors, it is essential that they embody the same ethics and values expected of the organisations and people for whom they work. Trust and positive customer experiences hinge on the successful implementation of AI ethics.

In recent months, new generative AI technologies and foundational models have been making headlines, such as OpenAI's GPT-4.[1] These innovations unlock a vast range of transformative use cases, creating opportunities for organisations whilst also presenting fresh challenges. Governments worldwide are taking notice, as illustrated by the European Parliament's draft of the EU Artificial Intelligence Act and the UK's pro-innovation white paper on AI regulation, both of which have taken these new foundational models into account.[2,3]

  • What is generative AI?

    OpenAI’s ChatGPT defines generative AI as a subset of artificial intelligence capable of creating new content, ideas or solutions by using advanced algorithms and deep learning techniques. Unlike traditional AI systems that focus on analysis, prediction or classification, generative AI systems have the unique capability to synthesise novel outputs.

    Some prominent examples of generative AI include:

    • Text generation: OpenAI's GPT-4, a state-of-the-art large language model, generates text in response to prompts and can even draft full articles, all whilst maintaining context and coherence.

    • Image synthesis: Stable Diffusion can create high-quality, photorealistic images of faces, objects or scenes, which are entirely fictional but appear genuine.

    • Drug discovery: Generative AI models can accelerate the drug discovery process by designing novel molecules with desired properties, aiding in the development of new pharmaceutical treatments.

However, the growing imperative to regulate AI has spawned a multifaceted patchwork of approaches globally, complicating matters for businesses already grappling with new AI risks. Consequently, AI governance remains largely uncharted territory for corporations: boardrooms and C-suites have yet to formally consider and codify AI ethics and values, leaving businesses vulnerable to reputational risks and potential regulatory penalties. Moreover, as generative AI technologies continue to evolve in new and unpredictable ways, many of the assumptions underlying these draft regulations and nascent corporate governance approaches do not always hold. For instance, traditional AI systems were designed to perform specific tasks, which allows for targeted, sector-specific regulation and governance. In contrast, users can leverage generative AI to produce text, images, speech and even music across a broad range of domains and use cases, making it difficult to establish a one-size-fits-all framework.

Organisations' exposure to risk is intensifying, underscoring the urgent need to prepare for future AI regulation and develop robust governance frameworks. So, how can companies navigate this increasingly dynamic technology and regulatory landscape, and what steps can business leaders take to establish effective AI governance frameworks? What are the critical questions organisations should ask to brace themselves for the future of AI regulation and the ethical challenges it brings?

In this article, we will delve into the world of AI regulations and explore the challenges for organisations when creating effective AI governance frameworks. We will also provide key insights on the steps businesses need to take to ensure they are ready for tomorrow's regulation whilst accounting for the distinctive nature of generative AI.


Chapter 1

Navigate the regulatory maze

From ethical principles to tangible policies

As the adoption of AI accelerates, permeating products and services across both private and public sectors, legislators and regulatory bodies worldwide are working hard to keep pace. Countries have been quick to recognise AI as a catalyst for economic growth, but governments also acknowledge its potential impact on citizens, society and our broader environment, as well as the importance of adapting or augmenting existing regulatory frameworks to safeguard established rights.

In the wake of intense public discourse between 2016 and 2019, a global consensus has emerged among governments, businesses and NGOs on the core ethical principles guiding AI usage. The AI Principles of the Organisation for Economic Co-operation and Development (OECD), adopted by the G20 in 2019, exemplify this agreement.[4] In an historic move, all 193 UNESCO Member States endorsed the first-ever global standard-setting instrument on AI ethics in November 2021.[5]

Now, leading nations and international organisations are diligently translating these principles into actionable regulatory approaches. By early 2023, trailblazers in AI regulation, including the EU, US, UK, Canada, Japan, South Korea, Singapore and China, had either proposed new legislation or published comprehensive guidelines to govern this transformative technology.

  • What are the OECD’s AI Principles?

    The OECD's AI Principles[6] offer a strategic framework to guide businesses in the responsible development and deployment of artificial intelligence. The principles emphasise five key elements:

    • Inclusive growth, sustainable development and well-being: AI should be designed to promote economic growth, social welfare and environmental sustainability, ensuring that its benefits are widely accessible and contribute to societal improvement.

    • Human-centred values and fairness: AI systems should respect human values, rights and dignity, be developed with the goal of reducing discrimination and bias, foster inclusivity and diversity and ensure a fair distribution of benefits.

    • Transparency and explainability: Businesses should prioritise the transparency of AI systems, ensuring that stakeholders can understand and interpret the decision-making processes behind these technologies. This fosters trust and enables effective oversight and accountability.

    • Robustness, security and safety: AI systems should be designed to be robust, secure and safe throughout their lifecycle, with businesses striving to minimise potential risks and vulnerabilities and ensuring system resilience to errors or malicious actions.

    • Accountability: Businesses should implement mechanisms to ensure that they are accountable for the proper functioning of AI systems, including addressing any adverse effects and complying with relevant laws and regulations.


Chapter 2

Striking the right balance

How can governments create regulatory objectives without stifling innovation?

Given AI's vast array of application areas and its potential impact on citizens and society, it's crucial to strike a balance between sector-agnostic baselines and sector-specific rulemaking to address different needs and contexts. The question is, what’s the right balance?

The pattern is decidedly more sector-agnostic in countries and blocs such as the US, EU, Canada, Japan, Singapore and China, where policy initiatives establish overarching regulatory objectives, whilst additional sectoral work creates or amends regulations in areas such as medical devices, industrial machinery, public sector AI usage, agriculture, food safety, financial services and internet information services. For instance, the US's Blueprint for an AI Bill of Rights, the EU's AI Act and China's Ethical Norms for New Generation AI lay the foundations for sector-agnostic policies.[7,8,9]

The primary mechanism for maximising cross-sector coherence within these proposals is the ‘risk-based approach’ to AI regulation. A leading example is the EU’s AI Act, which adjusts the degree of regulatory compliance required based on the classification of risk: whilst most AI poses little or no risk, high-risk systems, such as those used in critical national infrastructure or in safety-related applications, will be subject to the strictest obligations.[10]
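
To make the risk-based approach concrete, the following minimal Python sketch shows one way an organisation might encode a risk tiering of its AI use cases, loosely modelled on the EU AI Act's four-tier classification (unacceptable, high, limited and minimal risk). The tier names follow the Act, but the example obligations, the AIUseCase fields and the classify_use_case triage rule are illustrative assumptions for this sketch, not a statement of the Act's actual requirements.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely modelled on the EU AI Act's classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strictest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # little or no regulatory burden

# Illustrative obligations per tier -- an assumption for this sketch,
# not the Act's actual requirements.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "logging and traceability"],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

@dataclass
class AIUseCase:
    name: str
    domain: str
    safety_related: bool = False

def classify_use_case(use_case: AIUseCase) -> RiskTier:
    """Toy triage rule: safety-related or critical-infrastructure
    systems are treated as high risk; everything else as minimal."""
    if use_case.safety_related or use_case.domain == "critical_infrastructure":
        return RiskTier.HIGH
    return RiskTier.MINIMAL

for uc in (AIUseCase("customer service chatbot", "retail"),
           AIUseCase("grid load balancer", "critical_infrastructure", True)):
    tier = classify_use_case(uc)
    print(f"{uc.name}: {tier.value} -> {', '.join(OBLIGATIONS[tier])}")

In practice, the classification criteria would be derived from the Act's annexes and sector-specific guidance rather than from a single rule; the point of the sketch is that a risk-based regime maps each use case, not each model, to a set of obligations.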

In contrast, the UK’s pro-innovation approach to AI regulation shifts the balance towards sector-based regulation, with additional coordination from government to support regulators on issues requiring cross-cutting collaboration, such as monitoring and evaluating the framework’s effectiveness, assessing risks across the economy and providing education and awareness to give clarity to businesses.[11] The UK’s approach recognises that regulation is not always the most effective way to support responsible innovation; instead, regulation is aligned with and supplemented by a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards.

Challenges faced by businesses

In the face of the shifting regulatory landscape, businesses must confront several challenges as they integrate AI technologies into their operations:

  • Keeping up with technology changes. As generative AI technologies like GPT-4 continue to advance, businesses must question their underlying assumptions about existing AI risks, which are likely to have been based on discrete use cases and data.

  • Keeping up with regulatory changes. Businesses must stay informed and agile as they adapt to the ever-changing AI regulatory environment, which can be a daunting task given the speed at which new policies and guidelines are introduced. 

  • Allocating resources for compliance. Ensuring that organisations remain within the boundaries of various AI regulations can be resource-intensive, requiring businesses to allocate time, personnel, finances or independent reviewers to meet a diverse set of requirements.

  • Combining innovation with ethical considerations. Companies must recognise that ethical design drives growth and innovation because systems that adhere to ethical principles and regulations tend to be higher performing whilst also protecting customers and society.

  • Managing potential liabilities arising from generative AI use. As organisations further integrate AI into business operations, companies must navigate the potential legal liabilities and reputational risks that may arise from deploying these technologies.

  • Navigating different ethical regimes as well as cross-border legal and regulatory requirements. For businesses operating internationally, remaining sensitive to and complying with ‘softer’ cultural norms as well as myriad cross-border legal and regulatory requirements can be a complex and challenging undertaking.


Chapter 3

Turn principles and policies into trust

A principles-based framework can help organisations create common ethical standards.

In today's rapidly evolving technology landscape, creating trusted AI systems urgently requires organisations to implement a flexible, principles-based approach. Such a framework would offer a systematic way for businesses to ensure that their AI systems adhere to the common ethical standards and best practices demanded by governments, whilst providing clear actions for dealing with the tailored requirements of particular jurisdictions or sector-specific regulators. 

Seven steps for operationalising trusted AI:

  1. Establish a consistent ethical framework.
    Develop an ethical framework tailored to your organisation, drawing on principles already established by the business, the OECD's AI Principles or guidance from an independent reviewer as a foundation. This framework should provide clear guidance on ethical goals, considerations and boundaries within the context of the company and the industry sector in which it operates.

  2. Create a cross-functional team.
    Assemble a diverse, multi-disciplinary team with representation from various areas, such as domain experts, ethicists, data scientists, IT, legal, human resources, technology risk and compliance. This team will oversee the implementation of your ethical framework, allowing the business to align AI technologies, including generative AI, with pertinent values, such as inclusivity, transparency, robustness and accountability, ultimately fostering trust and supporting positive societal and environmental impact.

  3. Build an inventory of current AI systems.
    The risk and internal audit functions in many organisations remain largely unaware of the scale at which AI systems are deployed across the enterprise. Creating a baseline inventory of AI systems and their data, together with a consistent framework for assessing the inherent risk of each AI use case, should guide the level of governance and control required to mitigate that risk and maximise value (a minimal illustrative sketch of such an inventory follows these seven steps). Available guidance in this area is largely based on draft regulation that seeks to protect human beings and the environment; organisations must not forget to consider commercial risk as well.

  4. Develop clear AI auditing procedures.
    Create a set of guidelines that translate your ethical framework into practical, actionable steps for AI developers and engineers, as well as those who use AI to partially or fully automate their activities. These guidelines should encompass the entire AI lifecycle, from design to deployment, addressing data collection, model development, performance monitoring and third-party risks.

  5. Integrate ethics into AI development.
    Embed ethical considerations into every stage of the AI development process, ensuring that developers, engineers, product owners and users understand the legal and ethical considerations of AI they are building or buying and their responsibility to apply appropriate safeguards. This might include implementing ethical checkpoints or gate-based reviews at crucial development milestones and incorporating ethics-based metrics and KPIs to evaluate AI performance and impact on business outcomes.

  6. Build awareness and training.
    Ensure that everyone in the organisation, from business leaders to back-office professionals, is aware of AI and the ethical principles associated with its development and use. In our experience, although ethical frameworks are essential, they can sometimes fail to become properly embedded and operationalised when leadership is not fully appreciative of the risks.

  7. Monitor and continuously improve.
    Consider an independent, regular audit of AI systems to assess their ethical performance, addressing any shortcomings or adverse effects. Maintain a central inventory of AI systems to support risk management and regulatory compliance. Additionally, gather feedback from stakeholders and users to refine the AI auditing guidelines, ensuring that the organisation’s ethical framework remains relevant and up to date.
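
To illustrate steps 3 and 7, here is a minimal Python sketch of how a central AI system inventory with a per-use-case inherent-risk assessment might be structured. All field names, risk factors, weights and governance thresholds are hypothetical assumptions chosen for illustration; a real framework would derive them from the organisation's own ethical framework, commercial risk appetite and applicable regulation.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical risk factors and weights -- a real framework would derive
# these from the organisation's ethical framework and applicable regulation.
RISK_FACTORS = {
    "affects_individuals": 3,  # decisions with direct human impact
    "uses_personal_data": 2,
    "generative_output": 2,    # free-form text, image or audio generation
    "third_party_model": 1,    # e.g., an externally hosted foundation model
}

@dataclass
class AISystemRecord:
    """One entry in the central AI inventory (illustrative fields only)."""
    name: str
    owner: str
    purpose: str
    risk_flags: set = field(default_factory=set)
    last_audit: Optional[date] = None

    def inherent_risk_score(self) -> int:
        return sum(RISK_FACTORS.get(flag, 0) for flag in self.risk_flags)

    def governance_tier(self) -> str:
        # Toy thresholds: higher scores demand stronger controls.
        score = self.inherent_risk_score()
        if score >= 5:
            return "enhanced oversight (ethics review and regular audit)"
        if score >= 3:
            return "standard controls (documented checkpoints)"
        return "baseline monitoring"

inventory = [
    AISystemRecord("CV screening assistant", "HR", "shortlist candidates",
                   {"affects_individuals", "uses_personal_data"}),
    AISystemRecord("Marketing copy generator", "Marketing", "draft campaign copy",
                   {"generative_output", "third_party_model"}),
]
for record in inventory:
    print(f"{record.name}: score={record.inherent_risk_score()}, "
          f"tier={record.governance_tier()}")

A design choice worth noting: the inventory record, not the model artefact, is the unit of governance, so the same underlying model deployed in two use cases can carry two different risk scores and governance tiers.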

References

    1. “GPT-4”, OpenAI (openai.com), accessed 2 May 2023
    2. “EU lawmakers pass draft of AI Act, includes copyright rules for generative AI”, VentureBeat (venturebeat.com), accessed 2 May 2023
    3. “A pro-innovation approach to AI regulation”, GOV.UK (www.gov.uk), accessed 2 May 2023
    4. “The OECD Artificial Intelligence (AI) Principles”, OECD.AI (https://oecd.ai/en/ai-principles), accessed 2 May 2023
    5. “Ethics of Artificial Intelligence”, UNESCO (www.unesco.org), accessed 2 May 2023
    6. “The OECD Artificial Intelligence (AI) Principles”, OECD.AI (https://oecd.ai/en/ai-principles), accessed 2 May 2023
    7. “What is the Blueprint for an AI Bill of Rights?”, OSTP, The White House (www.whitehouse.gov), accessed 2 May 2023
    8. “The Artificial Intelligence Act”, The AI Act (https://artificialintelligenceact.eu/), accessed 2 May 2023
    9. “Ethical Norms for New Generation Artificial Intelligence Released”, Center for Security and Emerging Technology (georgetown.edu), accessed 2 May 2023
    10. “The Artificial Intelligence Act”, The AI Act (https://artificialintelligenceact.eu/), accessed 2 May 2023
    11. “A pro-innovation approach to AI regulation”, GOV.UK (www.gov.uk), accessed 2 May 2023

Summary

Amid a patchwork of proposed regulations and the rise of generative AI, businesses face the daunting challenge of building trust in their AI-driven products and services. This requires a proactive approach to managing risk and a culture of responsibility. A principles-based framework for trusted AI offers a flexible solution to navigating the complexities of AI ethics and regulation.

By adopting such a framework, organisations can demonstrate their commitment to transparency, accountability and fairness and drive AI-powered innovation that benefits stakeholders and shapes a more equitable future.

GPT-4 was used for help with wording, formatting, and styling throughout this work.
