
Establishing practical AI governance for compliance and legal

Compliance and legal teams are deploying governance frameworks that balance AI innovation with safety, reliability and legal standards.


In brief

  • Artificial intelligence (AI) technologies are evolving at such a pace that they can bring operational, legal and reputational risks if not effectively monitored.
  • An AI governance framework provides a practical approach to identify, assess and mitigate issues, and promote the development of trustworthy AI systems.

As businesses integrate AI technologies into their operations, compliance and legal executives are examining governance and processes that allow organizations to innovate with AI responsibly.

Given the pace of advancement and legislation (more than 1,000 pieces of US legislation impacting AI governance have been introduced in the 2025 legislative session), establishing AI governance is no easy feat for compliance professionals, for whom AI is one of several emerging areas of focus.

As AI continues to evolve, from GenAI to agentic AI and now neurosymbolic AI, the most effective AI governance programs are those that are robust, efficient and built to respond quickly.

Existing AI governance frameworks help address data privacy, business risks, legal standards and compliance obligations associated with AI technologies. For example, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a structured approach that has been cited by regulatory bodies, including the US Department of Justice. However, to create an effective program, leaders must identify and evaluate the risks associated with their own AI technologies.

That starts with understanding the landscape of AI applications within the organization, so that compliance executives can work with the business to better identify potential risks and areas for improvement. The inventory should identify the technologies, assess the associated risks and consider each technology’s alignment with the organization’s responsible AI principles. This helps an organization answer common questions: Does the company allow third parties to design and maintain AI-enabled apps, or does it keep all AI systems internal? If a model is making decisions, is an appropriate human validating those decisions? How does the organization know that the model continues to perform as expected? What is the process for modifying the system if regulatory or other stakeholder requirements evolve?
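The inventory questions above can be captured as structured records so gaps surface automatically. The following is a minimal sketch only; the record fields, risk tiers and flagged questions are illustrative assumptions, not a prescribed standard or any specific framework's schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI inventory (illustrative fields)."""
    name: str                # e.g., "claims-triage-model"
    owner: str               # accountable business owner
    third_party: bool        # designed/maintained by a vendor?
    makes_decisions: bool    # does the model drive decisions?
    human_in_the_loop: bool  # is an appropriate person validating outputs?
    risk_tier: str           # e.g., "low" / "medium" / "high" (assumed tiers)
    review_notes: list[str] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """Flag common governance questions this record leaves open."""
        issues = []
        if self.makes_decisions and not self.human_in_the_loop:
            issues.append("decision-making system lacks human validation")
        if self.third_party:
            issues.append("confirm vendor covers monitoring and updates")
        return issues

# Example: a vendor-built model making decisions without human review
record = AISystemRecord(
    name="claims-triage-model", owner="claims-ops", third_party=True,
    makes_decisions=True, human_in_the_loop=False, risk_tier="high",
)
for issue in record.open_questions():
    print(issue)
```

A structured inventory like this makes it straightforward to report, for instance, every high-risk decision-making system that lacks human validation.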


One of the most pressing issues is limited technical expertise within teams. Executives often find that they need to consult with technical teams to understand complex topics, such as model drift. Including business, legal, compliance and technical teams in the governance process helps to develop a comprehensive understanding of risks and mitigation strategies.

As AI becomes more integrated into daily operations, employees will expect to leverage AI tools, but they also seek guidance. Organizations are encouraged to provide safe and compliant tools, along with clear communication about AI compliance requirements.

These discussions will evolve as organizations adjust to new technologies. “AI governance is not a one-size-fits-all checklist — it’s a living framework that must evolve with the technology, the risks and the people who use it,” says John McLain, EY Americas Assurance Technology Risk AI Leader. “Organizations that strike the right balance between structure and flexibility will be best positioned to innovate responsibly and sustainably.”

Regulators have been clear that crimes committed with the use of AI are still crimes. While blaming AI is not a valid defense, strong AI governance can help demonstrate a good-faith compliance effort.

Seven dos and don’ts of AI governance


1. Do maintain and update a comprehensive AI inventory of how each technology is being used within your organization to help identify potential risks associated with each technology. This inventory should align with the organization’s responsible AI principles.


2. Don’t get sidetracked by buzzwords or nuances before tackling the basics. Organizations should establish their AI governance process proactively rather than retroactively.


3. Do tailor your governance framework to your risk assessment and the specific AI technologies in use. AI governance strategies need not be perfect upon inception — they should grow and evolve.


4. Don’t delay the conversation. While it can be overwhelming to develop standards amid rapid change and competing priorities, AI governance is needed now. Organizations that have not begun considering AI governance strategies often find the need catching up with them quickly.


5. Do communicate clearly with employees. Provide them with the necessary resources and training so they understand and adhere to governance policies.


6. Don’t adopt overly strict governance. It may stifle creativity and lead an organization to fall behind its competitors. Corporate employees increasingly expect AI support and tools in their roles. Organizations that do not adopt logical, common-sense AI governance, guidance and tools may find employees bypassing controls to develop their own methods.


7. Do continuously monitor AI systems to confirm they are functioning correctly and adhering to compliance standards. Regularly update governance frameworks to reflect changes in technology, regulations and organizational priorities. Take corrective action when necessary and be sure to include decommissioning and incident monitoring in your program.
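Continuous monitoring (point 7) often includes checking for model drift, the issue noted earlier that executives frequently consult technical teams about. One widely used drift metric is the Population Stability Index (PSI), which compares the distribution of recent model scores against a baseline captured at deployment. The sketch below is illustrative only; the bin count, the 0.2 alert threshold and the synthetic scores are assumptions that real monitoring programs would tune per model.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and recent score sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against all-identical scores

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # scores at deployment
recent = [min(1.0, s + 0.25) for s in baseline]   # shifted scores in production
drift = psi(baseline, recent)
# A common rule of thumb: PSI above 0.2 warrants investigation
print(f"PSI = {drift:.3f}, investigate: {drift > 0.2}")
```

A scheduled check like this can feed the corrective-action and incident-monitoring steps: when the index crosses the agreed threshold, the finding is logged against the system's inventory record and escalated for review.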

Mark Beluk, a manager in the Forensic & Integrity Services practice, Ernst & Young LLP, also contributed to this piece.

Summary

As businesses adopt AI technologies, compliance and legal executives recognize the role of AI governance in promoting the development of reliable, safe and secure AI systems. By understanding the intricacies of AI governance, addressing common challenges and communicating the organization’s practices, executives can navigate the complexities of AI compliance and demonstrate a commitment to responsible AI practices.
