
Part 2

Six steps to enhance governance and increase agentic AI’s value

Learn how these six steps can help your organization enhance governance, manage agentic AI risks and improve oversight.


In brief

  • Adopting agentic AI responsibly requires organizations to refresh and strengthen their governance frameworks.
  • Existing governance systems may not be equipped to keep pace with agentic AI’s innovative capabilities.
  • Realigning governance to match AI systems’ agenticness and risk profiles can help support responsible adoption.

As the pace of AI innovation accelerates, organizations are increasingly adopting and integrating agentic AI technologies. That said, agentic AI’s unique capabilities and rapid evolution mean existing governance practices may need a strategic refresh. Organizations will require continuous governance updates and tailored oversight to promote responsible AI practices and continue deriving value in the agentic age.

What is agentic AI and why does it require enhanced controls and stronger governance?

Agentic AI refers to advanced AI systems that can make decisions independently and take action in complex environments. Rather than relying on constant human input, agentic AI systems learn and adapt in real time. We classify AI systems as agentic primarily based on two key criteria: 

  • Goal complexity: agentic AI manages and prioritizes multiple, intricate objectives simultaneously
  • Independent execution: agentic AI carries out tasks and makes decisions without human intervention

From there, an AI system’s degree of agenticness is rated based on four secondary characteristics (a scoring sketch follows this list): 

  • Generality
  • Adaptability
  • Environmental complexity
  • Impact
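
As a concrete illustration, the two primary criteria and four secondary characteristics could be captured in a simple scoring structure. The sketch below is a minimal Python example; the 1-to-5 scale, the threshold value and the averaging of secondary scores are assumptions for illustration, not a formal methodology.

```python
from dataclasses import dataclass

@dataclass
class AgenticnessProfile:
    """Scores against the criteria above; the 1 (low) to 5 (high) scale is illustrative."""
    goal_complexity: int           # primary: multiple, intricate objectives
    independent_execution: int     # primary: acts without human intervention
    generality: int                # secondary
    adaptability: int              # secondary
    environmental_complexity: int  # secondary
    impact: int                    # secondary

def is_agentic(profile: AgenticnessProfile, threshold: int = 3) -> bool:
    """A system is classed as agentic when both primary criteria clear the bar."""
    return (profile.goal_complexity >= threshold
            and profile.independent_execution >= threshold)

def agenticness_degree(profile: AgenticnessProfile) -> float:
    """Degree of agenticness: the mean of the four secondary characteristics."""
    secondary = [profile.generality, profile.adaptability,
                 profile.environmental_complexity, profile.impact]
    return sum(secondary) / len(secondary)
```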

While all AI systems share common risks, the advanced capabilities and self-learning nature of agentic AI introduce a new array of challenges that demand robust governance — specifically, automated, continuous, precise and transactional oversight — to effectively manage these systems. 

Agentic AI systems lie on a spectrum of complexity and risk; consequently, governance responses cannot be “one size fits all.” They need to be commensurate with each system’s risk level, from low-touch oversight to close monitoring. 

By understanding the levels of risk and potential impact of their agentic AI systems, companies can implement more efficient and streamlined control activities, reserving costly monitoring applications for high-risk scenarios. This targeted approach not only optimizes resource allocation and manages costs but also enhances return on investment (ROI) and return on risk (RoR), focusing governance efforts where they matter most. 

Align governance resources and investments to match risk profile

We recommend embracing six leading practices to begin evolving governance frameworks in line with emerging systems like agentic AI and the new risks they represent.


1: Assess your organizational responsible AI (RAI) framework, including AI definition, policy and principles

Develop a detailed RAI framework that includes clear AI definitions to guide governance; a policy outlining ethical guidelines for AI development and deployment; and principles that promote accountability and support sustainable business practices. 

Make it agentic-AI specific: Your organization’s AI definition should explicitly address agentic AI systems. Many organizations provide a range of development environments, including no- and low-code options, to accommodate different user needs. That’s why it’s so important for the AI definition within governance frameworks to clearly specify what is — and what is not — considered agentic AI. This helps maintain clarity and consistency across different deployment scenarios. 

Your RAI policy should also include clear guidelines around adopting, developing and using agentic AI in the organization. Define and acknowledge the difference between citizen development of no- and low-code AI agents and professional development of a full agentic AI system, using business-context examples.
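
To make the distinction concrete, the two development tracks could be expressed as structured policy data that downstream tooling can reference. The sketch below is illustrative; the track names and requirement lists are hypothetical placeholders, not a prescribed policy.

```python
from enum import Enum

class DevelopmentTrack(Enum):
    """Development tracks the RAI policy distinguishes between."""
    CITIZEN = "citizen"            # no-/low-code agents built on a governed platform
    PROFESSIONAL = "professional"  # full agentic AI systems built by engineering teams

# Hypothetical policy requirements per track; real entries come from your RAI policy.
RAI_POLICY_REQUIREMENTS = {
    DevelopmentTrack.CITIZEN: [
        "Build only on an approved, governed AI development platform",
        "Keep human review of agent output in place",
    ],
    DevelopmentTrack.PROFESSIONAL: [
        "Follow the organization's standard development, testing and validation procedures",
        "Complete the organizational AI risk assessment before deployment",
    ],
}
```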

2: Clarify accountability mechanisms and procedures 

Put well-defined accountability practices in place covering obligations and oversight for the actions of AI systems.

Make it agentic-AI specific: Given agentic AI’s capabilities for independent execution, it’s essential to set clear boundaries on who will be responsible and accountable for actions the systems take. Specify unique ownership requirements for agentic AI in your RAI policy and framework, embedding accountability for agentic AI-driven outcomes. For example:

Scenario 1: Document assistant agent (low to medium risk)

  • Reviews vendor contracts, evaluates compliance and performs dynamic rewrites and language recommendations.
  • Built by a citizen developer in a no- or low-code AI development platform (e.g., Copilot Studio, ToolJet or equivalent).
  • Low to medium independent execution, low goal complexity and low impact.
  • Accountability: the developer and user of the agent.
  • Scope of responsibility: the agent only processes appropriate data or documents, and there is always human review of the agent’s output.

Scenario 2: Reconciliation agentic AI system (high risk)

  • Multi-agent system that reconciles purchase orders to invoices received, approves payments and reroutes unmatched invoices.
  • Professionally developed in a standard technology stack.
  • High independent execution, high goal complexity and medium impact.
  • Accountability: the system owner and/or business process owner.
  • Scope of responsibility: agentic AI system development, testing, validation and observation follow the organization’s standard procedures and relevant RAI principles.
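
One lightweight way to embed such ownership is to record it in a machine-readable registry entry per system. The sketch below is an illustrative example: the field names are invented, with values drawn from Scenario 1 above.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityRecord:
    """Hypothetical ownership entry; field names are illustrative."""
    system_name: str
    accountable_party: str       # e.g., developer/user or system/business process owner
    scope_of_responsibility: str
    human_review_required: bool

# Scenario 1 from above, expressed as a record.
doc_assistant = AccountabilityRecord(
    system_name="Document assistant agent",
    accountable_party="Developer and user of the agent",
    scope_of_responsibility="Processes only appropriate data or documents",
    human_review_required=True,
)
```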

3: Develop an AI risk assessment tool

Create an AI risk assessment tool to support the appropriate management of AI systems in the organization. The tool should assess both the organizational risks and impacts associated with a given AI system, as well as any relevant regulatory risks — for example, whether the system would be classified under the EU AI Act as a minimal-, limited-, high- or prohibited-risk system, based on its intended use.

Make it agentic-AI specific: The AI risk assessment tool should be built to both identify and manage the unique risks associated with agentic AI systems. This may include assessing the technology’s advanced organizational impacts and any new risk categories relevant to agentic AI, such as whether its execution aligns with human and organizational goals. For both single- and multi-agent systems, the entire agentic AI system should be risk-assessed and will have to adhere to compliance obligations based on its use-case risk classification. For example:

Scenario 1: Document assistant agent

  • Accountability: developer self-declaration.
  • Assessment criteria: the agent does not perform prohibited or high-risk activities as defined by the organizational AI governance policy, which should align with applicable compliance obligations.

Scenario 2: Reconciliation agentic AI system

  • Accountability: the system owner and/or business process owner, through the organizational risk assessment tool.
  • Assessment criteria: determine the level of agenticness and impact of the AI system, which, combined with the other sections of the risk assessment tool, determines the applicable risks, including agentic AI-specific risk categories and risk treatment strategies.
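
Bringing these pieces together, a risk assessment tool’s triage step might look something like the sketch below, which combines an EU AI Act use-case tier with a degree-of-agenticness score (such as the one from the earlier scoring sketch). The thresholds and treatment strategies are illustrative assumptions.

```python
from enum import Enum

class UseCaseTier(Enum):
    """EU AI Act risk categories, per the intended use of the system."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    PROHIBITED = 4

def risk_treatment(tier: UseCaseTier, agenticness: float) -> str:
    """Illustrative triage combining regulatory tier with degree of agenticness;
    the numeric thresholds are assumptions for this example."""
    if tier is UseCaseTier.PROHIBITED:
        return "Do not deploy"
    if tier is UseCaseTier.HIGH or agenticness >= 4.0:
        return "Organizational risk assessment, owner sign-off and close monitoring"
    if agenticness >= 2.5:
        return "Standard risk assessment with periodic review"
    return "Developer self-declaration under platform-level governance"
```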

4: Carry out an AI inventory 

Establish a repository or inventory of all AI models and systems developed, along with any accompanying documentation required by organizational policies, contractual requirements, legislation and/or regulatory requirements, including, for example, technical model documentation, a summary of training data, and cybersecurity, accuracy and robustness measures.

Make it agentic-AI specific: Make sure your AI inventory includes documentation of all agentic AI systems, including clear categorization of which systems are agentic according to the six agentic AI capability criteria (the two primary criteria and four secondary characteristics described above). Also consider mandating documentation of considerations unique to agentic AI, such as documentation related to decision pathways, tool usage logic, agent-to-agent communication and external tool access protocols. For example:

Scenario 1: Document assistant agent

  • Accountability: the developer of the agent.
  • Inventory requirements: to be inventoried in a governed AI development platform; subject to the AI development platform-level governance and oversight requirements.

Scenario 2: Reconciliation agentic AI system

  • Accountability: the system owner and/or business process owner.
  • Inventory requirements: to be inventoried within the organizational AI system inventory with appropriate system/model information, including agentic capabilities; subject to the organizational governance and inventory oversight requirements.
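
As a sketch, an inventory record could carry the agentic-specific documentation fields alongside standard system metadata. All field names below are hypothetical and would be adapted to your organization’s inventory schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """Hypothetical inventory record; all field names are illustrative."""
    system_name: str
    is_agentic: bool
    capability_scores: dict[str, int]  # scores against the six capability criteria
    # Documentation unique to agentic AI systems:
    decision_pathways_doc: str | None = None   # link to decision pathway documentation
    tool_usage_logic_doc: str | None = None    # link to tool usage logic documentation
    agent_to_agent_protocols: list[str] = field(default_factory=list)
    external_tool_access: list[str] = field(default_factory=list)
```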

5: Implement a risk and control matrix with key performance indicators (KPIs) for tracking and monitoring

Establish a detailed risk and control matrix and KPIs to manage the risks associated with AI systems throughout the AI system lifecycle.

Make it agentic-AI specific: The risk and control matrix should include enhanced agentic AI preventative and detective controls covering AI system testing and validation, data filters and security controls, and a systematic observability program. This means including controls specifically designed to address the agentic AI system’s risks (e.g., adversarial robustness testing, reward hacking analyses and decision-making process tracing). For example:

Scenario 1: Document assistant agent

  • Accountability: the developer of the agent.
  • Control requirement: subject to the AI development platform-level onboarding, data guardrail and monitoring requirements as defined by the organization. Such requirements should consider the tools and data an individual agent has access to and whether access to the agent itself should be restricted (e.g., to legal and procurement teams only).

Scenario 2: Reconciliation agentic AI system

  • Accountability: the system owner and/or business process owner.
  • Control requirement: subject to alignment with the AI risk and control matrix and end-to-end observation based on predefined KPIs to confirm the agentic AI system operates as expected.
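
To illustrate the KPI-based detective control, the sketch below checks observed operating values against predefined thresholds. The KPI names and threshold values are invented for the reconciliation scenario and would be set by the business and risk teams in practice.

```python
from dataclasses import dataclass

@dataclass
class KPIThreshold:
    """Illustrative KPI entry from a risk and control matrix."""
    name: str
    max_value: float  # observed value must stay at or below this threshold

# Hypothetical KPIs for the reconciliation agentic AI system.
KPIS = [
    KPIThreshold("unmatched_invoice_rate", 0.05),
    KPIThreshold("human_escalation_latency_hours", 4.0),
]

def check_kpis(observed: dict[str, float]) -> list[str]:
    """Detective control: flag any KPI breach for review and escalation."""
    return [
        f"{kpi.name}: {observed[kpi.name]} exceeds {kpi.max_value}"
        for kpi in KPIS
        if kpi.name in observed and observed[kpi.name] > kpi.max_value
    ]
```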

6: Deploy a third-party risk management program 

Have a third-party AI risk management program in place, including but not limited to vendor assessments, contract clauses and open-source security reviews.

Make it agentic-AI specific: Accountability for agentic AI must be clearly built into vendor contracts, terms and conditions. This involves ensuring well-defined responsibilities for agentic AI monitoring between the parties. Put explicit monitoring procedures and agreements in place to ensure appropriate oversight of, and assignment of accountability for, all third-party agentic AI use. 

Open-source software procedures should also be updated with agentic AI considerations. For example, organizations should provide guidance around agent API calling, which introduces risks around data protection, access and reliability, and consent for use. Clear boundaries for agent operating domains must therefore be specified; a minimal sketch of such a boundary appears after the scenarios below. For example:

Scenario 1: Document assistant agent

  • Accountability: the developer of the agent.
  • Third-party risk management considerations: any third-party tools or data sources the agent uses should follow the relevant organizational policies.

Scenario 2: Reconciliation agentic AI system

  • Accountability: the system owner and/or business process owner.
  • Third-party risk management considerations: a full review based on AI supply chain roles and responsibilities.
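
As a minimal sketch of the operating-domain boundary mentioned above, an agent’s outbound API calls could be screened against an approved-domain allow-list before execution; the domain names below are placeholders.

```python
from urllib.parse import urlparse

# Hypothetical allow-list defining the agent's operating domain boundary.
APPROVED_API_DOMAINS = {"erp.internal.example.com", "vendor-api.example.com"}

def is_call_permitted(url: str) -> bool:
    """Preventative control: the agent may only call APIs on approved domains."""
    return urlparse(url).hostname in APPROVED_API_DOMAINS

# Example: a call outside the boundary is blocked.
assert not is_call_permitted("https://unknown-service.example.org/v1/data")
```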

Connect agentic-AI specific governance with existing systems

Taking a broad-based approach to governance that complements your existing organizational technology processes is key. While these six core steps establish the foundation for strong agentic AI governance, you’ll also need to consider and integrate areas like data protection, information security, risk management and more as part of the ecosystem. For example, AI agents must be built and operated in compliance with applicable data protection laws and organizational privacy impact assessment processes. 


Moreover, agentic AI systems that use foundation models — for example, through API calling — must have explicit permission in place prior to use. Agent-to-agent communication must also be authenticated and adhere to clear data access guidelines. 
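
As one illustration of authenticated agent-to-agent communication, the sketch below signs each message payload with an HMAC that the receiving agent verifies before acting on it. The hardcoded secret is a placeholder; a production design would provision keys through a managed identity and key service.

```python
import hashlib
import hmac

# Hypothetical shared secret; in production this would come from a managed
# identity and key service, not a hardcoded value.
SHARED_SECRET = b"replace-with-managed-secret"

def sign_message(payload: bytes) -> str:
    """Sender attaches an HMAC so the receiving agent can authenticate the message."""
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify_message(payload: bytes, signature: str) -> bool:
    """Receiver rejects any agent-to-agent message that fails authentication."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```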


On the data governance side, agentic AI systems will need to follow organizational data governance policies for managing the collection, storage, processing and disposal of data used by AI agents. 


The same can be said for information security measures: your organization will need to follow its information security processes to protect the agent ecosystem, including its orchestration layer, protocols, agent identity management and all new attack surfaces. Thinking about agentic AI implications in the context of your broader technology processes helps ensure the enhancements you make align with how the organization works and support overall effectiveness. 

Summary

Governance structures are more practical and stronger when they reflect the breadth and depth of potential risks. Building detailed governance strategies with a degree of agility in mind can help promote responsible AI adoption, unleashing agentic AI’s capabilities and potential to create the long-term value stakeholders are looking for.

What’s next?

In parts one and two of this series, we’ve looked at how agentic AI differs from other systems and why governance frameworks must adapt to responsibly unlock its business potential. Next up, we’ll dig into determining a system’s level of agenticness, the applicable risks and mitigation strategies.

Related content

Unlocking the potential of agentic AI: definitions, risks and guardrails

Explore agentic AI’s potential, how it differs from traditional systems, and why strong governance is key to managing its unique risks and capabilities.

The path forward: governing AI with insight and integrity

Learn how responsible AI governance helps Canadian boards build trust and gain a competitive edge. Discover key boardroom strategies today.

How responsible AI can unlock your competitive edge

Discover how closing the AI confidence gap can boost adoption and create competitive edge. Explore three key actions for responsible AI leadership.
