The effectiveness of agents is shaped by five core factors that define how they operate, interact and evolve. Understanding these dimensions is essential for designing, deploying and governing AI responsibly, as each factor carries specific implications for safety, trust and performance.
- Autonomy captures the degree to which an agent can act and make decisions independently. Highly autonomous agents can sense their environment, evaluate options and act with minimal human intervention, driving efficiency and responsiveness at scale. However, this independence heightens the need for robust oversight, ethical safeguards and accountability frameworks. At the other end of the spectrum, low-autonomy agents can execute basic, repetitive tasks but depend heavily on humans for guidance in novel or complex situations.
◦ High autonomy: Minimal human involvement, enabling scalability but requiring strong governance.
◦ Low autonomy: Limited independence, relying on frequent human direction.
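A minimal sketch of how this spectrum might look in code. The `Task`, `run_agent` and `approve` names are illustrative assumptions, not part of any real framework: a low-autonomy agent escalates novel situations to a human gate, while a high-autonomy agent acts on its own.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    novel: bool  # True when the situation falls outside the agent's known patterns

def run_agent(task: Task, autonomy: str, approve: Callable[[Task], bool]) -> str:
    """Act on a task, escalating to a human gate when autonomy is limited."""
    if autonomy == "low" and task.novel:
        # Low-autonomy agents depend on human direction in novel situations.
        return "executed" if approve(task) else "escalated"
    # High-autonomy agents sense, evaluate and act with minimal intervention;
    # governance controls (logging, audits) are assumed to live elsewhere.
    return "executed"

print(run_agent(Task("reroute shipment", novel=True), "low", lambda t: False))   # escalated
print(run_agent(Task("reroute shipment", novel=True), "high", lambda t: False))  # executed
```

The human-approval callback is the governance hook: the higher the autonomy, the fewer decisions pass through it, which is exactly why strong oversight frameworks matter at the high end of the spectrum.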
- Adaptability reflects an agent’s ability to adjust its decision-making and behavior in response to changing circumstances. Adaptive agents can learn from historical data, user interactions and environmental cues, continuously refining their strategies to remain effective in dynamic contexts. This flexibility enables resilience but also introduces challenges in monitoring for unintended bias or drift. In contrast, rigid agents operate within a narrow, predefined rule set, making them predictable but potentially brittle when faced with unexpected scenarios.
◦ Adaptive: Continuously learns and adapts to new inputs and conditions.
◦ Rigid: Operates with fixed logic and limited capacity to evolve.
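The contrast can be sketched with two hypothetical agents (the class names and the running-average update rule are assumptions for illustration): one refines a decision threshold from feedback, the other applies a fixed rule forever.

```python
class AdaptiveAgent:
    """Refines its decision threshold from feedback on past interactions."""
    def __init__(self, threshold: float = 0.5, learning_rate: float = 0.2):
        self.threshold = threshold
        self.lr = learning_rate

    def decide(self, score: float) -> bool:
        return score >= self.threshold

    def feedback(self, score: float, was_correct: bool) -> None:
        # On a wrong call, nudge the threshold toward the offending score.
        # In production, this drift is exactly what bias monitoring must watch.
        if not was_correct:
            self.threshold += self.lr * (score - self.threshold)

class RigidAgent:
    """Fixed rule set: predictable, but brittle in unexpected scenarios."""
    THRESHOLD = 0.5
    def decide(self, score: float) -> bool:
        return score >= self.THRESHOLD

agent = AdaptiveAgent()
agent.feedback(score=0.7, was_correct=False)  # threshold moves from 0.5 toward 0.7
```

The adaptive agent's strength and its monitoring burden are the same mechanism: its behavior tomorrow is not guaranteed to match its behavior today, while the rigid agent's never changes.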
- Collaboration capability defines an agent’s ability to operate effectively within a multi-agent or human-machine ecosystem. Collaborative agents are designed to exchange information, negotiate and coordinate their actions, making them essential for complex workflows such as supply chain optimization, real-time trading systems or customer service orchestration. Independent agents, by contrast, function as standalone systems, suitable for contexts where interaction is unnecessary or adds complexity.
◦ Collaborative: Works seamlessly with humans or other agents to achieve shared goals.
◦ Independent: Operates in isolation with minimal or no need for coordination.
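One way to picture collaborative agents is as peers exchanging messages through inboxes. This is a toy sketch under assumed names (`CollaborativeAgent`, `delegate`, `work`), not a real orchestration API, but it shows the coordination pattern that workflows like supply chain optimization build on.

```python
from queue import Queue

class CollaborativeAgent:
    """Coordinates with peers by exchanging messages through an inbox."""
    def __init__(self, name: str):
        self.name = name
        self.inbox: Queue = Queue()

    def delegate(self, peer: "CollaborativeAgent", task: str) -> None:
        peer.inbox.put((self.name, task))  # hand the task to a peer agent

    def work(self) -> str:
        sender, task = self.inbox.get()  # pick up the next delegated task
        return f"{self.name} completed '{task}' for {sender}"

planner = CollaborativeAgent("planner")
executor = CollaborativeAgent("executor")
planner.delegate(executor, "rebalance inventory")
print(executor.work())  # executor completed 'rebalance inventory' for planner
```

An independent agent would simply omit the inbox and delegation machinery, which is the right call when coordination adds complexity without value.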
- Temporal stability refers to both the expected lifespan of an agent and its consistency over time. Persistent agents are designed to operate indefinitely, maintaining a stable identity and performance profile. This consistency is critical for trust in mission-critical applications such as health care, finance and defense. Transient agents, on the other hand, are temporary by design, created to fulfill a single objective or function for a limited duration. This makes them ideal for short-lived scenarios like troubleshooting events, ad-hoc data analysis or time-bound customer interactions.
◦ Persistent: Long-term operation with stable behavior, enabling reliability and predictability.
◦ Transient: Purpose-built for temporary, time-bound or goal-specific tasks.
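The lifespan distinction can be made concrete with two hypothetical classes (names and behavior are illustrative assumptions): a persistent agent accumulates state across many requests under a stable identity, while a transient agent is retired after its single objective.

```python
class PersistentAgent:
    """Long-lived: stable identity and accumulated state across many requests."""
    def __init__(self, identity: str):
        self.identity = identity
        self.handled = 0

    def serve(self, request: str) -> str:
        self.handled += 1  # state persists between calls
        return f"{self.identity} handled request #{self.handled}: {request}"

class TransientAgent:
    """Temporary by design: fulfills one objective, then is retired."""
    def __init__(self, goal: str):
        self.goal = goal
        self.retired = False

    def run(self) -> str:
        if self.retired:
            raise RuntimeError("transient agent already fulfilled its objective")
        self.retired = True
        return f"completed one-off goal: {self.goal}"
```

The persistent agent's counter is what makes its behavior auditable over time; the transient agent's guard is what keeps a short-lived troubleshooting or analysis task from quietly outliving its purpose.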
- Reentrancy measures an agent’s ability to handle interruptions and resume progress seamlessly. Reentrant agents are designed for dynamic, high-stakes environments where interruptions, such as external alerts, user overrides or system events, are common. These agents can pause, retain context and resume operations without loss of functionality, which is vital in fields like logistics, emergency response and algorithmic trading. Non-reentrant agents, however, require uninterrupted execution; disruptions force a restart or failure, making them simpler but less flexible.
◦ Reentrant: Supports interruptions while maintaining task continuity.
◦ Non-reentrant: Cannot resume tasks if interrupted, requiring a full restart.
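Pause-and-resume behavior maps naturally onto a Python generator, which checkpoints its local state at each `yield`. This is a minimal sketch of the idea, not a production pattern; the `reentrant_task` name and step list are assumptions.

```python
def reentrant_task(steps):
    """A task that checkpoints after each step so an external event can pause
    it and the agent can later resume with its context intact."""
    completed = []
    for step in steps:
        completed.append(step)
        yield list(completed)  # checkpoint: caller may pause here and resume later

task = reentrant_task(["pick", "pack", "ship"])
print(next(task))  # ['pick'] — first step done
# ... interruption: an alert or user override arrives, the agent pauses ...
print(next(task))  # ['pick', 'pack'] — resumed without losing progress
```

A non-reentrant version would be a plain function that runs all steps in one call: any disruption mid-run discards `completed`, forcing a full restart, which is the simplicity-for-flexibility trade-off described above.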
Together, these characteristics help create robust, reliable agents that can thrive in complex, dynamic environments. Yet the decentralized nature of multi-agent systems complicates traditional accountability frameworks, as responsibility is often distributed among multiple agents, and conflicting objectives among agents can lead to ethical dilemmas that require careful navigation. Recognizing and addressing these challenges is crucial for ensuring that multi-agent systems operate safely and responsibly in real-world environments.