
Why responsible AI has become a growth strategy

Embedding risk, transparency and governance early lets teams move faster across volatile rules and markets.


In brief
  • Designing safeguards, lineage and policy-as-code reduces compliance friction while enabling rapid, auditable change.
  • Clear autonomy levels, human decision rights and rollback paths keep failures contained and protect trust at scale.
  • Federated oversight aligns central standards with local execution to balance speed, accountability and regional nuance.

EY Global Consulting Risk Markets Leader Scott McCowan and EY Americas Responsible AI Leader Sinclair Schuller recently sat down with MIT Professors Alex Pentland and Hossein Rahnama to address the critical importance of responsible AI and discuss the challenges created by the speed of AI development relative to the ability and willingness of society to assess and mitigate the associated risk.

As the world enters a period of significant disruption driven by the rapid advancement of artificial intelligence (AI), organizations face the imperative of operating in an environment of heightened, near-constant risk. This dual reality of accelerating opportunity and persistent exposure necessitates an evolved model for approaching risk in the age of AI: one that positions responsible AI (RAI) not merely as an act of compliance but as a core risk strategy for growth. Norms, transparency, assurance and governance are the principles we must rely on in a game where actors are unpredictable, threats are nonlinear, and the speed at which AI is advancing strikes some critics as “irresponsible.” Operationalizing responsibility will enable companies to operate safely and effectively, at scale and speed, in uncharted waters, turning potential risks into competitive advantages.

Disruptive forces and the imperative for speed

In this rapidly evolving landscape, political volatility and geopolitical fragmentation are reshaping the global AI environment. Companies must navigate a complex web of compliance requirements while fostering trust in their AI systems. The concept of RAI is not universally defined; it varies significantly across different jurisdictions and political contexts. Governments invoke RAI, yet operational definitions diverge based on local values, tools and enforcement mechanisms. What constitutes responsible data controls or transparency obligations can differ dramatically between regions. Emerging economies are not merely passive recipients of these frameworks; they are actively shaping their own data sovereignty rules and trust frameworks, influencing global supply chains in the process. This fragmentation creates a scenario where no single entity holds the reins, leading to unpredictable outcomes as AI systems become more generalized and autonomous across domains and increasingly interconnected across products and markets.


Navigating this geopolitical compliance labyrinth is a daunting task. Organizations may find themselves needing to modify AI outputs for different jurisdictions, implement varying logging practices or even prohibit certain use cases altogether. The resulting “compliance debt” can hinder innovation, as teams scramble to patch policies into their code rather than adopting sustainable, policy-as-code approaches. The goal should not be to create a one-size-fits-all model but rather to manage a controlled portfolio of AI variants, each with traceable lineage to facilitate compliance across diverse regulatory landscapes. Lineage should link data sources, model versions, prompts, tools and runtime policies to jurisdictions and use cases so that teams can deploy changes swiftly and transparently as obligations evolve.
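
To make this concrete, the sketch below shows one way a lineage record might tie a model variant to its approved jurisdictions and use cases, so that deployment checks become a simple lookup. The schema, field names and jurisdiction codes are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Illustrative lineage record: the schema, field names and jurisdiction codes
# are hypothetical, not a prescribed standard.
@dataclass
class LineageRecord:
    model_version: str
    data_sources: list[str]
    prompt_version: str
    runtime_policy: str                 # reference to a policy-as-code bundle
    approved_jurisdictions: set[str] = field(default_factory=set)
    use_cases: set[str] = field(default_factory=set)

def can_deploy(record: LineageRecord, jurisdiction: str, use_case: str) -> bool:
    """Gate deployment on traceable lineage rather than ad hoc checks."""
    return (jurisdiction in record.approved_jurisdictions
            and use_case in record.use_cases)

# One variant in a controlled portfolio of models.
variant = LineageRecord(
    model_version="credit-scoring-2.3.1",
    data_sources=["applications-2024Q4", "bureau-feed-v7"],
    prompt_version="prompts-v12",
    runtime_policy="policy-bundle-eu-v4",
    approved_jurisdictions={"EU", "UK"},
    use_cases={"credit_prescreen"},
)

print(can_deploy(variant, "EU", "credit_prescreen"))  # True
print(can_deploy(variant, "US", "credit_prescreen"))  # False: no lineage approval
```

In practice, records like this would live in a model registry and be evaluated automatically in the deployment pipeline, so a change in obligations becomes a data update rather than a code rewrite.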

While navigating the regulatory differences and nuances is complex, Alex Pentland, Professor at MIT Media Lab, offers some practical thoughts: “Above all, AI must be transparent. We need audit trails; we need to know what AI is doing. Also, there must be accountability. When we know AI has caused harm, and we can prove it, there has to be consequences and liability for damages. These are areas where regulations can help.”

Corporate irresponsibility and market pressures

Amid these challenges, the competitive landscape of AI often prioritizes speed over safety. Growth metrics, such as daily active users and engagement time, can inadvertently incentivize companies to prioritize rapid feature deployment at the expense of RAI practices, which underscores Pentland’s point about accountability. When organizations rush to ship features without adequate controls, RAI risks being recast as a “churn risk” rather than recognized as a risk mitigator, and the friction that protects the enterprise is misread as a blocker to growth.

To counteract this trend, companies must reframe RAI as a strategy for growth and improved brand equity. By minimizing incidents and regulatory exposure, organizations can enhance their market position. Just as reliability engineering focuses on preventing failures, RAI should be viewed as a crucial component of maintaining user trust and brand integrity. In an era where public incident response can make or break a brand, organizations must prioritize transparency and accountability, showcasing their commitment to RAI through capability statements and post-incident learnings. This can include publishing learnings from major events, setting measurable commitments, and demonstrating progress through control effectiveness testing and remediation closure. Over time, this discipline reduces rework, accelerates delivery, and signals maturity to customers, partners and regulators.

Consumers demand frictionless experiences

Consumer expectations further complicate the landscape, playing a significant role in shaping AI governance. Users demand both safety and frictionless experiences, often perceiving guardrails as obstacles unless they provide clear value. To address this, organizations should design user experiences that prioritize safe defaults while requiring explicit opt-in for riskier features. Messaging is crucial; constraints should be framed as reliability features rather than restrictions, empowering users with controls that enhance their experience. The benefits of guardrails should be described in language that users understand: “These controls prevent erroneous actions.” “These filters protect personal information.” “These approvals ensure correctness before changes are made.” Clear communication makes guardrails feel like quality features, not walls.

However, the rise of “irresponsible agents” complicates this landscape. Users may attempt to bypass safety measures, creating scenarios where AI systems operate outside their intended boundaries. Organizations must anticipate adversarial behavior and design systems with preventive measures, such as containment strategies. Containment includes implementing least privilege access: granting users and systems the minimum permissions necessary to perform their functions. If preventive measures fail, systems require RAI-anchored detection agents that identify rogue agents deviating from expected behavior. This approach helps limit potential damage from unauthorized actions. By fostering community norms and providing clear guidelines, companies can mitigate misuse and promote RAI practices. Responsible AI is not only a system property; it is also a social compact with users, shaped through norms, education and transparent expectations.
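
A minimal sketch of least privilege in this spirit appears below: each agent role is granted only the permissions it needs, and any out-of-scope request is denied and logged for review. The roles, actions and alerting behavior are hypothetical illustrations.

```python
# Hypothetical least-privilege gate for agent tool calls: each role is granted
# only the permissions it needs, and out-of-scope requests are denied and logged.
ROLE_PERMISSIONS = {
    "support_agent": {"read_ticket", "draft_reply"},
    "billing_agent": {"read_invoice", "issue_refund_under_100"},
}

def authorize(role: str, action: str, audit_log: list) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({"role": role, "action": action, "allowed": allowed})
    if not allowed:
        # Detection hook: the rogue request is blocked and surfaced for review.
        print(f"ALERT: {role} attempted unauthorized action '{action}'")
    return allowed

log: list = []
authorize("support_agent", "draft_reply", log)              # permitted
authorize("support_agent", "issue_refund_under_100", log)   # denied and alerted
```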

Technical irresponsibility: complex systems interacting

The interconnected nature of AI systems brings unique risks, especially at the points where different components meet. Problems often arise from these interactions, such as when a language model’s output causes issues in another system, leading to unexpected results. As AI models interact with tools, APIs and databases, even well-functioning components can produce harmful outcomes when combined.

To manage these risks, organizations should prioritize safety at integration points. This includes using sandbox environments and circuit breakers to stop unusual behaviors, and verifying that data exchanged between components is well structured and clearly typed. Time-outs and rate limits can prevent processes from spiraling out of control, and testing changes in controlled environments helps identify issues before full deployment.
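
As one illustration of these patterns, the sketch below implements a simple circuit breaker that quarantines a failing component boundary after repeated errors; the thresholds and cool-down period are assumptions chosen for readability.

```python
import time

# Minimal circuit-breaker sketch: after repeated failures at a component
# boundary, calls are short-circuited until a cool-down elapses.
# The thresholds are illustrative assumptions.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: component quarantined")
            self.opened_at = None   # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

breaker = CircuitBreaker()
def flaky_tool():
    raise TimeoutError("downstream API timed out")

for _ in range(3):
    try:
        breaker.call(flaky_tool)
    except TimeoutError:
        pass
# The next call fails fast with "circuit open" instead of hammering the
# failing component, containing the fault at the integration point.
```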

With multiple teams responsible for different aspects of AI systems, accountability can become diluted. MIT’s AI Risk Repository1 offers some practical approaches to address AI governance, including a risk classification framework and tools for professionals to evaluate risk and set policies. “Risk management leaders need a well-thought-out, holistic taxonomy to categorize and address the complexity of risks presented by AI,” according to Pentland. “A comprehensive approach must look at how, when and why risks occur.” Establishing end-to-end ownership for high-risk workflows is essential so that safety objectives are met and cross-functional review gates provide real oversight. This includes maintaining evidence of control effectiveness, setting clear release criteria and being accountable for issues that affect the entire system.

Are we risk traditionalists or risk strategists?

The fragmented regulatory environment and slow progress in AI governance have created significant market uncertainty. With fewer enforcement actions, boards of directors, CEOs and risk leaders must navigate major changes in corporate operations independently. Market unpredictability, along with geopolitical events such as conflicts and protectionism, is causing a shift from past norms and rates of change. The Global Economic Policy Uncertainty Index2 has shown significant spikes, indicating increased concern about policy and economic impacts. Meanwhile, the US stock market is nearing all-time highs. Industry reports show that spending on data center starts has risen recently, fueling a race for computing power and electricity.3 This mix of heightened uncertainty and growing investment in AI infrastructure requires organizations to rethink risk management as a strategic choice.


For risk traditionalists, it’s all hands on deck. This is a time to focus on reporting, compliance and instituting guardrails. However, what if there are alternative ways to address the escalating risks presented by an unpredictable world? As we enter this period of increased volatility, skills around foresight and agility will be most valued, as a perfect storm of volatility and the disruption of AI itself make risk management as much a game of future strategy as one of validating the present state.


In a recent EY study on global risk transformation, two archetypes of risk management professionals emerged: the risk traditionalist and the risk strategist. Risk strategists were rated more likely to reduce unexpected risk than their risk traditionalist counterparts. Risk strategists are not only more aware of the changing global context, but they also lead in the adoption of both foundational and more advanced risk management techniques. The time for change is now. According to the study, only 14% of firms have completely changed their risk management approach to adapt to the current, increasingly volatile, interconnected and accelerated risk climate. This new approach and mindset will be critical in building the foundations of RAI.

Strategists do not abandon compliance; they integrate it with delivery by confirming that all processes meet regulatory standards. They use red teaming to simulate attacks and identify security vulnerabilities, such as weaknesses in systems or processes that could be exploited by adversaries. Autonomy gating is applied to control the level of independence granted to AI systems so that they operate safely within set boundaries. Transparency is maintained to facilitate clear communication and understanding of AI operations and decisions. This integration allows speed and safety to reinforce one another so that organizations can innovate quickly while maintaining robust security and compliance.

What is responsible AI?

The concept of RAI emerges as a critical framework for organizations navigating the complexities of technological advancement and risk management. Since the early days of GPT models, it has become clear that bias, poor data quality, ethical risks and model hallucinations are major obstacles in building and operating large language models responsibly. These challenges are not purely model-specific; they intersect with data governance, product design, user experience, incentives and regulatory obligations.

More than a simple compliance checklist, RAI needs to be a strategic approach and guiding principle that align with an organization’s overarching objectives. In contrast, traditional AI risk governance focuses on adhering to existing regulations and managing risks associated with AI deployment. Governance plays a pivotal role in the responsible development and deployment of AI technologies, establishing frameworks that guide organizations in adhering to ethical standards and regulatory requirements, and it must evolve beyond mere compliance to become a competitive advantage.

Implementing RAI requires a nuanced understanding of the spectrum of risks associated with AI, as it can drive significant efficiencies and innovation while also posing existential threats, such as workforce displacement and ethical dilemmas. In this context, the concept of “humans in the loop” is crucial, emphasizing the importance of human oversight in AI decision-making processes to navigate these challenges responsibly. Decision rights should be explicitly defined — when humans must approve, when they can override and when they must be informed — and escalation paths and thresholds for human intervention documented. The aim is not to slow systems unnecessarily but to confirm that consequential actions are controllable, auditable and reversible.
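
A minimal sketch of such decision rights follows, routing each action to autonomous execution, human notification or human approval based on the action type and a risk score. The action names and thresholds are illustrative assumptions.

```python
# Sketch of explicit decision rights; action names and thresholds are
# illustrative assumptions.
def route_decision(action: str, risk_score: float) -> str:
    """Return who decides: the AI alone, the AI with notification, or a human."""
    requires_approval = {"wire_transfer", "account_closure"}  # humans must approve
    if action in requires_approval or risk_score >= 0.8:
        return "human_approves"          # consequential: a human decides
    if risk_score >= 0.4:
        return "ai_acts_human_informed"  # reversible: humans can override
    return "ai_acts"                     # low risk: autonomous, but still logged

print(route_decision("wire_transfer", 0.2))  # human_approves
print(route_decision("faq_answer", 0.1))     # ai_acts
```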

A new way forward with responsibility by design

The responsibility-by-design approach centers on integrating risk considerations into the initial stages of design so that potential issues are addressed proactively rather than reactively. By prioritizing the minimization of risk from the outset, rather than focusing solely on mitigation after issues arise, organizations can operate more efficiently and effectively at scale.

Responsible-by-design principles embed essential guardrails directly into the AI stack and operational practices. These guardrails are designed to be preventive, identifying and addressing risks before they manifest; detective, monitoring systems to catch emerging issues early; and corrective, enabling swift responses to any problems that do occur. This comprehensive integration establishes AI systems that are robust, secure and aligned with ethical standards from the ground up, facilitating innovation without compromising safety or compliance.

A proactive RAI strategy must incorporate responsible-by-design principles throughout the development lifecycle. This includes embedding guardrails directly into the AI stack. Implement identity and access controls scoped to specific tasks to prevent misuse; least privilege should be the default. Organizations should apply data minimization practices at the point of data ingress to reduce privacy risks and contamination. Policy as code should be employed at inference so that ethical and legal considerations are integrated into the AI’s operational framework, constraining tool calls, filtering protected information, blocking disallowed actions and requiring human authorization for specific steps. Version policies should be tested, and policy changes audited.
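
The sketch below illustrates what a simple inference-time policy gate of this kind might look like; the blocked tools, escalation rules and redaction pattern are hypothetical examples, not a reference implementation.

```python
import re

# Illustrative inference-time policy gate: the blocked tools, escalation rules
# and redaction pattern are hypothetical, not a reference implementation.
POLICY = {
    "blocked_tools": {"delete_records"},
    "needs_human": {"send_external_email"},
    "pii_pattern": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., a US SSN shape
}

def enforce(tool: str, payload: str) -> str:
    if tool in POLICY["blocked_tools"]:
        return "BLOCK"       # disallowed action, stopped before execution
    if POLICY["pii_pattern"].search(payload):
        return "FILTER"      # protected information must be redacted first
    if tool in POLICY["needs_human"]:
        return "ESCALATE"    # specific steps require human authorization
    return "ALLOW"

print(enforce("delete_records", ""))                        # BLOCK
print(enforce("summarize", "SSN 123-45-6789 on file"))      # FILTER
print(enforce("send_external_email", "quarterly update"))   # ESCALATE
```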

Post-output validation is another critical component of responsible design. By validating AI outputs before acting, organizations can mitigate harmful consequences and confirm that decisions made by AI systems align with ethical standards. Where confidence is low or risk is high, outputs should be routed to human review. Versioning all components (data, models, prompts and tools) and linking them through lineage allows organizations to track changes and understand their impacts. This transparency is vital for auditing and compliance, enabling organizations to answer critical questions about what changed and why. Organizations should link changes to tickets and reasons and maintain rollback paths to the last known good version when anomalies spike.
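
A brief sketch of this routing and rollback logic appears below, with confidence and risk thresholds and version labels chosen purely for illustration.

```python
# Sketch of post-output validation plus a rollback pointer; the thresholds
# and version labels are illustrative assumptions.
REGISTRY = {"current": "model-v8", "last_known_good": "model-v7"}

def validate_output(output: str, confidence: float, risk: float) -> tuple[str, str]:
    """Route low-confidence or high-risk outputs to human review before acting."""
    if confidence < 0.7 or risk > 0.5:
        return ("human_review", output)
    return ("act", output)

def on_anomaly_spike() -> str:
    """When anomalies spike, revert to the last version known to behave."""
    REGISTRY["current"] = REGISTRY["last_known_good"]
    return REGISTRY["current"]

print(validate_output("approve claim", confidence=0.55, risk=0.2))  # human_review
print(on_anomaly_spike())                                           # model-v7
```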

Move from compliance to resilience

Organizations must design their systems with the possibility of failure in mind, not just aim for success. This means anticipating potential issues and having strategies ready to manage them effectively. For an AI-powered customer service chatbot, it’s crucial that when the bot encounters a complex query it can’t handle, it seamlessly transfers the conversation to a human agent. This approach, known as graceful degradation, provides users with continued support even when the AI struggles. By measuring handoff latency, deflection rate and customer satisfaction, organizations can gather insights to refine their models and improve service flows.
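
A minimal illustration of graceful degradation follows; the confidence threshold and metric names are assumptions made for the sake of the example.

```python
import time

# Graceful-degradation sketch for a support chatbot; the confidence threshold
# and metric names are assumptions made for illustration.
def handle_query(query: str, model_confidence: float, metrics: dict) -> str:
    start = time.monotonic()
    if model_confidence < 0.6:
        # Hand off to a human agent instead of guessing.
        metrics["handoffs"] = metrics.get("handoffs", 0) + 1
        metrics["handoff_latency_s"] = time.monotonic() - start
        return "Connecting you with a human agent who can help."
    metrics["deflections"] = metrics.get("deflections", 0) + 1
    return f"Automated answer to: {query}"

metrics: dict = {}
print(handle_query("Cancel my enterprise contract early", 0.35, metrics))
print(metrics)  # handoff counts and latency feed service-flow refinement
```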

Another important strategy in AI applications is the use of feature flags. Feature flags allow companies to test new algorithms by selectively enabling them for a small percentage of users. For instance, a company developing an AI-driven recommendation system for an e-commerce platform can use feature flags to gradually introduce a new recommendation engine. This approach enables the company to monitor performance, gather user feedback and make data-driven decisions before fully deploying the system to all users. This minimizes the risk of negatively impacting sales if the new system underperforms.

In addition to feature flags, maintaining kill switches is crucial. Kill switches provide the ability to instantly disable features if anomalies or unexpected behaviors occur. Any negative impact on user experience or business operations can then be swiftly mitigated while maintaining system stability and customer trust.
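
The sketch below combines both mechanisms: a stable hash assigns each user to a rollout bucket, and a kill switch can instantly disable the feature for everyone without a redeploy. The flag name and rollout share are illustrative.

```python
import hashlib

# Percentage rollout via stable hashing, plus a kill switch that disables the
# feature for everyone at once. The flag name and rollout share are illustrative.
FLAGS = {"new_recommender": {"rollout_pct": 5, "killed": False}}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if cfg is None or cfg["killed"]:
        return False   # kill switch: instantly off for all users
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # same user always lands in the same bucket
    return bucket < cfg["rollout_pct"]

print(is_enabled("new_recommender", "user-42"))  # True for roughly 5% of users
FLAGS["new_recommender"]["killed"] = True        # anomaly detected in production
print(is_enabled("new_recommender", "user-42"))  # False: off for everyone
```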

Implementing advanced strategies

Shadow deployments are particularly useful in AI model development. For instance, a financial institution implementing a new AI model for fraud detection can run the new model in shadow mode alongside the existing one. This allows the organization to compare the performance of both models in real time without affecting actual transactions. If the new model identifies fraudulent activities more accurately, it can be deployed confidently; if not, the existing model continues to operate without disruption. Comparative evaluation with precision, recall, false positive costs and operational impact reduces the likelihood of negative surprises.
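
A simplified sketch of shadow-mode scoring follows; the stand-in models and transaction fields are hypothetical, and real comparisons would feed offline precision and recall analysis.

```python
# Shadow-mode sketch: the candidate fraud model scores the same transactions as
# the incumbent, but only the incumbent's decision takes effect. The stand-in
# models and transaction fields are hypothetical.
def score_transaction(txn: dict, live_model, shadow_model, comparison_log: list) -> bool:
    live_decision = bool(live_model(txn))      # acted on
    shadow_decision = bool(shadow_model(txn))  # recorded only, never acted on
    comparison_log.append(
        {"txn": txn["id"], "live": live_decision, "shadow": shadow_decision}
    )
    return live_decision

incumbent = lambda t: t["amount"] > 10_000                  # crude stand-ins
candidate = lambda t: t["amount"] > 8_000 or t["foreign"]

log: list = []
score_transaction({"id": 1, "amount": 9_500, "foreign": True}, incumbent, candidate, log)
print(log)  # disagreements feed offline precision/recall and cost analysis
```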

To enhance safety further, companies can implement anomaly detection mechanisms in their AI systems. In a healthcare setting, an AI system that analyzes patient data for early signs of disease can be equipped with anomaly detection. If the system suddenly flags an unusually high number of healthy patients as at risk, the anomaly detection feature can alert medical staff to investigate the issue before any erroneous conclusions are drawn.
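
One lightweight way to implement such a check is a rate-based z-score alert, sketched below with an illustrative window and threshold.

```python
import statistics

# Rate-based anomaly check: alert when today's flag rate deviates far from the
# recent mean. The window and z-score threshold are illustrative assumptions.
def flag_rate_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9   # guard against zero variance
    return abs((today - mean) / stdev) > z_threshold

recent = [0.021, 0.019, 0.020, 0.022, 0.018]  # daily share of patients flagged
print(flag_rate_anomalous(recent, 0.045))     # True: investigate before acting
```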

When problems are detected, organizations can employ automated containment strategies. In the case of an AI-driven content moderation tool for social media, if the system starts incorrectly flagging a significant number of legitimate posts as inappropriate, the company can quickly implement rate limiting to reduce the number of posts processed until the issue is resolved. Alternatively, it might disable the moderation tool temporarily while investigating the root cause of the errors. In more severe cases, the company can revert to a previous version of the moderation model that was functioning correctly so that users can continue to share content without unnecessary restrictions.
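
The containment ladder below sketches how responses might escalate with the measured error rate; the thresholds and action names are assumptions, not recommended values.

```python
# Containment-ladder sketch: responses escalate with the measured error rate.
# The thresholds and action names are assumptions, not recommended values.
def containment_action(false_flag_rate: float) -> str:
    if false_flag_rate > 0.30:
        return "rollback_to_last_known_good"     # severe: restore prior model
    if false_flag_rate > 0.15:
        return "disable_moderation_temporarily"  # investigate the root cause
    if false_flag_rate > 0.05:
        return "rate_limit_processing"           # slow down, limit blast radius
    return "normal_operation"

for rate in (0.02, 0.08, 0.20, 0.40):
    print(rate, "->", containment_action(rate))
```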

Cross-disciplinary risk management

To effectively manage AI risks, organizations should treat AI as they would any other major enterprise risk, employing a “three lines of defense” model. In this framework, product teams are responsible for implementing controls, risk and compliance teams design these controls, and internal audit functions validate their effectiveness. Drawing inspiration from cybersecurity practices and the Sarbanes-Oxley Act (SOX), organizations can establish control catalogs, assign control owners and conduct periodic effectiveness testing to verify that RAI practices are upheld.

Preparation for potential crises is equally important. Organizations should develop incident playbooks, conduct tabletop exercises and provide spokesperson training to promote readiness for adverse events. Responsible AI is as much about crisis management as it is about model tuning; having pre-authorized remediation paths can significantly enhance an organization’s ability to respond effectively when issues arise. Organizations should consider publishing learnings and commitments after significant events, as well as refining controls based on discoveries, not just intentions.

Federated governance model

Implementing a federated governance model can help organizations balance consistency with the need for speed and domain-specific nuances. In this model, a central AI governance function sets standards, tooling and assurance processes, while individual business units appoint “AI stewards” responsible for local implementation. This structure promotes accountability and provides RAI practices that are tailored to the unique needs of different departments. The center equips, the business executes, and both are accountable for outcomes.

Additionally, organizations should build a regulatory radar to map obligations to features and data flows. By monitoring upcoming regulatory changes and encoding them into policy as code, companies can deploy compliance shifts with the same agility as software updates. Maintaining a change log tied to releases allows teams to demonstrate when and how obligations were adopted. This proactive approach enables organizations to stay ahead of regulatory requirements and maintain trust with stakeholders. Using consistent reporting templates provides leadership the opportunity to compare control effectiveness, incident trends, remediation outcomes and autonomy decisions across portfolios.
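
As a simple illustration, a regulatory radar might maintain a mapping from obligations to the features and data flows they touch, so that a rule change points directly at what must be redeployed. The obligation names and asset identifiers below are hypothetical.

```python
# Regulatory-radar sketch: map each obligation to the features and data flows
# it touches, so a rule change points directly at what must be redeployed.
# The obligation names and asset identifiers are hypothetical.
OBLIGATION_MAP = {
    "jurisdiction_a_logging_rule": {
        "features": ["chat_assist"], "data_flows": ["inference_logs"],
    },
    "jurisdiction_b_optout_rule": {
        "features": ["recommender"], "data_flows": ["clickstream"],
    },
}

def impacted_assets(changed_obligation: str) -> dict:
    return OBLIGATION_MAP.get(changed_obligation, {"features": [], "data_flows": []})

print(impacted_assets("jurisdiction_a_logging_rule"))
```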

Risk-managed autonomy

A key aspect of a proactive RAI strategy is defining autonomy levels based on risk rather than hype. Organizations should categorize AI systems into the following four levels:

  1. Advisory, where AI provides suggestions
  2. Assistive, involving human oversight
  3. Constrained execution, with preapproved actions
  4. Delegated autonomy, where AI operates independently within safeguards

For each level, it’s essential to establish controls such as approvals, logging, monitoring, rollback mechanisms and human override capabilities. Clear decision rights reduce confusion during incidents, and tracking approvals, overrides, deviations and rollbacks helps assess readiness to advance to higher autonomy levels.

Progression from lower to higher autonomy should be gated by evidence. Organizations must establish quantitative safety thresholds, conduct red team benchmarks and execute successful pilots in controlled environments before granting greater autonomy to AI systems. Autonomy should be promoted only when thresholds are met and demoted when risk or performance drifts. This evidence-based approach allows organizations to confidently scale their AI capabilities while maintaining safety and accountability. In customer-facing contexts, satisfaction and resolution time should be monitored alongside incident trends, but one should not be optimized at the expense of the other.
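
The sketch below puts these ideas together: autonomy levels as an ordered scale, with promotion gated by evidence thresholds and demotion triggered by incident drift. All criteria values are illustrative assumptions.

```python
from enum import IntEnum

# Evidence-gated autonomy sketch: promotion requires meeting a quantitative
# bar, and incident drift triggers demotion. All criteria values are
# illustrative assumptions.
class Autonomy(IntEnum):
    ADVISORY = 1               # AI suggests, humans decide
    ASSISTIVE = 2              # AI acts with human oversight
    CONSTRAINED_EXECUTION = 3  # preapproved actions only
    DELEGATED = 4              # independent within safeguards

PROMOTION_BAR = {
    Autonomy.ASSISTIVE: {"min_pilot_accuracy": 0.95, "max_incidents_90d": 2},
    Autonomy.CONSTRAINED_EXECUTION: {"min_pilot_accuracy": 0.98, "max_incidents_90d": 1},
    Autonomy.DELEGATED: {"min_pilot_accuracy": 0.99, "max_incidents_90d": 0},
}

def next_level(current: Autonomy, pilot_accuracy: float, incidents_90d: int) -> Autonomy:
    """Promote only when evidence meets the bar; demote on incident drift."""
    target = Autonomy(min(current + 1, Autonomy.DELEGATED))
    bar = PROMOTION_BAR[target]
    if pilot_accuracy >= bar["min_pilot_accuracy"] and incidents_90d <= bar["max_incidents_90d"]:
        return target
    if incidents_90d > PROMOTION_BAR.get(current, {}).get("max_incidents_90d", 99):
        return Autonomy(max(current - 1, Autonomy.ADVISORY))
    return current

print(next_level(Autonomy.ADVISORY, pilot_accuracy=0.97, incidents_90d=1))  # ASSISTIVE
```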

Conclusion

As organizations integrate AI into their operations, a proactive approach to RAI is essential for fostering innovation and mitigating risks. In a landscape marked by rapid technological change and evolving consumer expectations, companies must prioritize resilience, accountability and ethics. Responsible AI should be viewed not just as compliance but also as a core strategy for growth and brand protection. By embedding responsible-by-design principles, balancing governance with agility, defining autonomy levels based on risk and enhancing cross-disciplinary risk management, companies can transform potential risks into competitive advantages.

Operationalizing responsibility allows organizations to scale safely and swiftly, with norms, transparency and governance serving as guides in unpredictable environments. This approach encourages speed and safety to reinforce each other. By integrating risk discussions into the design phase, organizations can treat RAI as a strategic model to derisk the enterprise, enabling growth without compromising core principles. Commitment to responsible AI will be crucial for building trust and achieving sustainable success in a complex world.


Summary

Rapid AI adoption is reshaping risk, regulation and competition. Organizations must manage fragmented rules, rising consumer expectations and complex system interactions without slowing innovation. Treating AI risk as a strategic discipline enables speed and safety to reinforce one another through proactive design, clear accountability and resilient operations. By embedding controls across data, models and workflows; defining evidence-based autonomy; and preparing for failure as well as success, companies can reduce incidents, strengthen trust and operate confidently amid uncertainty — turning volatility into a durable advantage.
