EY has released findings from the second phase of its Responsible AI (RAI) Pulse survey, which indicate that companies implementing more advanced RAI measures are pulling ahead while others stall.
As broader adoption of AI technologies continues to accelerate, the organizations furthest along are experiencing the greatest benefit. Nearly four in five respondents said their company has seen improved innovation (81%) and gains in efficiency and productivity (79%), while about half report boosts in revenue growth (54%), cost savings (48%) and employee satisfaction (56%).
Responsible AI adoption begins with defining and communicating principles and then advances to implementation and governance. The transition from principles to practice happens through RAI measures that embed commitments into operations. On average, organizations have already implemented seven of the nine RAI measures, and among those yet to act, the vast majority plan to do so. Across all measures, fewer than 2% of respondents reported having no plans for implementation. This points to broad engagement with responsible AI and strong intent to continue progressing.
As organizations advance on their RAI journey, the survey suggests that greater adherence to RAI principles is correlated with positive business performance. For instance, respondents whose organizations have real-time monitoring in place are 34% more likely to see improvements in revenue growth and 65% more likely to see improved cost savings.
This survey is the second in a series, following initial findings in June, that evaluates how enterprises perceive and integrate responsible AI practices into their business models, decision-making processes and innovation strategies. The insights were gathered in August and September 2025 from 975 C-suite leaders across 11 roles and 21 countries.
Other key findings include:
Inadequate controls for AI risks lead to negative impacts
Almost all (99%) organizations surveyed reported financial losses from AI-related risks, with nearly two-thirds (64%) suffering losses of more than US$1 million. On average, the financial loss to companies that have experienced risks is conservatively estimated at US$4.4 million.
The most common AI risks are non-compliance with AI regulations (57%), negative impacts to sustainability goals (55%) and biased outputs (53%).
C-suite knowledge gaps in identifying appropriate controls
When asked to identify the appropriate controls against five AI-related risks, only 12% of C-suite respondents answered correctly on average. Chief risk officers, who are ultimately responsible for AI risks, performed slightly below average (11%). As agentic AI becomes more prevalent in the workplace and employees experiment with citizen development, the risks — and the need for appropriate controls — are only set to grow.
Citizen developers highlight governance and talent readiness gaps
Organizations face a growing challenge in managing “citizen developers” — employees independently developing or deploying AI agents. Two-thirds of surveyed companies allow this activity in some form, yet only 60% of them provide formal, organization-wide policies and frameworks to ensure these agents are deployed in line with responsible AI principles. Half also report that they do not have a high level of visibility into employee use of AI agents.
Companies that actively encourage citizen development were more likely to report a need for talent models to evolve in preparation for a hybrid human-AI workforce. These organizations cite the scarcity of future talent as their top concern with newer AI models (31%, compared with 21% of others) and were also more likely to have begun developing a strategy for managing a hybrid human-AI workforce (50%, compared with 26% of others), revealing a wide gap in readiness.