- Only a third of companies have responsible controls for current AI models despite nearly three-quarters having AI integrated into initiatives across the organization
- C-suite executives are on average half as worried as consumers about adherence to responsible AI principles
- CEOs show greater concern about AI risks than other C-suite leaders do
EY released findings from its Responsible AI Pulse survey, revealing a substantial gap between how confident C-suite executives feel about their artificial intelligence (AI) systems and the current level of governance controls in place.
Seventy-two percent of executives surveyed say their organizations have “integrated and scaled AI” in most or all initiatives, and nearly all (99%) are at least in the process of doing so. Yet only a third of companies have the protocols in place to adhere to every facet of the EY Responsible AI framework.
The research is the first of a series aiming to evaluate how enterprises perceive and integrate responsible AI practices into their business models, decision-making processes and innovation strategies. The insights were gathered in March and April 2025 from 975 C-suite leaders across 21 countries.
The findings also revealed a substantial difference in perceptions between business leaders and the general population. Nearly two in three C-suite executives (63%) think they are well aligned with consumers on their perceptions and use of AI. However, this contrasts starkly with the findings of the recent EY AI Sentiment Index Study, which found consumers were on average more than twice as worried as the executives surveyed in the Pulse research across a range of AI-related concerns.
This includes concerns around the degree to which organizations fail to hold themselves accountable for negative AI use (58% consumers vs. 23% executives) as well as organizations not complying with AI policies and regulations (52% consumers vs. 23% executives). While there is general agreement in some areas, such as the value in using AI to automate routine tasks (63% consumers vs. 57% executives) and to simplify tasks that need technical or academic training (67% consumers vs. 59% executives), the clear differences in many key areas leave a critical void for leaders to address.
While most firms have responsible AI principles in place for individual facets, organizations on average have strong controls in only three of the nine facets: accountability, compliance and security.
Other key findings include:
CEOs bridge the divide between C-suite and consumer attitudes
Among the C-suite, CEOs demonstrate broader skepticism and caution, and are consistently the least likely to claim their organizations have strong controls in place around AI. Fewer than two in 10 CEOs (18%) state their organizations have strong controls for AI fairness and bias, compared with the broader C-suite average of 33%. In addition, just 14% of CEOs believe their AI systems operate in adherence to regulations, compared with 29% of their C-suite peers.
Critically, CEOs align more closely with consumer perception and sentiment around AI than any of their C-suite counterparts. Average concern regarding responsible AI principles among CEOs sits at 38% — below the average consumer level of 53%, but well above other boardroom roles, which range from 23% to 28%.
Governance is lagging behind innovation
While executives are fully invested in the technology’s potential, about half admit that developing governance frameworks for current AI technologies is challenging and that their frameworks are not ready for the next generation of AI. There is at least recognition of this gap: 50% of organizations are making significant to extensive investments in governance frameworks to address the risks and challenges these emerging AI technologies present.
Future AI adoption plans outpace risk awareness
Nearly all C-suite executives expect to use emerging AI technologies within the next year. While three-quarters (76%) of surveyed companies are currently using or planning to use agentic AI in the next year, only 56% are familiar with the associated risks. Even larger gaps exist in other emerging areas, including synthetic data generation, where 88% are using the technology but only 55% are aware of its risks.