1. Transformative gains take transformative thinking
Leading companies are moving beyond employee-level use cases and focusing on process reinvention and new business models. Most companies have focused their early AI investments on increasing employee efficiency and productivity. Those efforts are paying off. According to the EY December 2025 AI Pulse Survey, 71% of senior leaders at organizations currently investing $10 million or more in AI say their organization has seen “significant” AI-driven productivity gains over the past year. What’s more, executives are plowing those gains right back into more AI tools. Nearly half (47%) of senior leaders at organizations where AI has improved productivity are reinvesting in their AI capabilities.
With productivity becoming table stakes, what can companies do as the race for value intensifies? We believe that as productivity improvements reach their limit, differentiation will come from fundamentally redesigning processes for AI, not just tweaking them. Our most sophisticated clients go deeper in key areas and rebuild processes from the ground up. In fact, EY leaders’ experience suggests that companies that focus on AI-first process reinvention can improve efficiency by more than 90%. Agentic AI — which autonomously manages complex, multi-step tasks without human intervention — is set to supercharge this opportunity, giving companies new ways to manage end-to-end processes.
For example, take customer churn prevention. Traditionally, the process of analyzing and addressing customer loss involves multiple teams, including data engineers, data scientists, marketing analysts and executives moving sequentially through various steps: data preparation, churn modeling, trend analysis, strategy design and campaign execution. Each handoff adds delays and limits agility. In contrast, with agentic AI, the whole process could be performed by autonomous agents working simultaneously in real time. The result is a continuous, self-improving cycle that eliminates handoffs, accelerates decision-making and enables hyper-personalized interventions at scale.1
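The contrast between sequential handoffs and concurrent agents can be sketched in code. The sketch below is purely illustrative and not a description of any actual implementation; the agent names, the in-memory shared state and the coordination logic are all assumptions made for the example.

```python
import asyncio

async def churn_prevention_cycle() -> dict:
    # Illustrative only: each "agent" is a coroutine standing in for an
    # autonomous AI agent. They run concurrently over shared state rather
    # than handing work off team-to-team in sequence.
    state: dict = {}
    scored = asyncio.Event()  # signals that churn scores are ready

    async def prepare_data():
        state["clean_records"] = 1000  # stand-in for data cleansing

    async def model_churn():
        state["at_risk"] = ["cust_42", "cust_7"]  # stand-in for scoring
        scored.set()

    async def analyze_trends():
        state["trend"] = "churn rising in segment A"

    async def run_campaign():
        await scored.wait()  # coordinates with the modeling agent
        state["campaign_sent"] = len(state["at_risk"]) > 0

    await asyncio.gather(
        prepare_data(), model_churn(), analyze_trends(), run_campaign()
    )
    return state

state = asyncio.run(churn_prevention_cycle())
print(state["campaign_sent"])
```

In a real deployment, each coroutine would be a model- or tool-backed agent and the loop would run continuously, but the structural point is the same: agents coordinate through shared signals instead of waiting on team-by-team handoffs.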
With some predicting that AI agents could match human performance in certain areas by mid-2026,2 the opportunities for growth will only increase. Some companies are already using agentic AI for lead generation and outreach; sales planning; and customer engagement, retention and growth.3 Going even further, 74% of CFOs in an August 2025 Salesforce survey said they believe agentic AI will transform their business model.4 The opportunities and uncertainties are both substantial, and staying ahead will take leaders who can navigate each.
What boards should do
- Rethink risk appetite to account for the risk of not moving boldly enough, giving full weight to potential upside when weighing it against the downside.
- Work with management to identify clear signals, such as increased investment in competitors that specialize in AI, that should prompt a strategy review. Ask management to monitor these signals and keep the board informed when they arise.
- Ask management to examine the P&L to identify focused areas where AI can yield dramatic improvements.
- Evaluate each member of management on KPIs tied to their role in driving value from AI within the agreed-upon risk appetite (such as launching an AI-driven product or service, using AI to enhance workflow and risk intelligence, or developing critical AI talent).
2. Reduce entry-level work, not entry-level talent
Leaders must adapt talent planning for an AI-augmented workforce. As automation takes over routine, rules-based tasks, entry-level roles—once the gateway for new talent—are shrinking fast. AI can already handle 50%–60% of typical entry-level tasks such as drafting reports, synthesizing research and cleansing data,5 and companies are responding by cutting or not filling these positions.6 Recent studies from both Harvard and Stanford confirm that entry-level employment has dropped meaningfully due to AI since 2022.7
This shift brings risks. Fewer junior staff means less opportunity to build future managers and supervisors with advanced leadership and critical thinking skills. That’s a risk on investors’ radars. In fact, just over half of the 19 investor stewardship leaders we interviewed raised concerns that the loss of experiential learning can lead to a brain drain that decreases expertise over the longer term. The loss of on-the-job learning can also erode the innovation that comes from the “bottom up.” Fewer entry-level jobs also mean less upward mobility, which could contribute to economic and social challenges and prompt regulatory intervention.8
Considerations like these make it essential to balance efficiency and cost savings with long-term talent needs and the social license to operate. Companies still need entry-level workers—just not for the same tasks. The next generation, especially those who have been exploring AI since it first became widely available, brings fresh perspectives and a deep understanding of digital tools that can help organizations make the most of AI itself.
Board members can set the expectation that the firm’s human capital strategy must include rethinking what “entry-level” roles mean so the company can take full advantage of what entry-level talent has to offer. Refocusing younger talent on optimizing processes for AI and overseeing “digital workers” would take advantage of their digital savvy, while simulation-based learning, rotational assignments and apprenticeships with AI-augmented workers would develop their ability to bring context, judgment and creativity to the job. As AI levels the playing field on basic task execution, it is this human judgment and creativity that will drive competitive advantage.
What boards should do
- Critically examine how management is balancing cost savings with managing the risks of long-term talent erosion, potential customer and investor backlash, and regulatory action.
- Tie executives’ compensation to how well they blend AI with human skills, using metrics such as employee engagement scores in hybrid teams that include both humans and AI agents.
- Set KPIs for the business around junior worker development, retention and advancement.
3. You can’t automate accountability
Human accountability and judgment remain central to protecting reputation and performance. AI promises speed, scale and smarter decisions, but it isn’t perfect. Take AI’s well-known potential for bias and hallucinations. A 2025 EY analysis of Fortune 100 10-K risk disclosures revealed that about 1 in 5 companies (22%) now flag AI hallucinations, inaccuracies, misleading outputs, misinformation, disinformation, or bias as material risks.9 Less visible but still troubling is the phenomenon researchers have dubbed “workslop”: employees using AI to produce work that’s highly polished but inaccurate or lacking substance.10 The damage goes beyond reduced productivity and quality; workslop can also increase risk exposure when employees fail to apply critical thought and challenge AI’s outputs.
The stakes are high. It’s no secret that organizations have lost both money and reputation due to careless AI use or unreliable AI behavior.11 In some cases, such as AI mistakes leading to unwarranted arrests or criminal convictions, these lapses have led directly to human harm.12
The common thread running through these risks is that humans remain accountable for the work they ask AI to do. Overseeing ethical AI is only the beginning—directors must encourage management to consider quality and liability. Individually, employees must carefully evaluate AI outputs and use AI to improve their work, not as a substitute. And organizationally, management must install safeguards to manage the risk of harm, use practices like robust red teaming and third-party assessments to test AI for unintended behaviors,13 and define how accountability will be assigned when AI-based outcomes go wrong.14
This is not just a question of staying out of trouble. Companies that get this right will be poised to turn accountability into trust and trust into a differentiator. Trust is currency today, and companies that bank on it will gain a competitive edge.
What boards should do
- Oversee management-level governance processes that safeguard the quality of AI-enabled work. For example, how is management assigning explicit accountability to specific individuals or departments for AI work quality?
- Ensure that the company uses robust testing to catch unintended behaviors or consequences of AI before they scale.
- Weigh the risk that AI can produce inappropriate or incorrect outputs when setting the organization’s risk tolerance.
- Discuss with management how they have assessed legal risk associated with AI-enabled outputs.