Is it possible for wealth managers to embrace AI while managing the risks?

To better understand what the C-suite thinks about their organization’s use of AI, its risks and impact on the workplace, EY and FT Longitude conducted a “European Financial Services AI Pulse Survey”. Completed between March and June 2025, the survey gathered responses from 410 leaders in banking, insurance and wealth and asset management, representing organizations with assets from $1 billion to over $1 trillion across 16 countries. Among the respondents were leaders from Luxembourg. Here are some of the most interesting takeaways from the research across Europe.

Leaders are heavily investing in AI, but still not convinced they are prepared for tech-related risks

Financial services firms are investing heavily in AI training (88% report moderate to extensive investment), model testing and auditing (84%), and data access control (83%).

Yet, over half (57%) of firms (and 60% of wealth and asset managers) are concerned that their organization’s approach to technology-related risk is insufficient for emerging AI technologies. 

Notably, 30% of organizations have no or only limited controls to ensure AI is free from bias. And while most firms have some risk mitigation plans in place, only a little over half (52%) rely on internal audits to provide trust and confidence in their AI systems, though approaches vary by sector and region. After internal audits, the next most common methods are consultation with industry experts and third-party AI model testing and validation. The challenge lies in the scarcity of AI-literate talent and the effort needed to train the workforce in a very new and dynamic field.

Organizations at more advanced stages of AI maturity (“Transforming” or “Leading”) feel better equipped to manage AI risks, but even among these, half believe their approach is still insufficient. Controls are reportedly strongest among banking and capital markets companies.

Only a third of wealth managers are comfortable with agentic AI, but are using it anyway

Over 40% of financial services firms (and just 33% of wealth managers) are extremely or moderately familiar with agentic AI, the current frontier of large language model applications. Despite this, 35% of financial services firms (and over 40% of wealth managers) say they are already using it, while another 25% plan to implement it within the next six months.

Considering the broader range of AI features, capabilities and potential use cases (e.g., multimodal AI, synthetic data generation, quantum machine learning and autonomous robots), fewer than 50% of wealth managers (and of financial services firms in general) are moderately or exceptionally familiar with them. Interestingly, autonomous robots are expected to see broader adoption over the next year; these can be viewed as an intermediary step before the jump to agentic AI, which will orchestrate complex workflows to deliver personal, efficient and scalable outcomes.

Fears of job losses and less intelligent work

Many leaders worry about AI’s potential to cause significant job losses, manipulate consumer perceptions, and generate false information (e.g., deepfakes). Concerns also extend to the negative impact on vulnerable groups in society.

Wealth and asset managers specifically are more concerned than banks or insurers that AI will result in significant job losses, that it will be used to manipulate how consumers think and feel, and that it will become uncontrollable without human oversight. They also enjoy the least consumer trust: only 32% of wealth managers agree that consumers trust companies in their sector to manage AI in a way that best aligns with their interests.

Many C-suite executives fear that excessive dependence on AI could diminish workforce cognitive abilities. There is also concern about accountability, transparency, ethics, data protection, cybersecurity and the potential for disinformation.

The industry’s strict regulatory requirements and the high sensitivity of client data add further pressure on wealth managers, who are concerned about reputational risks tied to opaque or biased AI-driven financial decisions.

How can AI be embraced while managing the risks?

Link with strategy

For wealth and asset managers, embracing AI effectively begins with a clear strategy that links AI initiatives directly to business objectives. Rather than experimenting with AI in isolation, firms should identify where automation, predictive analytics, and generative AI can create measurable value, from improving investment research to enhancing client personalization. Building a data foundation is critical here: high-quality, well-governed data ensures that AI models are accurate, auditable, and aligned with regulatory expectations. Leadership buy-in is equally important, as executives must set the tone for how AI is integrated into decision-making and client offerings.

Evolve frameworks with AI adoption

Risk management must evolve in parallel with AI adoption. Traditional risk frameworks may not fully capture the unique challenges of AI, such as model bias, explainability gaps and unintended consequences. Firms should establish cross-functional AI governance committees that include compliance, IT, investment professionals, and risk managers to evaluate potential impacts before deployment. Scenario testing, stress simulations, and ongoing monitoring can help identify vulnerabilities early, reducing the chance of reputational or regulatory fallout. Transparency with clients and stakeholders about how AI is used is also a growing expectation and can build trust.

Bring your people along with you

Finally, firms need to balance efficiency gains with the human expertise that underpins the industry. AI should augment rather than replace skilled professionals, allowing them to focus on higher-value activities like portfolio strategy and client relationships. 

The next few years will likely define which firms successfully harness AI to deliver better outcomes for clients. These findings highlight not only the growing investment in AI across the wealth and asset management industry, but also the persistent tension between innovation and risk, making it clear that firms must strike a careful balance as they move forward.

Summary 

The EY and FT Longitude “European Financial Services AI Pulse Survey” shows that financial services firms, including wealth and asset managers, are investing heavily in AI training, model testing and data access controls, yet over half remain concerned that their approach to technology-related risk is insufficient for emerging AI technologies. Wealth managers are adopting agentic AI faster than their familiarity with it would suggest, while worrying more than banks or insurers about job losses, consumer manipulation and loss of human oversight. To embrace AI while managing the risks, firms should link AI initiatives to business strategy, evolve risk and governance frameworks alongside adoption, and ensure that AI augments rather than replaces the human expertise that underpins the industry.
