
Luxembourg Market Pulse

Navigating the new regulatory landscape: how the EU AI Act is transforming asset management


Artificial Intelligence (AI) has rapidly become a strategic pillar for the asset management industry. From portfolio optimization and risk modelling to client servicing, regulatory reporting and operational automation, AI is reshaping how fund managers operate and compete. But as these technologies become embedded in critical functions, the risks they introduce demand a robust regulatory response. To address these risks, among other objectives, the European Union has introduced the Artificial Intelligence Act (AI Act).

1. AI Act in a nutshell

The AI Act forms a key component of the EU’s strategy to define, oversee, and regulate artificial intelligence systems and their societal impact. It introduces a risk‑based framework, with obligations varying according to the type of AI model and the role of the entity operating it.

The Act applies to entities that provide, deploy, import, or distribute AI systems.

As fund managers increasingly embed AI across their critical operating processes, they may fall within the scope of the AI Act. The regulation is coming into effect in phased stages.

Common uses of AI by fund managers

  • Portfolio optimization and asset allocation
  • Credit, liquidity and market risk modelling
  • AML and fraud detection systems
  • Suitability and appropriateness assessment
  • Automated client onboarding (KYC)
  • AI‑assisted investor communication
  • Real‑time market surveillance
  • Chatbots
  • Credit scoring

2. A risk‑based framework

The AI Act classifies systems into four categories, as follows:

  1. Unacceptable risk – prohibited by the AI Act: Some AI systems are considered inherently harmful and are banned, in some cases with narrow exceptions for specific uses. These include manipulative or psychologically exploitative AI, social scoring systems, untargeted scraping of biometric data, manipulation to promote risky financial products, and predictive policing that penalizes individuals based on unevidenced threat assessments
  2. High risk – subject to additional scrutiny and requirements under the AI Act: Most AI applications in portfolio management, risk, and compliance fall into this category. These include AI systems used for evaluating employee performance, AI-powered surveillance detecting potential fraud attempts or fraud patterns, portfolio optimization engines, market, liquidity and credit risk models, AML/CFT monitoring systems, and automated decision-making tools used for investor assessments
  3. Limited risk: These include chatbots, AI-assisted client communication tools, and automated investor interfaces
  4. Minimal or no risk: These include internal generative AI used to summarize public information, AI-enabled video games, and spam filters

The framework applies to all AI systems, whether developed internally or sourced from third-party providers. The AI Act requires continuous assessment of existing systems to ensure compliance.
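The four-tier classification above can be pictured as a simple lookup from use case to risk category. The sketch below is illustrative only: the use-case names and their assignments are hypothetical examples, not classifications from the AI Act itself, and in practice each system must be assessed individually.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment and EU registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Indicative mapping of common fund-manager use cases to the four
# categories above. Names and assignments are illustrative only.
USE_CASE_CATEGORIES = {
    "portfolio_optimization": RiskCategory.HIGH,
    "credit_risk_model": RiskCategory.HIGH,
    "aml_monitoring": RiskCategory.HIGH,
    "client_chatbot": RiskCategory.LIMITED,
    "spam_filter": RiskCategory.MINIMAL,
    "social_scoring": RiskCategory.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskCategory:
    """Return the indicative risk category for a known use case."""
    return USE_CASE_CATEGORIES[use_case]
```

Even a toy mapping like this makes the core compliance task concrete: every use case must land in exactly one category, and the obligations follow from that placement.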

3. What is the impact of the AI Act?

The AI Act establishes various obligations for providers and users depending on the type of AI they offer or use and its level of risk, such as:

  • Risk assessment: All AI-powered systems and software that firms use, or intend to offer to their clients, fall within the scope of the AI Act. Firms must therefore conduct, and be able to evidence, a detailed risk assessment of each use case and determine which defined risk category it falls under
  • AI documentation: The AI Act requires firms to maintain and update detailed documentation on their AI systems, particularly high-risk AI. High-risk systems must be registered in an EU database and undergo conformity assessments throughout their lifecycle to ensure safety and prevent regulatory violations
  • Company-wide training: Infractions of the AI Act can result in fines of up to EUR 35 million or 7% of annual turnover, whichever is higher. Organizations should therefore ensure that all employees understand the new requirements, how they apply to their roles, and how AI can and cannot be used
  • Establish AI governance: The AI Act requires firms to establish a proper governance structure for the handling of AI. Employees should be given clear roles and responsibilities for key AI practices so that the firm can comply with all requirements and produce the necessary disclosures (e.g., transparency reports)

4. AI: a catalyst for strategic opportunities

AI is becoming a powerful catalyst for strategic opportunities in the asset‑management industry, enabling fund managers to unlock new sources of value across their businesses. By strengthening risk management through advanced scenario modelling, real‑time market surveillance, and automated anomaly detection, AI can equip fund managers with sharper insights and faster decision‑making capabilities. At the same time, generative AI may enhance client engagement with personalized reporting, more intuitive communication, and seamless service delivery. Significant operational efficiencies also emerge as AI accelerates KYC/AML checks, trade surveillance, document processing, and regulatory reporting. Finally, innovation is further amplified through regulatory sandboxes, which provide a safe environment to explore transformative solutions (from tokenized fund infrastructures to automated onboarding, dynamic portfolio construction engines, and advanced ESG analytics) helping fund managers experiment alongside supervisors and bring new ideas to market more quickly.

Key figures

  • 46% of investments in innovative technologies were mainly made at group level
  • 64% of respondents allow access for their employees to public GenAI tools (such as ChatGPT, Gemini, Claude)
  • 63% of entities using AI have a dedicated data science team
  • 92% of the reported AI use cases are only for internal use (i.e., not client-facing)
  • 61% of all use cases leverage GenAI technology, followed by Natural Language Processing (NLP) (30%), and machine learning (ML) (28%)
  • Only 40% of the entities that provide access to some or all of their employees have either implemented a specific GenAI policy or modified their existing Internet policy to explicitly address the use of GenAI tools
  • The top five use case categories are:
    • Search/summarize information: 43%
    • Process automation: 30%
    • Chatbot and virtual assistant: 27%
    • Text context generation: 27%
    • Translation: 19%

To better understand what the C-suite thinks about their organization’s use of AI, its risks and impact on the workplace, EY and FT Longitude conducted a “European Financial Services AI Pulse Survey”. Completed between March and June 2025, the survey gathered responses from 410 leaders in banking, insurance and wealth and asset management, representing organizations with assets from USD 1 billion to over USD 1 trillion across 16 countries. Among the respondents were leaders from Luxembourg. Here are some of the most interesting takeaways from the research across Europe.

  • Leaders are heavily investing in AI, but still not convinced they are prepared for tech-related risks
  • Only a third of wealth managers are comfortable with agentic AI, yet many are using it anyway
  • Many leaders worry about AI’s potential to cause significant job losses, manipulate consumer perceptions, and generate false information (e.g., deepfakes).

To fully capture these benefits, fund managers need a structured approach to adopting AI across their organizations. Real strategic advantage comes when AI is aligned with business priorities, supported by strong governance, and embedded into day‑to‑day ways of working. The following considerations outline how fund managers can effectively embrace AI and position their firms for long‑term, sustainable value creation:

  • Link with strategy: For wealth and asset managers, embracing AI effectively begins with a clear strategy that links AI initiatives directly to business objectives. Rather than experimenting with AI in isolation, firms should identify where automation, predictive analytics, and generative AI can create measurable value, from improving investment research to enhancing client personalization. Building a data foundation is critical here: high-quality, well-governed data ensures that AI models are accurate, auditable, and aligned with regulatory expectations. Leadership buy-in is equally important, as executives must set the tone for how AI is integrated into decision-making and client offerings.
  • Evolve frameworks with AI adoption: Risk management must evolve in parallel with AI adoption. Traditional risk frameworks may not fully capture the unique challenges of AI, such as model bias, explainability gaps and unintended consequences. Firms should establish cross-functional AI governance committees that include compliance, IT, investment professionals, and risk managers to evaluate potential impacts before deployment. Scenario testing, stress simulations, and ongoing monitoring can help identify vulnerabilities early, reducing the chance of reputational or regulatory fallout. Transparency with clients and stakeholders about how AI is used is also a growing expectation and can build trust.
  • Bring your people along with you: Fund managers need to balance efficiency gains with the human expertise that underpins the industry. AI should augment rather than replace skilled professionals, allowing them to focus on higher-value activities like portfolio strategy and client relationships.

To complement these strategic enablers, fund managers should also consider a number of regulatory‑focused actions to ensure responsible and compliant AI adoption under the AI Act. Beyond aligning AI with business priorities, strong data foundations, and effective change management, fund managers will need to put in place additional measures that reinforce governance, transparency, and oversight across all AI‑driven activities. Key recommendations include:

  • Identify and classify all AI systems: Map all existing and planned AI use cases across the organization and classify them according to the AI Act’s risk categories
  • Strengthen AI governance and risk management: Establish clear governance structures, enhance model‑risk controls, and adapt existing risk frameworks to address AI‑specific issues such as bias, explainability, and unintended outcomes
  • Implement transparency measures: Ensure clients, regulators, and internal stakeholders understand how AI systems operate, their intended purpose, and any limitations or risks
  • Build and maintain an internal AI register: Create a central register capturing all AI systems, their risk classification, documentation, performance metrics, and compliance status
  • Prepare for supervisory scrutiny: Develop processes, documentation, and evidence to demonstrate responsible AI development and deployment during regulatory reviews
  • Ensure third‑party vendor compliance: Assess and monitor external providers to confirm that outsourced or embedded AI tools meet the AI Act’s requirements, with appropriate contractual safeguards
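The internal AI register recommended above can be prototyped as a small structured dataset. This is a minimal sketch under assumptions: the field names (`system_name`, `risk_category`, `conformity_assessed`, and so on) are hypothetical choices, not a schema prescribed by the AI Act, and a production register would also track documentation references, performance metrics, and review dates.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIRegisterEntry:
    """One row of a firm's internal AI register (field names illustrative)."""
    system_name: str
    risk_category: str                 # e.g. "high", "limited", "minimal"
    owner: str                         # accountable business function
    vendor: Optional[str] = None       # set for third-party or embedded tools
    conformity_assessed: bool = False  # relevant for high-risk systems
    documentation_refs: list = field(default_factory=list)

def pending_conformity(register):
    """List high-risk systems that still lack a conformity assessment."""
    return [e.system_name for e in register
            if e.risk_category == "high" and not e.conformity_assessed]

# Hypothetical register with one gap: the AML monitor, a high-risk
# third-party tool, has not yet been through a conformity assessment.
register = [
    AIRegisterEntry("PortfolioOptimizer", "high", "Front Office",
                    conformity_assessed=True),
    AIRegisterEntry("AMLMonitor", "high", "Compliance", vendor="VendorX"),
    AIRegisterEntry("ClientChatbot", "limited", "Client Services"),
]
```

A query like `pending_conformity(register)` then surfaces exactly the systems that need attention before supervisory review, which is the practical point of maintaining the register at all.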

5. Next steps 

The forthcoming Digital Omnibus initiative will play a material role in shaping how, and how quickly, the AI Act is implemented in practice. By seeking to rationalize and better sequence overlapping digital requirements across EU regulation, the Digital Omnibus is expected to bring greater clarity on timelines, transitional arrangements and proportionality, particularly where the AI Act intersects with DORA, GDPR and existing financial services obligations. For asset managers, this may result in more coherent implementation phases and potential adjustments to certain compliance deadlines. While firms should not interpret the Digital Omnibus as a delay mechanism, it is likely to smooth the path toward compliance by reducing fragmentation and supervisory uncertainty.

How EY can help

With EY.ai - a unifying platform, you get a partner that understands your business and industry, brings together a holistic ecosystem, and can seamlessly connect AI capabilities to help you drive AI-enabled business transformations:

  • Drawing on experience across strategy, transformation, risk, assurance and tax with diverse perspectives and insights, we deliver solutions that put humans at the center of transformation
  • From technology and business to academic pioneers, we connect into a diverse ecosystem helping ensure we can offer comprehensive AI insights and solutions
  • Through EY Fabric, one of the largest technology platforms globally, we help ensure seamless integration of leading-edge AI capabilities into comprehensive solutions

Our main solutions:

  • EY.ai Maturity Model: Strategically plan to close GenAI gaps, develop an efficient roadmap and responsibly harness new capabilities
  • EY.ai Confidence Index: Stress test your organization's full AI model lifecycle across our principles of Responsible AI
  • EY.ai Value Accelerator: Identify AI value creation opportunities in your enterprise that will drive measurable growth 

 


Summary

As AI adoption accelerates, so does regulatory attention. The EU AI Act represents a fundamental shift in how AI systems will be governed, assessed, and supervised across financial services.
