Chapter 1
Unlocking AI’s potential for risk measurement: GAN-based VaR & ES
The use of generative adversarial networks can lead to a more accurate estimation of tail risk.
EY notes the ample growth potential for the deployment of AI in financial risk management, a field where the complexity of models inevitably creates opportunities to unleash this technology’s computational power. Indeed, while its current use in the calculation of value at risk (VaR) and expected shortfall (ES) is limited, there are strong indications that banks could improve risk measurement by harnessing AI’s power to handle large data sets and identify complex patterns. This is especially true given the increasing importance of adequately modelling systemic dependencies between risk factors, an example being the Fundamental Review of the Trading Book (FRTB) text’s prescription of a stressed ES accounting for a “joint assessment across all relevant risk factors, which will capture stressed correlation measures”3.
An example of AI’s potential is found in the much-researched use of generative adversarial networks (GANs) to simulate financial time series (e.g., Wiese et al., 20204), which can then be converted to returns for estimating VaR and ES. GANs belong to the broader category of generative AI, a term that encompasses various techniques used to generate new content using algorithms. While reinforcement learning (RL) focuses on learning optimal decision-making policies in interactive environments with feedback, GANs specialize in generating realistic synthetic data. Unlike traditional VaR models that require simplifying assumptions, GANs enable the simulation of hypothetical, yet plausible, scenarios that are based on complex interdependencies learned from the training data. Indeed, research5 has demonstrated that a GAN-based VaR/ES model can provide “accurate tail risk estimates, and is able to capture certain stylized features observed in financial time series, such as heavy tails, and complex temporal and cross-asset dependence patterns” (Cont et al., 2023). This eliminates the need to either assume a distribution (e.g., as in Monte Carlo simulation) or to assume that future returns will be identical to those observed in the lookback window (e.g., as in historical simulation). While the synthetic data is statistically similar to the training data, it maintains an element of variability because the GAN generator starts from a random seed (noise). GANs may therefore provide better estimates of tail risk than traditional methods can achieve, especially when faced with data limitations.
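The estimation step described above can be sketched as follows. This is a minimal illustration, not a production model: the trained GAN generator is replaced by a placeholder function that maps latent noise to heavy-tailed synthetic returns, so that the example runs end to end; the empirical VaR/ES computation on the synthetic sample is the part that carries over to a real GAN-based model.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(noise):
    """Placeholder for a trained GAN generator: maps latent noise to
    one-day portfolio returns. A real model would be a neural network
    trained adversarially on historical return series; here a Student-t
    style transform stands in to produce heavy tails."""
    return 0.01 * noise / np.sqrt(rng.chisquare(df=4, size=noise.shape) / 4)

# Sample a large batch of synthetic returns from random seeds (noise)
noise = rng.standard_normal(100_000)
synthetic_returns = generator(noise)

# Empirical VaR and ES at the 97.5% level (losses expressed as positives)
alpha = 0.975
losses = -synthetic_returns
var = np.quantile(losses, alpha)      # value at risk: tail quantile
es = losses[losses >= var].mean()     # expected shortfall: mean beyond VaR

print(f"VaR 97.5%: {var:.4f}, ES 97.5%: {es:.4f}")
```

Because ES averages the losses beyond the VaR quantile, it always sits at or above VaR, which is one reason FRTB favours it as a tail-risk measure.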
Chapter 2
Unlocking AI’s potential for risk analysis: dynamic stress tests
AI-driven stress testing models can yield more dynamic, realistic simulations of stress scenarios.
Another application of AI in risk management is found in stress testing, a critical tool used by banks to evaluate their potential vulnerability to adverse events (often supplementing VaR and ES). Stress testing involves running simulations to evaluate how adverse scenarios would affect the bank’s balance sheet, capital adequacy, liquidity and overall financial health. Traditional methodologies typically involve a limited set of predetermined scenarios, relying heavily on human judgment both for scenario calibration and analysis of results.
Research7 has shown that AI can significantly transform stress testing by more effectively modelling the intercorrelation between PnL drivers, enhancing the dynamism and reliability of simulated scenarios. Indeed, stress tests are known to be constrained by computational limitations, resulting in the currently employed techniques often failing to adequately model non-linear relationships between risk factors. The stress models are often static, meaning, for example, that they inadequately capture the propagation of stress shocks between risk drivers, while also ignoring the effects of sequential managerial responses to a stress scenario’s unfolding.
AI techniques such as GANs promise a more expansive and plausible spectrum of scenarios, enabling the identification of complex dependencies that may otherwise be overlooked. Further, machine learning models can improve the accuracy with which key risk parameters (such as default probabilities) are estimated, and are also capable of modelling the path-dependent effects of actions put in place by other economic participants (e.g., regulators, industry competitors, etc.). AI can thus be leveraged to improve the quality of stress modelling, while also streamlining the often-laborious processes needed to recalibrate the scenario narratives.
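The default-probability estimation mentioned above can be illustrated with a deliberately simple sketch: a logistic model fitted by gradient descent on synthetic obligor data. The features (leverage, interest coverage) and coefficients are illustrative assumptions, and the logistic fit is a minimal stand-in for the richer machine-learning PD models banks would actually deploy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic obligor data: leverage and interest coverage drive default risk.
# Feature ranges and coefficients are illustrative, not calibrated values.
n = 5_000
leverage = rng.uniform(0.0, 1.0, n)
coverage = rng.uniform(0.5, 5.0, n)
true_logit = -3.0 + 4.0 * leverage - 0.8 * coverage
defaults = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit a logistic model by gradient descent (a minimal stand-in for the
# machine-learning PD models discussed above).
X = np.column_stack([np.ones(n), leverage, coverage])
w = np.zeros(3)
for _ in range(2_000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - defaults) / n

# Estimated PD for a highly levered, low-coverage obligor vs. a safer one
pd_stressed = 1.0 / (1.0 + np.exp(-(w @ [1.0, 0.9, 0.6])))
pd_safe = 1.0 / (1.0 + np.exp(-(w @ [1.0, 0.1, 4.0])))
print(f"stressed-obligor PD: {pd_stressed:.3f}, safe-obligor PD: {pd_safe:.3f}")
```

The value of richer models lies precisely where this sketch stops: capturing non-linear interactions between drivers that a plain logistic specification misses.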
In addition, we note that regulatory expectations increasingly tend towards a higher granularity of stress tests, exemplified by the FRTB requirement of “a rigorous and comprehensive stress testing programme both at the trading desk level and at the bank-wide level.”3 This is in line with FRTB’s broader change in paradigm, whereby supervisory approval of IMA will be granted at the level of a bank’s individual trading desks. If it continues, this tendency may increase the computational burden on banks, paving the way for AI deployment. For example, banks could consider using RL algorithms to dynamically optimize stress scenarios, in order to expose the specific vulnerabilities of any given trading desk. By tailoring stress shocks to the desk’s evolving risk profile, they could monitor both systematic and residual risks more effectively, thus preventing spillovers to other areas of the business.
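The idea of tailoring stress scenarios to a desk’s risk profile can be made concrete with a toy example. Here an exhaustive search over a small shock grid stands in for the RL agent, which would instead learn which joint shocks expose the desk most and adapt as its sensitivities evolve; the desk sensitivities and shock sizes are illustrative assumptions.

```python
import itertools
import numpy as np

# Desk sensitivities to two risk factors (e.g., rates and credit spreads),
# expressed as PnL per unit shock. Purely illustrative numbers.
sensitivities = np.array([-1.2e6, -0.8e6])

# Candidate shock sizes per factor, in standard deviations
shock_grid = [-3.0, -1.5, 0.0, 1.5, 3.0]

# Exhaustive search for the worst-case joint shock: a simple stand-in for
# an RL agent that would learn the most damaging scenarios over time,
# re-optimizing as the desk's risk profile changes.
worst_pnl, worst_scenario = 0.0, None
for scenario in itertools.product(shock_grid, repeat=2):
    pnl = float(sensitivities @ np.array(scenario))
    if pnl < worst_pnl:
        worst_pnl, worst_scenario = pnl, scenario

print(f"worst scenario: {worst_scenario}, PnL: {worst_pnl:,.0f}")
```

With realistic portfolios the scenario space is far too large to enumerate, which is exactly why a learning-based search becomes attractive.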
Chapter 3
A preliminary regulatory perspective
As regulators strive to keep pace with AI developments, we provide a concise preview of what can be expected.
Though regulators have yet to specifically address the treatment of the risk applications outlined in Chapters 1 and 2, as early as 2020 FINMA had already highlighted some of the risks that AI entails.8
As part of its 2022 annual report1, FINMA announced that it had formulated initial supervisory expectations concerning AI, with the aim of discussing them on an application-specific basis in 2023. Furthermore, it highlighted the key risk areas currently being targeted.
These risk areas come as no surprise, since AI models are often referred to as “black boxes”. For example, it may be difficult to interpret and explain which factors influenced a VaR estimate obtained using GANs. To this end, it is worth noting that version 1.0 of the “Artificial Intelligence Risk Management Framework”9, published by the US National Institute of Standards and Technology (NIST) in January 2023, provides distinct definitions for “transparency”, “explainability” and “interpretability” (see also “Four Principles of Explainable Artificial Intelligence”10, also published by NIST).
Importantly, for a well-rounded perspective on the AI regulatory landscape, the effects of two upcoming regulations should be closely monitored: the European Union Artificial Intelligence Act (AI Act) and the revised Swiss Federal Act on Data Protection (revFADP). The former is currently being discussed in trilogue negotiations by EU co-legislators and, once adopted, will be followed by a 24-month transition period allowing organizations to implement the respective measures and obligations. The AI Act will likely have an extraterritorial impact on Swiss organizations that provide or use AI systems, even if they have no legal presence in the EU. Irrespective of the level of risk associated with specific types of AI software, which could be subject to legal interpretation, banks planning to leverage AI for financial risk management should ensure that their models are transparent and that adequate governance is in place. In addition, banks must make sure that AI training is performed in compliance with the data protection requirements set out in the revFADP (coming into force on 1 September 2023) and, where relevant, the EU General Data Protection Regulation (GDPR).
Summary
Leveraging AI unlocks new opportunities for better risk management and greater operational efficiency. While banks must navigate challenges related to reliability, computational complexity and transparency, striking a balance between traditional methods and AI-based approaches will be key to retaining competitiveness as the AI revolution unfolds.
Even though the initial investment may be substantial, the implied quality and efficiency gains are such that EY considers this a classic scenario in which first movers will reap the highest rewards.
Acknowledgements
We kindly thank Vadym Sheiko and Giovanni Facchini for their valuable contribution to this article.