4. Collaborate to define leading practice
Successfully embedding AI in the compliance ecosystem requires commitment and collaboration across multiple stakeholders: firms, vendors, regulators and government. Collaborative efforts can underpin wider adoption and the identification of further benefits, while also setting standards for appropriate governance and controls to manage the safe development and deployment of AI-enabled solutions.
Greater adoption, collaboration and increased guidance can help drive forward AI innovation and deployment. Broader adoption, underpinned by regulatory convergence, will also help avoid asymmetries in control effectiveness that could otherwise push illicit activity away from more innovative institutions and further under the radar.
5. Focus on data inputs and ethical implications
The input data used to train and operate AI is critical. Data quality is a major challenge for many financial institutions and often impacts the effectiveness and efficiency of AML controls. Projects need to assess data quality and its appropriateness for use by AI as part of the design and development phase, and also implement data management controls to monitor the ongoing data quality during operation.
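As an illustration of the kind of data-quality gate the design phase might include, the sketch below checks field completeness before records are passed to an AI model. The field names and the 95% threshold are illustrative assumptions, not a standard.

```python
def quality_report(records,
                   required_fields=("customer_id", "amount", "currency"),
                   threshold=0.95):
    """Measure per-field completeness of input records and flag whether
    the batch meets a minimum quality threshold for use by an AI model.
    Field names and threshold are hypothetical examples."""
    total = len(records)
    completeness = {}
    for field in required_fields:
        # Count records where the field is present and non-empty.
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        completeness[field] = present / total if total else 0.0
    passed = all(v >= threshold for v in completeness.values())
    return {"completeness": completeness, "pass": passed}
```

The same check can run periodically in production as one of the ongoing data management controls the text describes, alerting when quality degrades below the agreed threshold.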
Another challenge with input data (particularly training data sets) is bias, and the ethics of both the use of this capability and the nature of the trained AI. Recent high-profile examples have highlighted the possibility of unintended consequences when AI is trained on uncontrolled data inputs.
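One simple, hedged way to surface potential bias is to compare outcome rates across groups. The sketch below computes alert rates per customer segment and the ratio between the lowest and highest rates; a ratio far below 1.0 is a prompt for investigation, not proof of bias. The field names (`segment`, `alerted`) are illustrative assumptions.

```python
from collections import defaultdict

def alert_rate_disparity(decisions, group_key="segment"):
    """Compute alert rates per group and the min/max rate ratio.
    A ratio well below 1.0 suggests one group is alerted far more
    often than another and warrants review."""
    counts = defaultdict(lambda: [0, 0])  # group -> [alerts, total]
    for d in decisions:
        g = d[group_key]
        counts[g][1] += 1
        counts[g][0] += int(d["alerted"])
    rates = {g: alerts / total for g, (alerts, total) in counts.items()}
    lo, hi = min(rates.values()), max(rates.values())
    return rates, (lo / hi if hi else 1.0)
```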
6. Apply robust testing and validation
The greater the level of testing and independent challenge, the more effective the solution is likely to be and the less operational risk it will present. Common model risk management frameworks include model validation and independent model review teams that could provide effective challenge. Similarly, testing techniques such as stress and sensitivity testing, as well as champion/challenger approaches, can be leveraged.
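The champion/challenger idea can be sketched as running both models over the same cases and comparing their agreement and their recall on labelled true positives. The models and field names below (`amount`, `is_laundering`) are hypothetical stand-ins for real scoring functions.

```python
def champion_challenger(champion, challenger, cases):
    """Run an incumbent (champion) and candidate (challenger) model over
    the same cases; report how often they agree and how each performs
    on labelled true positives."""
    agree = champ_hits = chall_hits = positives = 0
    for case in cases:
        c1, c2 = champion(case), challenger(case)
        agree += int(c1 == c2)
        if case.get("is_laundering"):
            positives += 1
            champ_hits += int(c1)
            chall_hits += int(c2)
    n = len(cases)
    return {
        "agreement": agree / n if n else 1.0,
        "champion_recall": champ_hits / positives if positives else None,
        "challenger_recall": chall_hits / positives if positives else None,
    }
```

In practice the challenger would run in shadow mode against live volumes; this sketch only shows the comparison logic.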
More novel techniques for validating AI applications could be drawn from other domains, such as the red teams, bug bounties and secret-shopper-style approaches leveraged in testing and ongoing enhancement of cyber controls.
7. Engage early, deploy incrementally, review regularly
AI can bring significant disruption to compliance processes and institutions’ operating models. Engaging stakeholders early, building a common vision and deploying incrementally can help drive more effective change, constructive feedback and, ultimately, trust among business stakeholders.
When moving AI into production, organizations need to consider the operational risks that require ongoing monitoring controls. An increasing concern with promoting AI into everyday use is the possibility of malicious manipulation or unintended misuse. Periodic validation activities, including review of business use and sensitivity testing, can help mitigate this risk, along with regular review of AI decisions. At the same time, expert rules-based systems can provide an ongoing baseline for comparison and help identify where AI decisions deviate from expected norms.
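The baseline comparison described above can be sketched as a simple deviation check: score the same cases with the AI model and the rules-based system, and flag for review when disagreement exceeds a tolerance. The 10% tolerance is an illustrative assumption; a real threshold would be set through validation.

```python
def deviation_from_baseline(ai_decisions, rule_decisions, tolerance=0.10):
    """Compare AI alert decisions against a rules-based baseline on the
    same cases and flag for human review if the disagreement rate
    exceeds a tolerance (threshold here is a hypothetical example)."""
    assert len(ai_decisions) == len(rule_decisions), "decision lists must align"
    diffs = sum(1 for a, r in zip(ai_decisions, rule_decisions) if a != r)
    rate = diffs / len(ai_decisions) if ai_decisions else 0.0
    return {"deviation_rate": rate, "review_needed": rate > tolerance}
```

Run periodically, this kind of check gives the monitoring control a concrete trigger: sustained drift from the rules baseline prompts the regular review of AI decisions the text recommends.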
Conclusion: It’s time to act
The current AML approach is struggling to keep pace with modern money laundering activity. There is a real opportunity for AI not only to drive efficiencies, but more importantly to identify new and creative ways to tackle money laundering.
While AI continues to pose challenges and test our appetite for risk, the question all financial institutions should be asking is: can we afford not to embrace AI in our AML programs? Ultimately, when integrated with the right strategy and with the right focus on building trust, innovating with AI must be seen as a risk worth taking.