
How to mitigate AI discrimination and bias in financial services

The Consumer Financial Protection Bureau is cracking down on AI practices used in consumer financial products and services.

In brief

  • The CFPB expanded its definition of “unfair” acts and practices to include discriminatory conduct, including conduct carried out through AI, aligning its enforcement with anti-discrimination laws.
  • As laws and regulations around AI evolve, financial institutions can act today to adopt responsible practices around their technology.

Financial institutions are increasingly facing scrutiny from regulators to demonstrate accountability for protecting consumers from the adverse impacts of artificial intelligence (AI) strategies that may become embedded in financial products and services. Institutions can mitigate potential harm to consumers by identifying and removing forms of bias and discrimination that may be introduced through the adoption of AI and machine learning in data modeling and advanced decision-making practices.


Regulatory background


The Consumer Financial Protection Bureau (CFPB or the Bureau) announced that it will expand the definition of “unfair” within the unfair, deceptive or abusive acts or practices (UDAAP) regulatory framework by applying the Consumer Financial Protection Act’s standard of unfairness to include conduct that it asserts is discriminatory. The CFPB also plans to review “models, algorithms and decision-making processes used in connection with consumer financial products and services.”¹


Further, the CFPB outlined that federal anti-discrimination laws require adverse action notices to be provided to applicants, with an explanation of the rationale for rejection, even when the decision relies on data models using complex algorithms.² In addition, the Bureau issued an interpretive rule stating that digital marketers can be held liable for committing UDAAP and other consumer financial protection violations.³


The messaging from the CFPB, federal regulators and state lawmakers is increasingly clear that financial institutions are expected to hold themselves accountable for protecting consumers against forms of algorithmic bias and discrimination. It is the intent of regulators to scrutinize the decision-making roles that these technologies play in the marketing, underwriting, and support of financial products and services, and to hold firms liable when these practices fail to protect consumers from undue harm.

Evolving legal landscapes

Legislators and regulators are working to establish boundaries that govern the use of AI and protect the public. Responsibly implementing AI requires monitoring of these trends. According to “The State of State AI Policy (2021-2022 Legislative Session),” an annual list published by the Electronic Privacy Information Center, seven state and local AI-related bills were signed into law or took effect, eight AI-related bills were passed, and another 14 AI-related bills were introduced between 2021 and 2022.⁴

Actions to move toward responsible AI

1. Know the data: from source to table

The first line of defense against algorithmic bias is a clear understanding of why and how data is being collected, organized, processed and prepared for model consumption. AI-induced bias can be a difficult target to identify, as it can result from unseen factors embedded within the data that render the modeling process unreliable or potentially harmful.
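One concrete way to start is to profile the data before any model training. The sketch below is a minimal, hypothetical example (the function name and record layout are assumptions, not part of any regulatory guidance): it surfaces how well each demographic group is represented and how much data is missing per field, two common sources of hidden bias.

```python
from collections import Counter

def profile(records, group_field):
    """Surface group representation and per-field missingness
    in a list of dict records, before any modeling begins."""
    total = len(records)
    # share of records belonging to each group
    counts = Counter(r[group_field] for r in records)
    representation = {g: c / total for g, c in counts.items()}
    # fraction of records where each field is missing (None)
    missing = {f: sum(1 for r in records if r.get(f) is None) / total
               for f in records[0]}
    return representation, missing

# Example: one group is underrepresented and income is partly missing
records = [
    {"group": "A", "income": 100, "zip": None},
    {"group": "B", "income": None, "zip": "1"},
    {"group": "A", "income": 50, "zip": "2"},
]
rep, miss = profile(records, "group")
```

A skewed representation or a missingness rate that differs sharply across groups is a signal to investigate the collection process before the data ever reaches a model.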

2. Test data labeling and proxies

Apply testing rigor to measure pretraining bias and optimize features and labels in the training data. For example, equality of opportunity measurements can observe whether the consumers who should qualify for an opportunity are equally likely to do so regardless of their group membership. Disparate impact measurements can gauge whether algorithmic decision-making processes impact population subgroups disproportionately and thereby disadvantage some subgroups relative to others.⁵
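The two measurements above can be sketched in a few lines. This is a simplified illustration, assuming binary outcomes and a single group attribute (the function names and the convention of dividing unprivileged by privileged rates are assumptions for the example): the disparate impact ratio compares favorable-outcome rates across groups, and the equal opportunity difference compares true positive rates among applicants who genuinely qualified.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A value near 1.0 suggests parity; values far below 1.0 suggest
    the unprivileged group is disadvantaged."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

def equal_opportunity_diff(labels, preds, groups, privileged):
    """Gap in true positive rates (privileged minus unprivileged)
    among applicants who truly qualified (label == 1)."""
    def tpr(in_group):
        qualified = [p for l, p, g in zip(labels, preds, groups)
                     if l == 1 and in_group(g)]
        return sum(qualified) / len(qualified)
    return tpr(lambda g: g == privileged) - tpr(lambda g: g != privileged)
```

For example, if group A is approved 75% of the time and group B only 25% of the time, the disparate impact ratio is 0.33, well below the commonly cited four-fifths (0.8) guideline, and a prompt for further investigation.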

3. Analyze results and identify key risk areas

Systematically investigate and study results from testing to identify key risk areas for bias in the modeling process. Tag material data points for human reviewers who can assess machine-based outputs and help to reclassify results for greater effectiveness. Train machine-learning models based on qualitative evaluations and then apply them to the entire population to assist in bias detection, along with documenting historical incidents of bias and monitoring against unfair practices.
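The tagging step can be as simple as a triage rule that routes borderline scores and monitored segments to a human review queue. The sketch below is a hypothetical illustration (the cutoff, band width, and field names are assumptions, not a prescribed methodology): decisions close to the decision boundary, or in segments flagged as high-risk, are held for a reviewer rather than auto-decided.

```python
def needs_human_review(score, cutoff=0.5, band=0.05, flagged_segment=False):
    # Hold borderline scores and monitored segments for a reviewer
    return abs(score - cutoff) <= band or flagged_segment

def triage(scored, cutoff=0.5, band=0.05):
    """Split model outputs into auto-decided and human-review queues.
    Each record is a dict with a 'score' and an optional 'flagged' key."""
    auto, review = [], []
    for rec in scored:
        if needs_human_review(rec["score"], cutoff, band,
                              rec.get("flagged", False)):
            review.append(rec)
        else:
            auto.append(rec)
    return auto, review
```

Reviewer decisions on the held records can then feed back as labeled examples, which is one way to train the qualitative-evaluation models the step above describes.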

4. Independently verify and validate fairness in modeling

Engage a third-party organization that is not involved in the development of data modeling frameworks. Assess whether each product has been designed to meet requirements and specifications (e.g., technical, compliance, regulatory, legal), and confirm that any unintended algorithmic bias and discrimination has been identified and eliminated.

5. Harness the power of synthetic data

Safeguard sensitive information in accordance with data privacy laws. Help to improve modeling strength and mitigate data bias through the meticulous manufacture of synthetic (artificial) data that replicates real-world events or objects while removing risky variables that can induce forms of digital discrimination against protected classes.
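In its simplest form, synthesis can mean drawing each non-sensitive field from its empirical distribution while omitting protected attributes entirely. The sketch below is a deliberately naive illustration (function name and record layout are assumptions): production synthetic-data tools model the joint distribution of fields, whereas this sketch samples each field independently and therefore breaks cross-field correlations.

```python
import random

def synthesize(records, sensitive, n, seed=0):
    """Generate n synthetic dict records by sampling each
    non-sensitive field independently from its empirical values.
    Sensitive attributes are dropped from the output entirely."""
    rng = random.Random(seed)
    fields = [f for f in records[0] if f not in sensitive]
    pools = {f: [r[f] for r in records] for f in fields}
    return [{f: rng.choice(pools[f]) for f in fields} for _ in range(n)]

# Example: 'race' never appears in the synthetic records
real = [{"income": 100, "race": "X"}, {"income": 50, "race": "Y"}]
fake = synthesize(real, sensitive={"race"}, n=4)
```

Because independent sampling destroys correlations, it also destroys proxy relationships between remaining fields and the dropped attribute, which is both the appeal and the cost of this crude approach; realistic synthesis must balance utility against that protection.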

Felix A. Sanchez has contributed to this article.


The CFPB maintains that responsible business conduct includes the self-examination and self-reporting of algorithmic modeling processes and their impacts on consumers. A proactive approach to such practices can help financial institutions build trust with consumers while shaping the narrative with regulators, and it may lead to favorable outcomes: reduced or waived CFPB enforcement fines, penalties or remediation requirements; fewer cited violations; and room for negotiation and credit for extraordinary cooperation.

