Increasing Fairness in Machine Learning Models: Best Practices for Overcoming Algorithmic Bias

All models, and machine learning (ML) models in particular, run the risk of producing biased or unfair outcomes. This bias is typically driven by the quality of the underlying data used to train or calibrate the model, which often includes sensitive drivers, also called sensitive attributes, such as age, gender, nationality, religion, race, language, culture, marital status, economic condition, and zip code.

Because these models are increasingly used to make important decisions not only in our financial lives but also in areas such as university admissions, social benefits assignment, criminal recidivism risk prediction, and resume screening, these biases have significant social and ethical implications. Moreover, and importantly for the financial services industry, discrimination based on most demographic factors is illegal under the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA).

One example of how machine learning bias has played out comes from the field of facial recognition. Because of the lack of diversity in datasets used to train facial recognition software, models are prone to misidentification, especially among women and people of color. For instance, a system developed by IBM “was found to have misidentified gender in up to 7% of lighter-skinned females, up to 12% of darker-skinned males, and up to 35% of darker-skinned females.”

To mitigate the negative impact of this kind of algorithmic bias, Microsoft and Amazon are reportedly distancing themselves from the $8B facial recognition market until “stronger regulations [are] put in place to govern the ethical use of facial recognition technology,” while IBM plans on exiting the business altogether.

While ALM, AML, BSA, CECL, PPNR, and all manner of other acronymic financial models don’t deal with facial recognition, the lack of diversity in training data leads to similar issues in practice. To remedy the situation, here are some useful best practices for chipping away at the algorithmic bias many institutions have unintentionally built into their models.

Understand & Document the Model Thoroughly

  • Define the model’s overall purpose, assumptions, and limitations
  • Check the conceptual soundness and model objective function for unintended effects
  • Explore techniques to include “fairness” in the ML model objective function, and try to understand the limitations of the approach (a minimal sketch follows this list)
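
To make the objective-function bullet concrete, here is a minimal sketch of one common formulation: a logistic regression trained with an added demographic-parity penalty, the squared gap in mean predicted score between two groups. The synthetic data, the penalty weight `lam`, and the choice of parity as the fairness term are all illustrative assumptions, not a prescribed method; such a penalty targets only one definition of fairness and can trade accuracy for it.

```python
# A minimal sketch of adding a fairness term to a model's objective.
# The penalty weight `lam` and the demographic-parity penalty are
# illustrative choices; stronger formulations (e.g., constrained
# optimization) exist, and each has known limitations.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: X features, y labels, s a binary sensitive attribute.
n = 1000
X = rng.normal(size=(n, 3))
s = rng.integers(0, 2, size=n)          # sensitive attribute (group A/B)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    p = sigmoid(X @ w)
    # Standard log-loss (cross-entropy).
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Demographic-parity penalty: squared gap in mean predicted score
    # between the two sensitive groups.
    gap = p[s == 1].mean() - p[s == 0].mean()
    # Gradients of each term with respect to w.
    grad_ce = X.T @ (p - y) / n
    dgap = p * (1 - p)                   # derivative of the sigmoid
    grad_gap = (X[s == 1].T @ dgap[s == 1]) / (s == 1).sum() \
             - (X[s == 0].T @ dgap[s == 0]) / (s == 0).sum()
    return ce + lam * gap ** 2, grad_ce + lam * 2 * gap * grad_gap

w, lam, lr = np.zeros(3), 5.0, 0.5
for _ in range(500):
    _, g = loss_and_grad(w, lam)
    w -= lr * g

p = sigmoid(X @ w)
print("score gap between groups:", p[s == 1].mean() - p[s == 0].mean())
```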

Promote Algorithmic Transparency

  • Publish algorithms in the public domain and invite scrutiny from outside experts
  • Make ML models more transparent, explainable, and interpretable, and reduce their complexity
  • Perform exploratory fairness analysis to identify data patterns that drive unfair decision-making, and incorporate pre-processing techniques when appropriate (see the sketch after this list)
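
As one way of putting the exploratory-analysis and pre-processing bullets into practice, the sketch below, which uses hypothetical synthetic data, computes per-group selection rates and the disparate-impact ratio, then applies reweighing (after Kamiran & Calders) as one pre-processing option among several.

```python
# A minimal sketch of an exploratory fairness check plus one pre-processing
# technique (reweighing, after Kamiran & Calders): instances are weighted so
# that the sensitive attribute and the label look independent to the learner.
# The data and probabilities here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
s = rng.integers(0, 2, size=n)                                # sensitive attribute
y = (rng.random(n) < np.where(s == 1, 0.6, 0.4)).astype(int)  # biased labels

# Exploratory check: selection rate per group and the disparate-impact ratio.
rate = {g: y[s == g].mean() for g in (0, 1)}
print("selection rates:", rate)
print("disparate impact ratio:", min(rate.values()) / max(rate.values()))

# Reweighing: weight each (group, label) cell by P(s)P(y) / P(s, y),
# so a downstream learner sees group and outcome as independent.
weights = np.empty(n)
for g in (0, 1):
    for lab in (0, 1):
        mask = (s == g) & (y == lab)
        weights[mask] = (s == g).mean() * (y == lab).mean() / mask.mean()

# These weights can be passed to most learners, e.g. as `sample_weight`
# in scikit-learn's fit() methods.
print("weight per (group, label) cell:",
      {(g, lab): round(weights[(s == g) & (y == lab)][0], 3)
       for g in (0, 1) for lab in (0, 1)})
```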

Own the Model

  • Hold the developers and users, not the ML algorithm, accountable for the model’s decision-making
  • Investigate ML models through algorithmic accountability reporting so they are no longer black boxes (a sketch of one such record follows this list)
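
One lightweight way to start accountability reporting is a “model card” that records the model’s purpose, its accountable humans, and its known limitations. The sketch below is illustrative only: the fields, names, and metric values are assumptions, not a standard template.

```python
# A minimal sketch of an accountability record ("model card") that names
# the humans responsible for a model and captures what reviewers need to
# open the black box. The fields below are illustrative, not a standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    purpose: str
    owner: str                      # accountable developer/model owner
    approver: str                   # accountable user/line of business
    assumptions: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    fairness_checks: dict = field(default_factory=dict)

card = ModelCard(
    name="retail-credit-scorer-v3",
    purpose="Rank retail credit applications by default risk.",
    owner="jane.doe@bank.example",
    approver="consumer-lending-lob@bank.example",
    assumptions=["Applicant pool resembles the training population."],
    limitations=["Not validated for small-business lending."],
    fairness_checks={"disparate_impact_ratio": 0.91},
)

# Publishing the card alongside the model supports accountability reporting.
print(json.dumps(asdict(card), indent=2))
```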

Spread Algorithmic Awareness

  • Enhance awareness and knowledge about ML models to empower users
  • Consult lines of business and domain experts to identify possible biases in model assumptions and structure

Monitor, Validate, and Audit Early and Often

  • Monitor models on an ongoing basis (a monitoring sketch follows this list)
  • Have the second and third lines of defense validate and audit ML models in accordance with SR 11-7 guidance
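
As an illustration of ongoing monitoring, the sketch below recomputes a single fairness metric, the demographic-parity gap in approval rates, on simulated monthly batches of production decisions and flags breaches of a threshold. The 10% threshold, the monthly cadence, and the drift simulation are hypothetical choices, not regulatory values.

```python
# A minimal sketch of ongoing fairness monitoring: recompute a fairness
# metric (here, the demographic-parity gap in approval rates) on each new
# batch of production decisions and flag breaches. The threshold and batch
# data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
GAP_THRESHOLD = 0.10   # hypothetical alerting threshold

def parity_gap(approved, group):
    """Absolute difference in approval rate between the two groups."""
    return abs(approved[group == 1].mean() - approved[group == 0].mean())

# Simulate monthly batches of production decisions with gradual drift.
for month in range(1, 7):
    n = 500
    group = rng.integers(0, 2, size=n)
    drift = 0.02 * month                 # simulated gradual drift
    approved = (rng.random(n) < np.where(group == 1, 0.5 + drift, 0.5)).astype(float)
    gap = parity_gap(approved, group)
    status = "ALERT: escalate to validation" if gap > GAP_THRESHOLD else "ok"
    print(f"month {month}: parity gap = {gap:.3f} ({status})")
```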

Work with a Trusted Authority in Model Validation

In financial services, following the best practices outlined above may not be enough – neither for your business, nor for regulators. Rely on the help of a third-party model validation service like the RMA Model Validation Consortium (MVC) to assist your institution in identifying and correcting for algorithmic bias. 


Kevin Oden

As the Managing Director of the RMA Model Validation Consortium, Kevin is passionate about providing high-quality model validation services at a competitive price point for RMA member banks. Kevin holds a Ph.D. in math from UCLA and was a leader in risk management and model validation for Wells Fargo Bank.