
Don’t let AI trigger a fair-lending violation

The use of artificial intelligence and machine learning presents both opportunities and risks for financial institutions.

While such predictive techniques may help mitigate consumer lending credit risk, financial institutions should be cognizant of the potential for bias and its implications for fairness.

In the short span since banks began adopting the new technology, lawmakers and regulators have already shown growing interest in how to oversee algorithmic decision-making.

Just last week, Federal Deposit Insurance Corp. Chairman Jelena McWilliams called for regulatory guidance on how banks use and govern AI and machine learning. And some regulators have begun to crack down, with the Department of Housing and Urban Development recently charging Facebook with violating the Fair Housing Act for using machine learning in ways that excluded, and thus discriminated against, protected classes.

More recently, Sens. Elizabeth Warren, D-Mass., and Doug Jones, D-Ala., sent a letter to financial regulators including the Federal Reserve Board, the FDIC and the Consumer Financial Protection Bureau asking what the agencies were “doing to identify and combat lending discrimination by lenders who use algorithms for underwriting.”

The risks in using AI and machine learning are not isolated to one specific area of a financial institution's risk program. Functions such as operational risk, model risk, legal, information technology and compliance will have to work together to form coherent programs that avoid duplicating one another's considerations in order to properly mitigate the potential impact of bias or unfair treatment.

How bias emerges in AI and model outputs

Bias can enter a model and affect its output through many channels. For example, training data for a supervised learning algorithm that is unrepresentative of the customer population the model will be applied to can cause sample selection bias.
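A simple way a validation team might screen for this kind of sample selection bias is to compare the distribution of a key attribute in the training sample against the applicant population the model will score. The Python sketch below is a hypothetical illustration; the file and column names are assumptions, not part of any prescribed method.

```python
# Hypothetical sketch: screen for sample selection bias by comparing the
# training sample with the population the model will be applied to.
# File and column names ("training_sample.csv", "annual_income") are assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def looks_representative(train: pd.Series, population: pd.Series,
                         alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test of the training sample against
    the target population; a tiny p-value flags a material mismatch."""
    _, p_value = ks_2samp(train.dropna(), population.dropna())
    return p_value >= alpha

# Example usage:
# train = pd.read_csv("training_sample.csv")
# population = pd.read_csv("applicant_population.csv")
# if not looks_representative(train["annual_income"], population["annual_income"]):
#     print("Possible sample selection bias on annual_income")
```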

Implicit biases that affect the data generation process may also produce training data that is inappropriate for a model's purpose or use. These issues are not limited to how the data is generated or how a particular sample is used to train a model.

Fair lending laws prohibit discrimination based on protected-class characteristics for consumer credit products. Explainable techniques, such as logistic regression, can be used to produce models in the credit decision process with little risk because lenders use risk management processes to review models and suppress any protected-class variables.

However, more complex methods can include variable interactions that proxy for protected-class characteristics, even when the protected-class variables, and the variables correlated with them, are meant to be suppressed in a model.
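One way to surface this proxy risk is to test how well the remaining, non-protected features can predict the protected attribute itself: strong predictive power suggests those features jointly encode protected-class information. The sketch below is illustrative only; the column names, model choice and threshold are assumptions.

```python
# Hypothetical sketch: flag proxy risk by checking whether the candidate
# model features can predict a (binary) protected attribute.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def proxy_risk_auc(features: pd.DataFrame, protected_flag: pd.Series) -> float:
    """Cross-validated AUC for predicting the protected attribute from the
    candidate features; values well above 0.5 warrant closer review."""
    clf = GradientBoostingClassifier(random_state=0)
    scores = cross_val_score(clf, features, protected_flag, cv=5, scoring="roc_auc")
    return scores.mean()

# Example usage with hypothetical data:
# auc = proxy_risk_auc(applicants[candidate_feature_columns], applicants["protected_flag"])
# if auc > 0.7:
#     print("Candidate features appear to proxy for the protected class")
```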

Within the model risk sphere, validation teams are primarily concerned with bias in data or outputs that may exacerbate legal or regulatory risk. Bias in a model may be acceptable depending on the purpose and use.

However, a clear path to assessing bias presents an opportunity to improve all models.

Validating outputs to assess bias risks

Many financial institutions are well positioned to manage these new risks through their model-risk management programs. Mature programs include testing and validation to identify risks that may result in, or contribute to, inaccurate output.

However, existing validation techniques will have to adapt to keep pace with the rate of technological change and the potential for unfair, biased or discriminatory decisions affecting consumers.

Financial institutions should gauge whether the data inputs are appropriate for the model's purposes and uses. Academic and industry research has shifted toward evaluating pre- and post-processing techniques for machine learning and producing open-source metrics to assess discrimination and bias.

Large corporate entities such as Google, IBM and Microsoft have all contributed to the growing literature. An open-source culture surrounding this topic has led to more easily accessible ways for statistical analysts to implement this testing.
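Fairlearn, one open-source toolkit in this space, exposes several common group-fairness metrics directly. The sketch below computes two of them on hypothetical validation data; it is illustrative only and not a complete fairness assessment.

```python
# Hypothetical sketch using Fairlearn's group-fairness metrics on toy data.
import numpy as np
from fairlearn.metrics import demographic_parity_difference, equalized_odds_difference

y_true = np.array([1, 0, 1, 1, 1, 0, 1, 0])                  # actual repayment outcomes
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])                  # model approve/decline decisions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])   # protected-class group labels

# Gap in approval (selection) rates between groups; 0 would mean parity.
dp_gap = demographic_parity_difference(y_true, y_pred, sensitive_features=group)

# Largest gap in true/false positive rates between groups; 0 would mean equalized odds.
eo_gap = equalized_odds_difference(y_true, y_pred, sensitive_features=group)

print(f"Demographic parity difference: {dp_gap:.2f}")
print(f"Equalized odds difference:     {eo_gap:.2f}")
```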

The remaining challenges

While there are tools and validation techniques that can help financial institutions mitigate the risk of bias in AI, challenges remain in implementing them. For example, there is a lack of clear regulatory guidance on acceptable practices for mitigating bias in algorithms.

Any potential guidance should clarify the acceptable trade-off between model accuracy and bias mitigation efforts. Regulators should also address whether the thresholds proposed by academics and the industry for bias measurement metrics give the agencies comfort that banks have implemented sufficient controls to eliminate discrimination.
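The four-fifths (80%) rule from adverse impact analysis is one commonly cited benchmark of this kind: the approval rate for a protected group should be at least 80% of the rate for the most favored group. The sketch below uses hypothetical approval counts, and the 0.8 cutoff is an illustrative convention, not regulatory guidance.

```python
# Hypothetical sketch: adverse impact ratio compared against the
# commonly cited four-fifths (0.8) benchmark. All counts are made up.
def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_reference: int, total_reference: int) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    return (approved_protected / total_protected) / (approved_reference / total_reference)

ratio = adverse_impact_ratio(approved_protected=120, total_protected=400,
                             approved_reference=200, total_reference=500)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.40 = 0.75 here
if ratio < 0.8:
    print("Below the four-fifths benchmark; the decision process warrants review.")
```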

This article originally appeared in American Banker.