Consumer groups to CFPB: Make banks fairness-test AI lending software

The Consumer Federation of America and Consumer Reports sent a letter to the Consumer Financial Protection Bureau on Wednesday urging it to set ground rules for the use of artificial intelligence in lending. The groups' top recommendation was that banks should be required to search for less discriminatory alternatives to the models they use. Later the same day, the CFPB issued a fair lending report saying it is requiring lenders to make an effort to ensure their lending algorithms, AI-based or not, comply with fair lending laws.

The issue of discrimination in AI-based lending has percolated for the dozen-plus years that banks and fintechs have been using the technology. Consumer advocates and regulators worry that consumers could be harmed by AI-generated decisions, though some lenders using this software have reported increasing their lending to people in protected classes.

In an interview, Adam Rust, director of financial services at the Consumer Federation of America, said his organization is not reacting to complaints or problems that exist today, but trying to forestall the consumer harm that could come from the widespread use of AI-based lending if regulators do not put guardrails in place.

What drove the CFA to write the letter, he said, was a sense that a lot could be done to avoid the "fairness through unawareness" problem.

"I am sure there are lenders out there who would contend that just because they can't determine the demographic makeup of an applicant, that it's not possible for their system to be discriminatory," Rust said. "I also think that there are champions inside banks that want to do the right thing, and something like guidance would really help them to make that argument."

AI-based lending may not be widespread now, but Rust believes it will be in the future.

"There will be a point in time when AI is so complicated that we may not be able to understand the basis of decisions," he said. "If we can tell the models what they need to be doing before they get too smart, we should, and maybe that time is not as far away as we think."

While the CFPB has said that fair lending laws apply to algorithmic lending and has brought some enforcement actions, "it has not laid out rules of the road for how it could be done correctly," Rust said.

The consumer groups "are asking all the right questions," said Kareem Saleh, founder of FairPlay, a provider of fairness-testing software. A requirement to find a less discriminatory model is something the industry has been asking for, he said.

"I think the bureau's posture has been, we're reluctant to articulate measures and thresholds because we're afraid people are going to game them," he said. "And that's not unreasonable."

In its report, the CFPB said it has been directing lenders to ensure compliance with the Equal Credit Opportunity Act and Regulation B, including by having them develop "a process for the consideration of a range of less discriminatory models." 

Industry reaction to the consumer advocates' letter and the CFPB's report was generally positive.

"It is encouraging to see the increased focus on less discriminatory alternative searches in the CFPB's Annual Fair Lending report and the CFA's letter," said Nat Hoopes, vice president and head of public policy and regulatory affairs at Upstart.

Responsible AI can make lending more inclusive and transparent, he said, and all lending companies should be regularly evaluating whether they are using the fairest possible approach to achieve their objectives.

"I fully agree with CFA and Consumer Reports that it would be useful to continue to get more guidance from the agencies so that institutions that want to do this well have some best practices that they can look to," said Stephen Hayes, partner at Relman Colfax, in an interview. "That will also help encourage a more level playing field."

"Many lenders see LDA [less discriminatory alternative] searches as a 'check the box' exercise instead of an opportunity to leverage advances in technology to make better lending decisions," said Laura Kornhauser, CEO and co-founder of Stratyfy. "This is why we need stronger regulatory guidance on the search for and implementation of less discriminatory algorithms," she said.

The CFA and Consumer Reports did not try to define what they meant by "less discriminatory algorithm" in their letter.

"The key thing about the disparate impact doctrine is there is no ultimate best," Hayes said. "The doctrine is designed to encourage institutions, landlords, companies, employers to constantly be thinking about building a better system, building a less discriminatory system that still continues to meet their needs."

Searching for a less discriminatory algorithm or alternative doesn't necessarily mean a lender has to use a completely different model than the one it's using, according to Saleh. It can modify an existing model. 

For instance, a lender might use a model in which a conventional credit score supplies 70% of the predictive power and the remaining 30% comes from other variables found on a credit report, Saleh said. To produce a less discriminatory alternative, it could reduce the reliance on the credit score from 70% to 60% and tune up the influence of the other variables.

"It is an iterative process and what you start with is looking for fairer variants of the same model," Saleh said.

For one lender it works with, FairPlay ran 100 searches for less discriminatory alternatives, Saleh said. All of the models his team found in those searches involved a fairness-accuracy trade-off: to be fairer, a model had to give up some accuracy.

But when the FairPlay team ran 300 searches, it found models that were fairer with no loss of accuracy.

"I think a key learning for us is there's almost always a fairer model if you invest the effort to look for it, but how much is enough, especially for smaller institutions that have limited resources?" Saleh said.

Sometimes a model will look more accurate and fairer in the lab but fail to deliver both in the real world.

"The real question is, is the less discriminatory algorithm viable and will it perform within my risk tolerance and also be fairer?" Saleh said.
