Trepidation is often one of the first sentiments lenders express when asked why they haven't yet implemented money-saving artificial intelligence tools. Given the regulatory oversight of the home lending industry, mortgage professionals are all too aware that unintended consequences can cost them huge penalties. So how can they use this technology while ensuring it doesn't run afoul of the rules? In this session, Kareem Saleh, founder and CEO of FairPlay AI, a self-described fairness-as-a-service company, explains how financial institutions can use AI-powered tools to assess their automated decisioning models in minutes, increasing both fairness and profits.
Transcript:
Maria Volkova (00:10):
Good afternoon. My name is Maria Volkova and I'm the technology reporter for National Mortgage News. Today we will be discussing two very timely subjects: fair lending and artificial intelligence. Specifically, we will dive into how technology can be used to bridge the home ownership gap, give certain borrowers who are sometimes overlooked during the underwriting process the opportunity to become homeowners, and how ultimately that can help a mortgage lender's bottom line. Joining me for the conversation is Kareem Saleh, CEO of FairPlay. The company uses AI to ensure that the automated systems used by lenders don't discriminate based on race and gender. Hey Kareem, thanks for joining us.
Kareem Saleh (00:58):
Thanks for having me, Maria. Delighted to be here.
Maria Volkova (01:01):
Before we jump into questions, I'd like to remind our audience that we will be answering questions at the end of the conversation, so please send them our way. Kareem, to begin this conversation, can you talk a little bit about FairPlay, what life events pushed you to create your company and what issues your product is trying to address?
Kareem Saleh (01:27):
Thanks for the opportunity, Maria. As you noted, I am the founder and CEO of FairPlay. We cheekily refer to ourselves as the world's first fairness-as-a-service company, and we work with financial institutions to automate their fair lending testing and reporting and to run second look and decline management programs designed to increase positive outcomes for historically disadvantaged groups. I've been working on this question of underwriting inherently hard-to-score borrowers my whole career. I got started doing that work in frontier emerging markets like Sub-Saharan Africa, Latin America, Eastern Europe and the Caribbean. And then for several years I was in the U.S. government, at the State Department and at the Overseas Private Investment Corporation, running programs designed to increase inclusion and promote development-friendly outcomes for folks at the bottom of the economic pyramid. And what I discovered is that most of the decisioning in financial services today is overfit to the majority population, in part because we have an unfortunate history in America of financial exclusion through practices like redlining that prevented certain protected classes from having equitable access to the financial system for many years.
(02:57):
And so as a result, much of the data that's available for certain segments of the population is messy, missing or wrong. And what we have found is that through the adoption of more advanced analytics, we can quantify and correct for some of the biases that are found in traditional credit scoring and underwriting data. Our fairness-as-a-service solutions allow anybody making a high-stakes decision about someone's life, like whether they should get a mortgage, to answer five questions: Is my decision fair? If not, why not? Could it be fairer? What's the economic impact to our business of being fairer? And finally, did we give our declines, the folks we rejected, a second look to see if maybe we said no to somebody we ought to have approved? We're fortunate to be working with some of the biggest banks and financial technology companies in America to automate their fair lending testing and reporting and to optimize their decisions to be fairer within their risk tolerance.
Maria Volkova (04:11):
Before we jump into how AI can help in checking biases in lending, let's talk about the state of mortgage fairness today and understand it from a more macro perspective. How close are we at this point to closing the home ownership gap? And how bad is financial exclusion in the mortgage industry at this point in time?
Kareem Saleh (04:39):
Yeah, so we run an annual mortgage fairness study based on publicly available data in the Home Mortgage Disclosure Act database, the HMDA database, where we have computed the state of mortgage fairness for every protected group in America going back as far as there is data, which is around 1990. And what our findings show is that during the great financial crisis, mortgage fairness plunged to very low levels. I should add, by the way, that when I use the word fairness in this context, I'm using a definition commonly used by courts and regulators to evaluate the fairness of a lending program. It's called the adverse impact ratio, and it asks: at what rate does one group receive a positive outcome, like approval for a mortgage, relative to another group? And so our studies of the HMDA data show that in 2008, at the height of the great financial crisis, mortgage fairness plunged, especially for Black and Native American groups, to about 60%, which means that those groups were being approved at 60% the rate of the control group.
(05:56):
Now since the great financial crisis, things have gotten much better. For example, at the height of the pandemic in 2021, after considerable government intervention into the economy and into the mortgage market to keep folks in homes through things like forbearances, mortgage fairness reached, for example, 84.4% for Black Americans and around 80% for Native Americans. So well off of the great financial crisis lows. But when we look over the longer horizon, 30-plus years back to 1990, what we find is that there's been no net improvement in mortgage fairness for most protected groups in America. And some protected groups, like Native Americans, have experienced very steep declines in mortgage fairness. To give you some idea, in 1990, Native American home buyers were approved for mortgages at 95% the rate of the control group. In 2021, Native Americans were approved at a rate of around 80% relative to the control group.
(07:12):
And then of course in 2022, interest rates started to rise and the economic outlook got shakier. We've now just crunched the 2022 data, and our findings are worrisome. What we find is that in 2022, as interest rates rose and the economic outlook became more precarious, mortgage fairness plunged from 2021 levels on the order of about 10% for most groups. So in 2021, Black home buyers were approved for mortgages at around 84.4% the rate of the control group; in 2022, Black home buyers were approved at around 78% the rate of the control group. So a pretty steep plunge for all groups, but a disproportionately large plunge for Black, Native American and Hispanic home buyers.
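To make the metric concrete, here is a minimal sketch of the adverse impact ratio Saleh describes, in Python. The function names and approval counts below are illustrative assumptions, not FairPlay's code or actual HMDA figures.

```python
# Minimal sketch of the adverse impact ratio (AIR).
# All counts are hypothetical, not actual HMDA figures.

def approval_rate(approved: int, applications: int) -> float:
    """Share of applications receiving a positive outcome (approval)."""
    return approved / applications

def adverse_impact_ratio(protected_rate: float, control_rate: float) -> float:
    """Rate at which the protected group is approved relative to the control group."""
    return protected_rate / control_rate

# Hypothetical one-year approval counts for two groups.
control_rate = approval_rate(approved=85_000, applications=100_000)    # 85.0%
protected_rate = approval_rate(approved=66_300, applications=100_000)  # 66.3%

air = adverse_impact_ratio(protected_rate, control_rate)
print(f"AIR: {air:.1%}")  # 78.0%: approved at 78% the rate of the control group
```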
Maria Volkova (08:09):
Can you maybe explain a little bit why a high interest rate environment has an impact on lending disparities?
Kareem Saleh (08:18):
Sure. Well, it obviously makes affordability more difficult. So if you were living at the margin of affordability before, higher interest rates may push you over that threshold into the range of unaffordability. But lenders have also tightened their credit boxes. They have tightened their lending practices in part to account for uncertainty over the economic outlook, which is also part of the reason that interest rates have risen, right? So there's a complicated set of macro factors that have both made the cost of capital more expensive and made lenders more restrained in their willingness to lend. And I suspect those are the two primary forces driving this downturn in mortgage fairness, from 2021 levels back towards crisis-era levels, although we haven't reached that bottom quite yet.
Maria Volkova (09:23):
So there have been a lot of concerns from regulators, including the CFPB, that systems that use AI, specifically automated systems, are biased. With that in mind, how can AI and machine learning actually work to solve disparities in housing and correct ingrained biases in the automated systems used by lenders?
Kareem Saleh (09:49):
Yeah, so you're right. The regulatory focus on fairness issues in mortgage, and really across consumer finance, has increased tremendously. We see a lot of new consent orders about fair lending. The CFPB recently reported its annual fair lending activities to Congress, which it's required to do under the Equal Credit Opportunity Act. And what we saw in 2022 was triple-digit increases in the number of institutions cited for fair lending violations, the number of exams for fair lending and other regulatory oversight measures, suggesting that fair lending supervision and enforcement is moving up the regulatory agenda. And the concern, I think, has been both around the algorithms themselves, which are increasingly used in marketing and fraud detection and underwriting and pricing and valuation, as well as some of the data elements, which may, as I said earlier, not be representative of certain groups, especially those groups who were historically excluded or preyed upon by the financial system.
(11:11):
Plus, AI is thought to have this kind of black box quality, right? This idea that you don't know why the AI did what it did, you just know that it did it. And that obviously creates a lot of concerns for regulators, who worry about the capacity of those systems either to negatively impact the safety and soundness of the institutions that use them or to harm consumers. Now the good news is the regulators have also been very clear that they believe there is promise in AI technologies. There was actually a joint statement put out a few years ago by all of the federal financial regulators talking about cash flow data and how it has the potential to increase inclusion and increase fairness, in part because it's seen as being much closer to the consumer balance sheet than perhaps some traditional data sources that have been used in mortgage underwriting.
(12:18):
And we have a set of laws and historical compliance practices that are actually reasonably well suited to the advent of AI and big data, in part because predictive models have been used in financial services for a very long time. So most of the mortgage originators who read National Mortgage News and who may be listening today probably have fair lending compliance programs that serve as a good foundation for what the regulators expect. It's just that those programs may need to be upgraded or modified if institutions are starting to rely on more advanced algorithmic systems, alternative data sources, et cetera, to make sure that their fair lending compliance is commensurate with the sophistication of those tool sets.
Maria Volkova (13:17):
I'm hoping you can walk us through how your platform can be used by mortgage lenders to increase fairness in the lending process, and maybe talk about the second look feature you have built into your platform that can increase the rate of minority borrowers actually qualifying for a mortgage.
Kareem Saleh (13:47):
So we make a suite of fairness solutions, and they fall broadly into two categories. The first is bias detection and the second is fairness optimization. Bias detection covers that first handful of questions I articulated earlier, and it will be familiar to National Mortgage News readers and our listeners today as traditional fair lending compliance. So: are my decisions fair? If not, why not? And could they be fairer? My guess is that most of the folks joining us today perform some inquiry into those questions. But what we've done is built a set of technologies that allow you to ask and answer those questions very quickly, with intuitive dashboards and reporting that you don't need to be a fair lending compliance person to understand. They also give you the capability to monitor your fairness to make sure it doesn't degrade in an environment like the current one, where you may be making adjustments to your credit policies, and to set alerts so that if your fairness does degrade, you get a notification telling you there's been some fairness degradation, allowing you to get ahead of it before it becomes a serious problem.
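As a rough illustration of that monitoring-and-alerting idea (not FairPlay's product; the names are hypothetical and the 80% threshold is an assumption borrowed from the common four-fifths rule of thumb), such a check might look like:

```python
# Illustrative monitoring-and-alert loop over per-group AIR values.
# Threshold and group labels are hypothetical stand-ins.

ALERT_THRESHOLD = 0.80  # flag any group whose AIR falls below 80%

def fairness_alerts(air_by_group: dict[str, float]) -> list[str]:
    """Return alert messages for groups whose AIR has degraded below threshold."""
    return [
        f"ALERT: AIR for {group} fell to {air:.1%}"
        for group, air in air_by_group.items()
        if air < ALERT_THRESHOLD
    ]

# Hypothetical month of monitoring output.
for message in fairness_alerts({"Group A": 0.78, "Group B": 0.83, "Group C": 0.76}):
    print(message)
```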
(15:12):
Fairness optimization, I think, is the stuff that makes us really special and unique in the market. And there we offer two tools. The first is a fairness optimizer, which allows lenders to tune their current or primary underwriting strategies to be fairer, or less disparate, without affecting risk, or at least in ways that are within their risk tolerance. So it asks: could you tune your existing decisions to be fairer? You can also get the benefit of that fairness optimization technology in what we call second look. Second look is a set of algorithms that sit behind each decision you make and double-check it. That technology is primarily used in underwriting, where it double-checks underwriting decisions by re-underwriting declined applicants using models and strategies that have been tuned to be more sensitive to the distributional properties of those populations.
(16:24):
And so what that means is that by using models that are tuned to be more sensitive to historically underserved groups and reflect their credit performance a little bit more finely, it allows you to find more good loans among people you already marketed to, loans that also yield an inclusion dividend. And so we like to say that second look is good for profits, good for people, and good for progress. It's not uncommon for folks using our second look tool to increase their approval rates by 10%, increase their take rates by 13%, that is, optimize pricing for folks who were approved but maybe didn't take the loans because they were too expensive, and increase their fairness to protected groups by 20%.
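A minimal sketch of the second look pattern Saleh describes, assuming two scoring models and a simple risk cutoff. All names, fields and numbers below are hypothetical, not FairPlay's algorithms.

```python
# Sketch of a second look pass: re-score declined applicants with a second
# model tuned to underserved populations. Everything here is a stand-in.

def second_look(applicants, primary_model, second_model, cutoff=0.05):
    """Approve if either model scores predicted default risk below the cutoff."""
    approved, declined = [], []
    for applicant in applicants:
        if primary_model(applicant) < cutoff:
            approved.append(applicant)       # approved on the first pass
        elif second_model(applicant) < cutoff:
            approved.append(applicant)       # rescued by the second look
        else:
            declined.append(applicant)
    return approved, declined

# Toy usage with stand-in scoring functions.
apps = [{"id": 1, "risk": 0.04}, {"id": 2, "risk": 0.06}]
primary = lambda a: a["risk"] + 0.02  # stricter primary underwriting model
second = lambda a: a["risk"]          # model tuned to the declined population
approved, declined = second_look(apps, primary, second)
print([a["id"] for a in approved])    # [1]: applicant 1 rescued by the second look
```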
Maria Volkova (17:22):
You mentioned an interesting thing: that lenders can use this both to be more fair and to make more money, which is something that's important for a business. I was hoping you could expand a little upon how lenders can be more inclusive and yet remain profitable in doing so.
Kareem Saleh (17:44):
Yeah, so fundamentally what our tool does is reweight the variables that lenders are currently making their decisions on, in ways that are designed to maximize their predictive power but minimize their disparity-driving effect. So for example, we are working with a mortgage lender today where something like 70% of their underwriting decisions were being driven by conventional credit scores, with the remaining 30% made up of variables that are commonly found on a credit report. And what we found was that by reducing the weight given to some of the conventional underwriting variables, tuning down their influence a little bit but not so much as to lose their predictive power, while tuning up the influence of variables that are similarly predictive but have less of a disparity-driving effect, in other words, by optimizing the relative weights on the variables that are inputs to a lender's decision, we could increase approval rates for protected classes within that lender's risk tolerance. And so fundamentally, that's what we do: use our knowledge of the credit characteristics of certain groups to set the weights on the variables in ways that make the decisions more sensitive to populations that have not historically been well represented in the data.
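To illustrate the reweighting idea in code, here is a conceptual sketch that searches for a lower credit-score weight that stays within a performance floor while reducing disparity. The 70/30 split comes from the example above, but the scoring blend, metric callables and thresholds are assumptions, not FairPlay's method.

```python
# Conceptual sketch of reweighting: dial down a disparity-driving input's
# weight, dial up similarly predictive but less disparate inputs, and keep
# the fairest weighting that stays within the risk tolerance.

def blended_score(applicant: dict, w_credit: float) -> float:
    """Blend a normalized credit score with other credit-report variables."""
    return (w_credit * applicant["credit_score_norm"]
            + (1.0 - w_credit) * applicant["other_vars_norm"])

def tune_weight(applicants, predictive_power, disparity, floor=0.95):
    """Search lower credit-score weights; keep the fairest within tolerance.

    predictive_power(applicants, w) and disparity(applicants, w) are
    placeholder callables, e.g. AUC of blended_score and 1 - AIR.
    """
    baseline = predictive_power(applicants, 0.70)  # ~70% credit-score-driven today
    best_w, best_disparity = 0.70, disparity(applicants, 0.70)
    for w in (0.65, 0.60, 0.55, 0.50):
        if (predictive_power(applicants, w) >= floor * baseline
                and disparity(applicants, w) < best_disparity):
            best_w, best_disparity = w, disparity(applicants, w)
    return best_w
```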
Maria Volkova (19:43):
So there have been criticisms from industry stakeholders that relying on alternative underwriting approaches will ultimately put some borrowers into debt that they can't pay off. What are your thoughts on this, and how is your platform taking this concern into account?
Kareem Saleh (20:03):
Yeah, it's a very valid concern. We certainly don't want to saddle people with debts, and ultimately foreclosures, that set them back on their financial journey. And so we feel very strongly that it's important to conduct a very rigorous ability-to-pay assessment for any applicant. The question is just how you conduct that ability-to-pay assessment in a way that is sensitive to populations for whom you have less data. And what we find is that by really doing a good job of understanding the credit performance of these subpopulations, we can make the models more fair, but more fair to the consumer in the sense of conducting an ability-to-pay assessment that assures they will be successful in their mortgage journey, and at the same time yields loans that perform within the lender's risk tolerance. This can be done by decreasing reliance on things like conventional credit scores and LTVs and increasing reliance on variables like property type, occupancy type, consistency of employment, et cetera. And what we find is that 25 to 33% of the highest-scoring protected-group applicants who get declined would've performed as well as the riskiest folks most lenders approve. So I think the concern about not saddling people with debt that they can ill afford is a real one, but I also think that 25 to 33% of the declined population probably would have performed as well as some of the riskiest folks who get loans today.
Maria Volkova (22:11):
Yeah, that's a very stark percentage to think about. FairPlay is at an interesting intersection between the current administration's push to amplify inclusion and home ownership opportunities, and its push to create greater transparency and guardrails around machine learning and AI systems. How does this impact your work?
Kareem Saleh (22:42):
Yeah, I mean, we are doing this work at a time when there is obviously a big global conversation going on about the governance of algorithmic systems. And the good news for those of us who work in financial services is that there's actually a regulatory regime, a governance regime and a set of best practices that are established and that work pretty well for the AI era, albeit potentially with some need for upgrading and modification in a few places. And so those of us who work in financial services are actually well positioned to advise other domains that are using AI but may not have similarly mature governance regimes, to help them harness these algorithms and make sure they are fit for use and safe, et cetera. And in that respect, I think that because predictive models have been used for so long in financial services, most folks in the mortgage business, believe it or not, are actually relatively well positioned to deal with the introduction of AI and big data, because they probably already have some form of fair lending compliance or some form of model risk management. And it's just now a matter of updating those policies, procedures and protocols to account for these new advanced systems.
Maria Volkova (24:32):
And as a stakeholder in the artificial intelligence space, there's obviously a possibility of greater scrutiny and regulation around automated systems. What, in your opinion, is the best way to regulate these technologies without hurting innovation? And what should regulators keep in mind when building out potential regulatory frameworks?
Kareem Saleh (25:07):
Yeah, so fundamentally there's a handful of questions that you want to understand about these AI systems, and you don't necessarily have to be a technical person to understand them, or even to ask them. So ultimately what we're interested to know is: what is the soundness of this algorithm? What are you using it for? Is it appropriate for that use? What data did you use to build this algorithm, and why is that data appropriate? Is it even correct? Is it likely to be messy, missing or wrong for certain groups? What steps did you take to validate this algorithm? Did you stress test it? Did you subject it to a bunch of different conditions and see how it performed? What kind of biases does it have? What kind of assumptions does it make? And then of course, there's a whole set of questions about how you will make sure that this algorithm behaves as expected in the wild.
(26:31):
What will you do if it starts to perform in the real world in ways that cause harm, either to your institution or to consumers? Will you have an incident plan? How quickly will you be able to hit the kill switch? And then finally, who's accountable? What throat is there to be choked to make sure that all of those questions have been asked and adequately answered? Who are the people responsible at a senior level inside the organization for making sure that good governance and best practices are being applied to the use of these very powerful systems? You'll notice that there was not a single technical question there. These are all who, what and why questions that any non-technical person can ask, and then try to make judgments about whether or not the answers they're getting back are reasonable. And that's a key element of AI governance, this concept of effective challenge. You want to have somebody from the outside asking a bunch of pointed questions and seeing if the answers are reasonable.
Maria Volkova (27:53):
Apart from alternative underwriting, what are other tools or ways to address the fair lending problem and open up more opportunities for minority borrowers to become homeowners? And also going along with that, what barriers exist?
Kareem Saleh (28:14):
Yeah, there are so many potential strategies to increase positive outcomes for historically disadvantaged groups in the mortgage market. Apart from underwriting, one of the very big topics in Washington today, and a subject of discussion amongst the federal financial regulators, is Community Reinvestment Act reform: how are we going to encourage investments into historically underserved communities? We're starting to see lenders do things like create down payment assistance programs for folks who might struggle. We're starting to see increasingly good credit-building initiatives. There are things like community land trusts and rent-to-own programs and tax credits for the building of affordable housing. So I think there is a wide range of potential policy and financial measures that can be taken apart from underwriting to increase positive outcomes for historically disadvantaged groups. But of course, there are barriers to implementing all of those measures. They range from what the funding source for those measures is to how likely they are to be effective. There is skepticism about things like financial literacy and other programs that have been tried in the past, which maybe have not lived up to their promise. And so while there are a variety of options available to us, I would think some combination of a lack of finance, a lack of political will, and some skepticism about efficacy is keeping us from experimenting with a lot more potential solutions to this problem.
Maria Volkova (30:24):
And to maybe end our conversation on a brighter note, what evolving technologies are you keeping an eye on that you see have some sort of promise?
Kareem Saleh (30:37):
Yeah, I mean, I think that AI really is one of the greatest technology revolutions of our time, along with things like gene editing and advancements in robotics and space travel. I think we see the potential for AI to give us all coaches who are infinitely knowledgeable, infinitely compassionate and infinitely patient, guiding all of us to be our best selves all the time. So imagine tools for folks who do valuation of homes, tools for folks who do marketing of mortgages, tools for folks who have to do loss mitigation or restructurings, all of that. We have the ability to apply just so much more intelligence to the decisions we make across the board. And the question is: are we going to apply that intelligence in a way that makes the world more rational and humane, or are we going to do it in a manner that's irresponsible and leads to more harm? I think the jury's still out on that. But the good news for those of us who work in financial services is that to the extent we want to use these tools to make more money and do more good in the world, we have the ability to do so, as well as the frameworks to do so.
Maria Volkova (32:16):
Awesome. And on that note, I would like to thank our audience for joining our discussion, and thank you Kareem, for taking the time and chatting with us. I appreciate it.
Kareem Saleh (32:29):
Thank you so much for having me, Maria. It was super fun.