TRACK SPONSORED BY: Black Knight
AI is highly hyped, but operationalizing it safely in mortgage must go beyond the hype. What are the most relevant and urgent marketing, lead gen, and conversion AI use cases for lenders to operationalize in 2024? What moves the needle most for lender growth in these areas? What are the servicing use cases that matter most for customer retention? What are the compliance considerations for lenders and servicers?
Transcription:
Sandra Madigan (00:06):
Good afternoon and welcome to our second session of the Real World AI Use Cases Track. For those of you just joining us, my name is Sandra Madigan. I'm Black Knight's Chief Digital Officer. Black Knight is now part of the ICE family. We're a proud sponsor of this session, and it is my pleasure to introduce our panelists. First we have Bonnie Sinnock, Capital Markets Editor for Digital Mortgage News. We have Krish Dhokia, Senior Vice President, Marketing and Business Development at Kind Lending. And last, but definitely not least, Matt Jones, Counsel at Mitchell Sandler, LLP. Welcome, and take it away, guys.
Bonnie Sinnock (00:55):
Thank you Sandra. Welcome to our panel on operationalizing AI for compliant mortgage marketing. The overall theme we're going to talk about is how artificial intelligence can potentially identify more loan prospects at a lower cost and what the near term opportunities and risks there are in using it for this purpose. From consumer direct to third party origination channels, we plan to discuss how AI gets used and what compliance sensitivities it has in the context of things like privacy rules and fair lending regulation. We'll also leave a little time at the end for audience Q and A. Let's start with Krish. Krish, you've looked at how this technology has been or might be used, so I wondered if you could explain a little bit about your background and tell us what your experience with it has been like.
Krish Dhokia (01:44):
Well, I'm Krish Dhokia with Kind Lending, and from a marketing perspective, we've really not scratched the surface. There are so many things, so much vocabulary, so much we just don't understand, and we want everything shiny right away. And then you have a lot of salespeople asking for AI, AI, AI in the loan process itself. We understand that there are a lot of capabilities, but they're coupled with limitations today. Today we have very basic, foundational platforms and systems, and without those foundations in place there are no AI possibilities. From the marketing world, specifically the creative side, there is a whole lot of marketing AI available, and I'd be lying if I said we didn't try to use a lot of it. So there's chat, and there are creative platforms as far as, sorry, non-copyrighted graphic design elements. And we do employ as much as we possibly can without overstepping boundaries. But again, your question is specifically about the mortgage loan process, and there is no new AI being added there today. We are doing a whole lot of research on it.
Bonnie Sinnock (03:20):
Yeah, it sounded like it's very experimental at this point.
Krish Dhokia (03:23):
Sure, sure. In fact, I saw a bunch of vendors today that are utilizing AI, whether it's for data and triggers and what have you, but a lot of that stuff is also dependent on the technologies you have to support it and the understanding that legal and compliance have around it.
Bonnie Sinnock (03:47):
Alright, next, Matt, I wondered if you could tell us a little bit about your background and give us kind of an overview of the compliance issues you've seen in AI-driven mortgage marketing strategies.
Matt Jones (03:57):
Sure. Thanks Bonnie. Matt Jones. I work for the law firm Mitchell Sandler as an attorney and have been representing mortgage lenders and other companies in the mortgage ecosystem for the past 15 years or so, and I'm excited to be here to talk about AI and compliance considerations today. So from a big picture here, as we all know, right now we don't have laws and regulations that have been designed specifically for AI. So we're left with the current regulatory and legal framework that applies to everything we do on a daily basis, and we have to apply that to AI. As Bonnie mentioned, from a big picture of where we are today, from a regulatory environment perspective, fair lending is a very, very important top priority for the CFPB and other federal regulators. And so when you think about marketing use cases for AI, you've got to think about bias.
(05:14)
Obviously that's a word that keeps coming up over and over today, but also more explicit forms of fair lending risk as you're interacting with consumers through chatbots or however else. You've got to make sure that you're not discouraging consumers in some way from applying for mortgages, for example. And then there's also the redlining concern, where if you're using algorithms to identify new customers, you have to make sure that you're really monitoring your data and testing it to ensure that your intended use of the AI is not generating risk that would otherwise be hidden from you.
Bonnie Sinnock (06:06):
And I want to get back to chatbots a little later, but I wanted to ask first about fair lending. I hear a lot of mixed things as far as AI use in marketing, whether it's a fair lending risk or whether you could actually produce more equitable practices if you had the right human intervention and guardrails. What's your view on that?
Matt Jones (06:26):
Yeah, so I think one of the great things about, and I'll go back to chatbots again, one of the great things about them is all these language capabilities. If you have been paying attention to a lot of the recent enforcement activity from the federal regulators, you'll notice that a lot of times when mortgage companies get in trouble, it's because they don't have LOs or marketing directed toward, for example, Spanish-speaking consumers. So this is one area where chatbots can really improve that and mitigate that risk for lenders. On the flip side, limited English proficiency programs have been encouraged by regulators recently, while also cautioning that there are UDAP risks if you don't do it in the right way. But with the way the SCIF, the Supplemental Consumer Information Form, was updated earlier this year to ask for the borrower's language preference, that feels like something where you could get good use out of a chatbot if you're able to talk to consumers in their preferred language, for example. I will say, along those lines, there are certain state laws that will tell you that if you start communicating with the consumer in a certain language, you also have to issue certain disclosures in that language. And so you have to be aware of the state law framework for provision of foreign language services when you're going down that path as well.
Bonnie Sinnock (08:13):
I had another question about chatbots. You have some of these personified chatbots fueled by AI, and they take over a lot of the origination process. How can you use chatbots in combination with LOs who handle, say, rate disclosures and other compliance-sensitive communications, so that you're ensuring no line is crossed into licensable activity?
Matt Jones (08:37):
So I think that comes up to the extent you're using a third-party chatbot to communicate with your customers. I do think something that could come down the line is potentially requiring companies that have these chatbots to acquire mortgage licenses for taking application data or negotiating rates and terms with customers. To the extent that you have chatbots that are discussing terms with customers, you need to be careful about that. Obviously you don't want to provide inaccurate information, which could be seen as a UDAP risk. You have to think about trigger terms that might require additional disclosures if you tell a customer something, for example. We've seen some clients that have implemented filter words or filter terms into their chatbots, where if a consumer says a certain word to the bot, the conversation gets routed to a human to interact with the consumer at that point. So that's one way you can control for that risk, I think.
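As a purely illustrative aside, a minimal sketch of the filter-term pattern Matt describes might look like the following. The trigger list, the escalate_to_loan_officer() handoff, and the canned replies are hypothetical placeholders, not any particular vendor's implementation.

```python
# Illustrative sketch only: route a chat message to a human when it contains
# compliance-sensitive "trigger terms" (e.g., rate or APR talk). The term list,
# the handoff hook, and the canned replies are hypothetical placeholders.
import re

TRIGGER_TERMS = [
    r"\brate\b", r"\bapr\b", r"\bpoints?\b", r"\bdown payment\b",
    r"\bmonthly payment\b", r"\bpre-?approv",
]
TRIGGER_PATTERN = re.compile("|".join(TRIGGER_TERMS), re.IGNORECASE)

def handle_message(message: str) -> str:
    """Return a bot reply, or hand off when a trigger term appears."""
    if TRIGGER_PATTERN.search(message):
        escalate_to_loan_officer(message)   # hypothetical handoff to a licensed LO
        return "Let me connect you with a licensed loan officer for that."
    return generate_bot_reply(message)      # hypothetical general-purpose reply

def escalate_to_loan_officer(message: str) -> None:
    print(f"[handoff] routing to human: {message!r}")

def generate_bot_reply(message: str) -> str:
    return "Happy to help with general questions about the process."

print(handle_message("What rate can I get on a 30-year loan?"))
```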
Bonnie Sinnock (10:03):
Alright. I wanted to move on to servicing. I think when we were preparing for this, you both had some comments about that as far as if you use AI marketing for retention and you hold servicing, maybe there's a strategy there, but it sounded like there's a complication where if you're selling MSRs with a non-solicitation agreement, there could be a concern. I wondered if you could both address that maybe starting with Krish.
Krish Dhokia (10:29):
I mean, I don't think this is AI-specific. This is machine learning and whatever triggers are created for the originator to then contact a past client that you may not be able to contact for six months, or whatever those terms are. There's always that issue. Now you add automation to it. You can definitely coach and train, or turn things off, but if it's turned on, the rules and conditions have to be there. As you said, any of these technologies work so long as those stopgaps are created to satisfy your organization's needs. Now from an MSR perspective, some organizations have so many different servicing contracts that it's almost a nightmare to manage. I guess AI and technology could solve for it, but I don't know that you just let the robots take it over. It's a trust thing as well. The organization has to be able to trust it, and compliance and legal have to have built those rules and conditions to then just let it go. And smaller organizations don't always have the ability to build that complex of a tool.
Bonnie Sinnock (11:56):
Great. Matt, do you have anything to add there?
Matt Jones (11:57):
Not really. I will say that the non-solicitation provisions in your mortgage loan purchase agreements and servicing agreements differ from investor to investor, and to Krish's point, it can be hard to develop a one-size-fits-all approach when you're marketing to your past customer base. We've seen some lenders that were able to successfully modify those non-solicitation provisions to shorten the period within which the lender is prohibited from soliciting the borrower for a loan, typically refis. Oftentimes we see the non-solicitation provisions will allow for purchase marketing, for example. So yeah, it's definitely an issue, and you do have to pay attention to where you sold your loan when you're marketing to that past customer. If you can segment groups of past customers out based on that, then you should be able to weave your way around it.
Bonnie Sinnock (13:08):
We've talked a lot about the challenges, but I wanted to get back to some of the benefits, and efficiency comes up. In what department, if any, would you say this creates opportunities for staffing efficiencies?
Krish Dhokia (13:22):
Open question.
Bonnie Sinnock (13:23):
Yeah, Open question.
Krish Dhokia (13:25):
Everyone, every department can benefit from it. Obviously I'm going to be biased and say marketing. I was talking to someone earlier and she said AI is like the sixth person on their marketing team. More recently I've heard of product and program teams building AI around their guidelines to enable smarter search internally. And technology, your InfoSec, you can build AI in to layer up and do a lot of internal things. But for the external, consumer-facing component, your salespeople specifically, I think it's coming to a point where everyone knows it exists. You can't ignore it, and you also have to control it. So if the organization isn't doing the extensive research and laying that foundation, there's a lot of time to be saved, and they know it. The originators know they could do a whole lot more if AI were introduced. But you want to control that and be the one introducing it, and not let them go off the reservation and go rogue.
Bonnie Sinnock (14:48):
Yeah. I wondered what the upfront investment is like for you to get those benefits. We know the industry's resources are limited right now. What's the ROI and how quickly can it be achieved? Do either of you have an answer to that?
Krish Dhokia (15:03):
Sorry. No, I don't.
Matt Jones (15:06):
I mean, to your point, yeah, it's very cost sensitive right now in the industry. I think that to the extent you are investing in the automation of certain operational jobs, whether it's underwriting, processing, or closing, extracting data out of documents, things like that, you're probably doing that as a proactive measure to prepare for the next rate drop or refi boom that comes, and to reduce those headaches we have of staffing up intensively and then, unfortunately, when rates go back up, having to cut back. So yeah, I think it would be a beautiful thing for the industry if we can get to more stable staffing and be a little bit more proactive about the way we're doing things, as opposed to reacting to rates going up and down.
Krish Dhokia (16:15):
I would say that AI has always existed, and some of the bigger companies have been solving for this for years. And while the resources are limited, the vendors are limited, and the options are limited, today in the mortgage industry there's more than we had a year or two ago. There are a lot of new players on the market, some of which are here in certain capacities. It's still, in a sense, cost prohibitive for a lot of organizations to jump on board, but I feel like that will taper off when things normalize. Now we have options, it's only going to expand, and the need is going to continue. As cost becomes a fairer playing field, I think more organizations are going to jump on. They don't have to build it proprietary anymore.
Bonnie Sinnock (17:14):
Another thing that came up when we were planning for this panel was the question of different applications in different loan channels. I remember we talked about B2B, for example, which would have the benefit of not having the privacy concerns related to consumer data. Is there a use case there, and where do you see it being used between businesses today?
Krish Dhokia (17:38):
I'm assuming we're talking about the TPO broker channel specifically. I think there's a lot of opportunity there to pass on notifications and what have you to brokers. It depends, again, on the complexity of the servicing agreement, but for the most part, brokers are going to broker, and part of their model is that they want to be able to market to their past clients and have those triggers. The only problem is they don't have access to the data; typically the lender has the access. So there is a lot of opportunity to pass those on, as long as it's not a RESPA violation and those types of things. So again, we're looking at it from all different angles today to see how you can solve for it without crossing that threshold of being out of compliance.
Bonnie Sinnock (18:40):
Matt, do you have anything to add there?
Matt Jones (18:43):
You mentioned privacy, and that's obviously one of the big risks. We keep going back to chatbots, but one of the concerns there, since we're on the topic, is that you're exposed to data breaches if, for example, you have chat logs that contain customer NPI. One of the things to really mitigate your risk if you have these chatbots is to develop policies and procedures for your employees. Make sure that you don't have anyone going out to ChatGPT and typing in customer personal information, because after that you don't really know what's happening with the data once it goes into that system. So yeah, there are certainly privacy concerns here with AI, and information security concerns. You've got to think about all those things. And I'm not sitting here telling you not to implement the technology, but the goal here is to tell you how to do it in a controlled manner, mitigate that risk, and hopefully be able to be successful.
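One common technical backstop for that kind of policy, sketched here purely for illustration, is to scrub obvious NPI patterns from text before it can reach any external LLM. The patterns and the commented-out send_to_llm() call are assumptions, not a complete solution; a real program would pair this with broader detection and vendor due diligence.

```python
# Illustrative sketch only: redact obvious NPI patterns before text is sent to
# an external LLM service. Patterns are simplistic examples, not exhaustive.
import re

NPI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT": re.compile(r"\b\d{8,17}\b"),
}

def redact_npi(text: str) -> str:
    """Replace matched NPI patterns with labeled placeholders."""
    for label, pattern in NPI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Write a follow-up email to John, SSN 123-45-6789, phone 555-123-4567."
safe_prompt = redact_npi(prompt)
print(safe_prompt)            # placeholders instead of raw values
# send_to_llm(safe_prompt)    # hypothetical call to an external LLM API
```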
(20:04)
Does it make sense to maybe start with the broker channel, because you have one less layer of risk since you don't have that consumer privacy issue? Yeah, I mean, I think so. Anytime you're implementing a new technology like this, and I was really involved with the implementation of e-closing and eNotes and remote online notarization for a company, anytime you're launching a new technology, you have to do it in a controlled manner where you're segmenting one portion of your business off to pilot it, and then you learn from that and hopefully get confident enough that you can start expanding the use of it within your company.
Bonnie Sinnock (20:50):
Alright, and my understanding is there is some of that out in the market being experimented with, is that correct?
Matt Jones (20:56):
Some of what? I'm sorry?
Bonnie Sinnock (20:57):
Some use of AI marketing in the B2B channel.
Krish Dhokia (21:03):
I mean, at Kind? Not that I know of.
Bonnie Sinnock (21:07):
Okay.
Krish Dhokia (21:09):
But that's not to say there isn't testing being done internally, just nothing external.
Bonnie Sinnock (21:19):
Okay. Yeah.
Krish Dhokia (21:20):
Gotcha.
Bonnie Sinnock (21:21):
How are we doing on time? Does anyone know? We're good.
(21:25)
So the other thing I wanted to ask about was, I think, Matt, you had some suggestions for guardrails if you are going to work with AI. You've talked about some of them, but maybe you could go into more detail on some of the things you want to have in place if you're going to experiment with technology in this area.
Matt Jones (21:48):
Yeah, so like I said before, we don't have a specific set of laws and regulations designed for AI. We have some regulatory guidance. The OCC put out some pretty helpful guidance last year that applies, I think, across any financial company that's using AI, and the CFPB has been issuing a lot of highlights and spotlights and circulars on this lately. So we can pull from all of that what some of the best practices are. One thing, and we talked about this if you were in the session before this one, is that explainability is big. From the regulator's perspective, you can't have black-box models and not be able to explain how decisions are made by the model and how the model was developed. You have to be able to explain that to a regulator. And from my perspective, to do that, you should document all of it to the extent you can.
(22:58)
I understand that sometimes, if you're using third parties, you may not have access to all of the information behind their models, but documentation is key for when a regulator does come in and asks you to explain how things work and why there's no bias. So that's one way I think you can mitigate your risk with pretty much any AI solution you're implementing. Policies and procedures, I mentioned this before, but we've seen some clients whose marketing teams or LOs are going out to ChatGPT, for example, to develop marketing content and then lifting that to push out blog posts and other material, so they're actually using it for content creation. To the extent that's happening within your organization, you want to have policies and procedures around it and make sure it runs through your typical compliance approval process. Another thing, and we talked about it a little bit in the last session, is that understanding your data and managing your data is huge. When you're training a model, for example, you don't want a really small data set that could exclude protected-class borrowers, or somehow allow the model to correlate certain things with protected classes. I think that's a risk, so you do want to broaden your data set. And obviously, if you're trying to train a model to go out and identify new customer acquisitions, you're not going to want to include protected-class characteristics in that data, or anything that could potentially be correlated with them.
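To illustrate that last point, here is a minimal, hypothetical sketch of a pre-training check: drop protected-class fields outright and flag numeric features that correlate strongly with them as possible proxies. The column names and the 0.5 cutoff are assumptions for the example, not a regulatory standard, and categorical protected fields would need different association tests.

```python
# Illustrative sketch only: remove protected-class fields from a prospect-
# scoring data set and flag possible numeric proxy features for review.
import pandas as pd

PROTECTED_COLS = ["race", "ethnicity", "sex", "age"]  # assumed column names
PROXY_THRESHOLD = 0.5                                  # assumed review cutoff

def prepare_training_data(df: pd.DataFrame):
    """Drop protected fields and flag numeric features that may proxy for them."""
    numeric = df.select_dtypes("number")
    proxies = set()
    for col in PROTECTED_COLS:
        if col in numeric.columns:                     # categorical fields need other tests
            corr = numeric.corr()[col].drop(labels=[col]).abs()
            proxies.update(corr[corr > PROXY_THRESHOLD].index)
    cleaned = df.drop(columns=[c for c in PROTECTED_COLS if c in df.columns])
    return cleaned, sorted(p for p in proxies if p not in PROTECTED_COLS)

# Suspected proxies go to fair lending review rather than straight into training.
```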
Bonnie Sinnock (25:02):
Do you have any particular enforcement actions or cases or litigation you're aware of that you could point to as specific examples of some of the potential pitfalls in this area, or is it kind of too early to see much of that?
Matt Jones (25:17):
I think it's a little bit early, but I also think you can apply, for example, the many, many redlining enforcement actions that have been publicized over the last year especially, as well as some of the confidential investigations and examinations that happen. First of all, the whole redlining analysis is going to start with a lender looking at its data in a particular MSA and trying to determine how it stacks up against peer lenders with 50 to 200% of its application volume in that MSA. So the starting point is understanding what the peer data looks like. Do we have a statistically significant disparity in our application penetration in the majority-minority census tracts in this area? If you don't, that's great, but if you do, and you're implementing AI that exacerbates that issue, you can easily see how that will make its way into some of these enforcement actions. Did Lender X implement an AI or machine learning model to identify new customers but fail to test its data to determine where the advertisements were being pushed out or who was receiving them, and as a result discourage prospective customers from the majority-minority census tracts in that area from applying for a loan? That's the kind of thing you could see potentially making its way into a complaint.
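A minimal sketch of the peer screen Matt outlines might look like the following, assuming a table of applications with lender, MSA, and majority-minority-tract flags. The column names are hypothetical, and a real analysis would use HMDA data and a proper statistical significance test rather than a simple ratio.

```python
# Illustrative sketch only: compare a lender's share of applications from
# majority-minority census tracts in an MSA against peers sized at 50-200%
# of its application volume. Column names are assumed for the example.
import pandas as pd

def mm_share(apps: pd.DataFrame) -> float:
    """Share of applications coming from majority-minority tracts."""
    return apps["is_majority_minority_tract"].mean()

def redlining_screen(apps: pd.DataFrame, lender: str, msa: str) -> dict:
    in_msa = apps[apps["msa"] == msa]
    volumes = in_msa.groupby("lender").size()
    my_volume = volumes[lender]
    # Peers: lenders with 50% to 200% of this lender's volume in the MSA.
    peers = volumes[(volumes >= 0.5 * my_volume) & (volumes <= 2.0 * my_volume)].index
    peers = peers.drop(lender)
    my_share = mm_share(in_msa[in_msa["lender"] == lender])
    peer_share = mm_share(in_msa[in_msa["lender"].isin(peers)])
    return {"lender_share": my_share, "peer_share": peer_share,
            "ratio": my_share / peer_share if peer_share else float("nan")}
```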
Bonnie Sinnock (27:01):
Okay. I think we're at the point where we're getting close to the end and we want to see are there any questions out in the audience for us?
Audience Member 1 (27:14):
Thank you guys. This was helpful. So when I think of AI in the marketing space, marketing, not mortgage specifically, but marketing as a discipline, has been really far out ahead of the curve on AI as a very early adopter, thinking back a decade, with use cases in targeting and optimization in particular. I think what's new, with LLMs, large language models, and GPT, is all of the use cases around content and design. With those, the problems are really in quality, accuracy, and even in imagination. So Krish, for you, what are you doing to upskill your team? How do you talk to them about how to interact with this, what new processes need to be put into place, and how they take the output and make something useful out of it?
Krish Dhokia (28:06):
Just like we have to educate the auditors, you have to start internally. So it's educating at the team level, setting what is and isn't allowed. If we're going to try to, for lack of a better word, control what's happening out in the sales teams and stop them from going off the reservation, you have to have those kinds of parameters built internally. But what's interesting is graphic design, right? I pretend to be a graphic designer, but I'm not, and there's Canva and all these tools out there that kind of help you through it. There was an article today about a young man who created an art piece, 674 prompts to build the most beautiful thing ever, and he tried to go get a copyright and didn't get it. The hammer came down. And so we have always worried about building these media DAMs of photos, Shutterstock and all this stuff that we house internally.
(29:13)
Cost has gone down exponentially and the risk has gone down a lot as well, on the enforcement side at least, because we're trying to create unique things that mean something to people, not just that same house image with the American flag that everyone uses for loans. Every company has it. Also the cash, the keys, yes, exactly, and the keys and the little houses, the wooden boxes. It has opened up a whole new avenue of things from a bandwidth standpoint that has met some of our needs in staffing, and speed. It's all been about speed. So when you mentioned blog posts, we can put out content faster. But all of this means that whoever is responsible for using it still follows those same protocols as far as compliance reviews and what have you. So great question.
Audience Member 2 (30:12):
Hey Krish, just a quick question about how AI can be leveraged with social media marketing. I imagine you guys have probably dabbled in that a little bit. For a company that's a little scared and hasn't embraced it at all, maybe there are some lessons you learned along the way. You might have some ambitious loan officers; we know in our industry sometimes people can go rogue in the field. So any words of wisdom on that? Maybe a success story on something you've accomplished so far using AI for any kind of social media marketing?
Krish Dhokia (30:40):
I mean, there's already been machine learning in some of our scheduling tools and things like that. I wouldn't really call all of it, or any of it, AI. As far as the content creation, yes. But the campaigns and the deliverables and the tactics you're creating are still, at the root, human-driven right now. Again, the scheduling and the cadence and things like that, maybe we're not using AI for, but I'm sure there are tools that do. Otherwise you lose your voice, unless you're training the model to match your brand and your voice and your tone a hundred percent. And we live in a time where our social media content is a day-to-day type of thing, because things are changing so frequently. Do you need AI to post holiday posts? No, those are going to happen. But if you're talking market updates and things like that, we haven't scratched that and we don't plan to, because we want to be the narrator of it. We'll see if there's a tool to meet that need.
Bonnie Sinnock (31:59):
We have time for probably one more question.
Matt Jones (32:06):
Is there another question? I can't tell. If there's not, I was going to correct or modify something I said earlier.
Bonnie Sinnock (32:14):
Go ahead.
Matt Jones (32:15):
Perfect, thank you. So Bonnie had asked about enforcement actions, and the social media comment triggered my memory. There was the Facebook DOJ settlement last year, where Facebook got in trouble for its marketing algorithm and the way it was determining which consumers were receiving which ads through its ad delivery system. So that settlement does have some applicability to what we're talking about today, and you can see where things can get off the rails a little bit if you're not careful about how you're implementing your marketing there. And then one other thing I have to mention really quick, because I'm focused on keeping all of us out of trouble, but I have to keep myself out of trouble too. I am supposed to say at the beginning of all of these that everything I say today should not be construed as legal advice; it's my own personal opinion. So rather than saying that at the beginning and having everybody tune me out for the rest of the time, I wanted to save it for the end.
Bonnie Sinnock (33:21):
Okay, great. It looks like we have to start wrapping this up. So thanks to everyone who made this session happen today. I hope some of the topics we discussed pay off for all of you in your businesses. Please stay with us for our next session on this track, which is on deepfakes and fraud, and look for more from Matt and other experts on artificial intelligence later at six during our AI Town Hall and cocktail hour. Thank you.
Track 1 – Operationalizing AI For Compliant Mortgage Marketing
October 10, 2023 11:00 AM
33:56