Ready or not, artificial intelligence seems poised to play a larger role in the home lending industry, but companies are trying to create their AI strategies in what are largely uncharted regulatory waters.
The potential risk within generative AI corresponds directly to the quality and quantity of the data behind it, experts agree. A key challenge for developers lies in ensuring the available data is rich enough for the AI to provide accurate responses.
Inherent bias in some data models represents "a real danger" and could result in discriminatory practices, Smith said.
"The black box of those algorithms can be very difficult to understand. There's very little transparency of what is this being built on as it's learning through," she said.
"Regulators are very much wanting to force lenders to understand what's going on inside those systems. And that's darn near impossible."
One home finance company striking up a conversation on AI chatbots is Providence, Rhode Island-based Beeline. In July, the company, whose products range from traditional refinances to investment-property loans, introduced an AI-powered chatbot named Bob.
Rather than drawing on the same sources of data ChatGPT taps into, Bob attempts to form answers based on what is within its own "brain," which is continually tested to ensure accuracy, said Jay Stockwell, Beeline's co-founder and chief marketing officer, who helped develop the proprietary platform. Much of the original data and answers fed to Bob's brain came from analysis of over 70,000 previous messages that came through an older Beeline chatbot.
"We took that big body of messages and then we did a cluster analysis based on what are the clusters of questions that people ask, because a lot of people ask more or less the same question but just in different ways," he said.
"We just went at it for months to provide really rich, clear answers to those exact questions." But Bob is constantly learning, Stockwell added.
"We review it every day. We go through and then we improve the brain, and run the same questions again and continually optimize this."
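Beeline has not published its clustering pipeline, but the cluster analysis Stockwell describes — grouping differently worded versions of the same question — can be sketched with a simple token-overlap grouping. Everything below (function names, the similarity threshold, the sample questions) is illustrative, not Beeline's actual code:

```python
# Illustrative sketch: group near-duplicate questions by token overlap
# (Jaccard similarity). A stand-in for the cluster analysis described,
# not Beeline's actual pipeline.

def tokens(text):
    """Lowercase word set, ignoring surrounding punctuation."""
    return {w.strip("?.,!").lower() for w in text.split()} - {""}

def jaccard(a, b):
    """Overlap between two token sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b)

def cluster_questions(questions, threshold=0.5):
    """Greedily assign each question to the first cluster whose
    representative is similar enough, else start a new cluster."""
    clusters = []  # list of (representative_tokens, member_questions)
    for q in questions:
        t = tokens(q)
        for rep, members in clusters:
            if jaccard(t, rep) >= threshold:
                members.append(q)
                break
        else:
            clusters.append((t, [q]))
    return [members for _, members in clusters]

sample = [
    "What credit score do I need?",
    "What credit score do I need to qualify?",
    "How long does closing take?",
]
groups = cluster_questions(sample)
# The two credit-score phrasings land in one cluster; closing in another.
```

A real system at Beeline's scale (70,000-plus messages) would more likely cluster on learned text embeddings, but the principle — collapsing rephrasings of the same question so each cluster gets one well-crafted answer — is the same.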
Among other safeguards Beeline has introduced to ensure the safety and accuracy of its AI is the deployment of multiple AI models, including a constitutional version, to regularly check Bob's responses. Beeline also will not allow Bob to collect personal client data that could result in discriminatory bias, and it can "turn it dumb" when necessary, according to Stockwell.
"Bob doesn't know the name, doesn't know their email, doesn't know the location, doesn't know any of that. And then if we ever kicked them into a quote, that's removed out of the AI system."
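Keeping identity fields out of the model's view, as Stockwell describes, can be approximated with a redaction pass over each message before it reaches the AI. The patterns and field names below are hypothetical; a production system would need far more robust PII detection:

```python
import re

# Illustrative sketch of pre-processing a chat message so the AI never
# sees personal identifiers; hypothetical patterns, not Beeline's code.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(message, known_fields=()):
    """Remove emails, phone numbers, and any known profile values
    (name, location, etc.) before the message is sent to the model."""
    cleaned = EMAIL.sub("[email]", message)
    cleaned = PHONE.sub("[phone]", cleaned)
    for value in known_fields:  # e.g. the client's name or city
        cleaned = cleaned.replace(value, "[redacted]")
    return cleaned

msg = "Hi, I'm Pat Doe (pat@example.com), calling from Providence."
safe = redact(msg, known_fields=["Pat Doe", "Providence"])
```

The design choice matters for the bias concern raised above: if the model never receives a name or location, it cannot condition its answers on them, intentionally or not.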
Removing potential bias triggers is key because just as a lender would be held accountable when human personnel errs, the same will hold true if a chatbot does, Smith said.
"The regulators have been very clear that a violation is a violation, whether it's a chatbot or a live person, and that's where it becomes very difficult," she said.
Other guardrails placed around Bob are no different from what certain human employees would require. "What would that person be held to, thinking of it as an unlicensed non-loan originator position?" said Jess Kennedy, another Beeline co-founder as well as its chief operating officer.
"All the things that you would train a human on, we said — hey, let's make sure Bob doesn't do any of these things either."
For instance, Bob, like any chatbot, is prohibited from doling out anything resembling financial or legal advice, a rule developers say will likely always be in place.
"There's going to be some clear lines in the sand because it's not a licensed entity. And in lending, you need to be licensed to do certain activities and that clearly isn't," he said.
But even with consumer safeguards in place, AI-generated responses still carry the risk of unintentionally misguiding clients through the omission of some options.
"The problem is sometimes we have datasets — you're not going to fit into the majority of them if you are oftentimes a low-to-moderate income minority borrower. And so you've got to be very careful," Smith said.
While lenders have made dedicated efforts over the past two years to open up homeownership opportunities to underserved communities, the AI black box may not hold the information about the various affordability programs a human agent knows, Smith added.
"If your algorithm has been built upon datasets that have an inherent bias in them, you may end up having that same chatbot steer a prospective borrower to a product that's not good for them," Smith said. "It might steer them toward a subprime product when they are actually very eligible for a conventional product, especially if you're including special purpose credit programs or down payment assistance programs."
AI chatbots are also still not close to the point, nor intended or allowed by regulators, to be a full replacement for human interaction, developers say. "For today, it's a very reactive system," Hanson said. "It doesn't necessarily ask you the right questions."
To prevent user frustration from building, Beeline programs its chatbot to withhold responses to queries it lacks answers for; it also reads customer sentiment while the tool is in use. If it senses growing dissatisfaction or misunderstanding based on the language or punctuation in a message, Bob will instead route the user to a service agent.
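Beeline hasn't detailed how Bob reads sentiment, but a crude version of the language-and-punctuation heuristic described above might look like this. The keyword list and thresholds are invented for illustration:

```python
# Illustrative frustration check: escalate to a human agent when a
# message shows repeated punctuation, shouting, or frustration words.
# Keywords and thresholds are invented, not Beeline's actual logic.

FRUSTRATION_WORDS = {"confused", "wrong", "ridiculous", "useless", "annoyed"}

def should_escalate(message):
    """Return True when the message suggests growing dissatisfaction."""
    words = {w.strip("?!.,").lower() for w in message.split()}
    if words & FRUSTRATION_WORDS:
        return True
    if "!!" in message or "??" in message:  # repeated punctuation
        return True
    letters = [c for c in message if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.6:
        return True  # message is mostly caps, i.e. shouting
    return False

def respond(message):
    """Route to a human agent instead of answering when frustrated."""
    if should_escalate(message):
        return "handoff_to_agent"
    return "answer_with_bot"
```

A production system would likely use a trained sentiment model rather than keyword rules, but the routing decision — answer with the bot or hand off to a human — works the same either way.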
"That was a key thing in terms of best practices," Kennedy said. "It's evolving so quickly, so we know the best practice, but there often hasn't been a method to do the best practice because it's so new."
As chatbots develop further and become "smarter," mortgage fintech leaders think they may realign the workforce at lenders, taking over the customer service tasks that don't demand a licensed employee.
"We've been big believers in how the mortgage industry itself is ripe for this type of application, where you can take tons of this raw data and have AI go through it and better organize it, clean it, model it to really enhance the human beings that we have working," said Dan Snyder, co-founder and CEO of lending fintech Lower.
"I think you end up getting people like your customer service agents — instead of answering the same questions every day — they can move to higher-paying, elevated jobs," he said. Snyder added that his company was exploring bringing AI tools to the business, either by resurrecting old blueprints the company initially developed a few years ago or by adopting new models.
The next two years should be a pivotal period that will determine the full potential of AI, Snyder noted. "You'll start to see a lot of the vendors that are launching AI into their existing product to maybe speed it up."
But that anticipated growth of consumer-facing AI platforms will likely occur in a still-fluid regulatory environment. Although a full set of rules might hold some lenders back, the opportunity ahead makes development today vital to future growth, Beeline's Kennedy said. While the warnings agencies issued in the spring may not have spelled out the regulatory details many would like, Beeline welcomed them because they hinted at how best to proceed.
"We understand that if you're looking to innovate, there is inherent risk because you're on the bleeding edge of something that regulators and I mean, Congress, has yet to wrap their arms around," Kennedy said.
"Our goal is just to be transparent and as proactive as we possibly can be with the regulatory environment."