Regulators & GSEs On Mainstreaming Mortgage AI

AI won't revolutionize the mortgage world until originators, servicers, regulators, GSEs, and investors are all aligned. In this AI super session, you'll learn about today's top originator and servicer AI use cases, and hear regulators and GSEs explain what's required for AI to go mainstream across all lender operations from marketing and customer service to underwriting and capital markets. 


Transcription:

Maria Volkova (00:09):

Hi everyone. My name is Maria Volkova. I'm a reporter for National Mortgage News. Thank you all for joining us for today's panel discussion on Market-Ready AI Use Cases. Today we'll explore how AI applications have evolved, delve into some of Fannie Mae's, Freddie Mac's, and FHFA's perspectives on AI, look ahead at the future of tech innovation, and touch a little bit on where AI is on the tech hype cycle. I'm joined today by Leah Price, Senior Financial Technology and Innovation Specialist at FHFA; Steve Holden, Senior Vice President of Analytics at Fannie Mae; and Brandon Rush, Senior VP and Head of Digital Experience at Freddie Mac. In my reporting on how mortgage lenders and servicers are adopting new technologies, many have highlighted the shifts in their use of tech over time. For instance, many lenders now rely on ChatGPT-like bots to obtain relevant information. Additionally, companies with call centers have seen a significant transformation in how they analyze conversations. Rather than recording everything for human analysis, AI can now handle the task. So with that in mind, let's start this panel off by discussing how the use of AI has evolved over the past five to 10 years. Steve, would you like to start us off?

Steve Holden (01:48):

Sure. Thanks, Maria. So thanks a lot for joining this panel today, everybody. I'm just going to do a quick grounding just to get us all sort of level set on AI, and I'm going to use very simple terminology, but I just want to get us all on the same page and then we'll go into the question. So I want you to think about AI just simply as technology that's enabling computers to do things that have typically and historically been the domain of human beings. And over time what we're seeing is computers are able to do more and more things that we've traditionally thought about as being things that humans do. And why are we all talking about AI now? We're talking about AI because there are three forces at play. Force number one is the amount of data available has exploded. Force number two is the amount of compute that's available, again, has increased exponentially for many, many decades, but it's gotten to a point where it has extraordinary power.

(02:46):

And the third thing is the methodologies that are available have been evolving through time. And one of the charts that I use internally a lot is, when you think about artificial intelligence, we went through a period of machine learning, more recently deep learning, and then a couple of years ago generative AI came along, which is sort of the new methodology out there. From a technology perspective, it's actually an incremental step forward. But what's really interesting is it's unlocked a lot of capabilities. And so I'll just point really quickly to five things that gen AI does really well, and we've engaged a lot of our business teams internally around these capabilities to help them think about how this tech might help them accomplish their objectives. And so I'll just spin through this really quickly. Gen AI is really efficient and effective at translating information.

(03:38):

So you might think about that in terms of languages, like from English to French or English to German or Spanish, but it can also translate documents from one format into another. So if you're a technologist operating in the realm of requirements, gen AI can convert requirements into code, for example, if you train it the right way. Summarization is a common use case. You've probably seen it: taking massive amounts of information and summarizing it in the format that you want to receive it, 500 pages into five bullet points. Data interrogation is the one I'm most excited about. This is the idea that without a deep set of technical skills, you can now just have an interaction through a prompt window or what have you, and just start asking questions about data and get your answers back. And so you don't need a set of Python skills or SQL skills to really start to understand what the data is telling you.

(04:32):

Long-tail search, or think of it as personalized or customized search, is getting a lot better with generative AI in particular, because rather than providing keywords like you would with Google, you're actually going to provide context. And that context could actually be pretty deep, but in return for providing the context, you're going to get served up answers that are much more specific to the questions that you're asking. And then finally, of course, content generation, which you're all familiar with, things like creating music, creating video, or in fact creating prose, creating text. So those are some capabilities to think about as you start to get your head around what gen AI might be able to do in your business. So with all of that, I'm going to go back to your question, Maria, but maybe I'll let Brandon start if that's okay.

Brandon Rush (05:15):

Okay. In terms of where we've sort of seen things at Freddie Mac, I would say, well, for probably beyond five years, we've been using machine learning algorithms throughout our underwriting and our risk systems. So think about it from the underwriting perspective, across all aspects of how we assess borrower credit, borrower capacity, and the collateral. Really what's sort of changed in recent years, to Steve's point, is just this massive availability of compute power. So we ourselves went to Amazon about five years ago. That enabled us to start running our models at a much greater scale than we were able to do before. And then what you've seen with generative AI is the ability to train models on a hundred billion-plus parameters, which is something that we've never been able to do in the past.

(06:13):

And so where that sort of lands us today is we're at sort of a very kind of novel place. So one of the comments that Steve made was about this ability to do data interrogation. Previously you had to have some knowledge of a query language or a functional programming language or Python to be able to navigate large data sets and extract value. Now you can use natural language prompts, and so you don't have to be a programmer or a quant, you just need to have some domain expertise. And now we're going to start seeing a world where we have natural language prompting opening up all this capability for people.
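A minimal sketch of the data-interrogation pattern Brandon and Steve describe, purely for illustration: a language model turns a plain-English question into SQL, which is then run against a local table. The `llm_complete` helper, the `loans` schema, and the guardrail are hypothetical stand-ins, not either GSE's actual tooling.

```python
# Minimal sketch of natural-language data interrogation: an LLM translates a
# plain-English question into SQL, which is then run against a local table.
# `llm_complete` is a hypothetical placeholder for whatever model endpoint you use.
import sqlite3

def llm_complete(prompt: str) -> str:
    """Placeholder: call your LLM of choice and return its text response."""
    raise NotImplementedError("wire this to your model endpoint")

SCHEMA = "loans(loan_id TEXT, state TEXT, upb REAL, note_rate REAL, dti REAL)"

def ask(question: str, conn: sqlite3.Connection):
    prompt = (
        f"Given the SQLite table {SCHEMA}, write one SELECT statement "
        f"(no commentary) that answers: {question}"
    )
    sql = llm_complete(prompt).strip().rstrip(";")
    # Guardrail: only run read-only queries produced by the model.
    if not sql.lower().startswith("select"):
        raise ValueError(f"Refusing to run non-SELECT statement: {sql}")
    return conn.execute(sql).fetchall()

# Usage (assuming a populated 'loans' table):
#   conn = sqlite3.connect("loans.db")
#   ask("What is the average note rate by state?", conn)
```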

(06:56):

I'd say where it's given us pause is there's both opportunity and caution in this. It's ubiquitous in a way that capabilities like this have never been, apart from Google search. And so there is the great opportunity. I think the caution is that these models are trained on very large data sets where we don't know all the inputs, and that's not something that we've typically done with the models that we've used. And so we need to slow down and develop some of our risk governance practices so that we can manage things like hallucinations and some other risks that we'll probably touch on today.

Maria Volkova (07:34):

Leah, would you like to provide a comment?

Leah Price (07:35):

Sure. So you talked about how things have changed over the last five to 10 years. So I have two hats here. On the one hand, FHFA is regulator and conservator of Fannie Mae and Freddie Mac, but then also we at FHFA are on our own internal AI journey, just like everybody else here, probably. A couple of things over the last five years: I'm not sure if everybody knows, but FHFA was actually the first financial regulator to issue guidance. We issued guidance in 2022 on AI and machine learning risk management, and it actually takes years to put something like that out. So if that was 2022, imagine it was a couple of years before that. So we were ahead of the curve there. We were also the first financial regulator to host a generative AI tech sprint. So we feel like we have really been at the forefront of our regulator peers in trying to clarify what some of the risks and challenges are with this technology. So we would hope that that enables Fannie Mae, Freddie Mac, and also the industry to participate in responsible innovation, which is our buzzword right now.

Maria Volkova (08:53):

And Steve, would you like to round out this question?

Steve Holden (08:55):

Yeah. So over the last five years, I would point to two categories. Category one, and we heard about it a little bit in some of the prior panels today, is this idea that there's an enormous amount of data that institutions just aren't using today because the data isn't stored in databases that are queryable, that are accessible, where you can search and access information quickly. And so this idea of taking unstructured data and structuring it, or organizing it in a way that you can access it quickly, is something that AI has really unlocked, and that technology has continued to improve even over the last couple of years. But for us, as an example, we sit on some 10 billion property images in our databases that we haven't been able to access until we started using deep learning algorithms to teach computers to look at photographs the way a professional appraiser would look at a photograph.

(09:48):

And by doing that, you can very quickly go through images and tag them, looking for things that an appraiser would typically tag if you had an appraiser look through the file. So for us, that structuring of data that's unstructured has unlocked new capabilities that we think have pretty big implications. And the second thing I'll just mention real quick is AI is really effective at prediction when you have a lot of data. So there's a sort of misnomer out there of, oh, I've got AI, I can predict anything. And a really important question to ask is, what is the data that's being used? When you have an enormous amount of data, AI can play a significant role in improving prediction. So the second category I would give, in addition to the structuring of unstructured data, is the ability to predict in a much more fine-grained way, but just remember that requires you to actually have a lot of data, or else it won't be that much better than an ordinary statistical tool. Thanks.
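A rough sketch of the "structure the unstructured" idea Steve describes: run a classifier over a folder of property photos and write the tags out as queryable rows. The classifier callable and the tag names below are invented placeholders; Fannie Mae's actual models and label set are not public.

```python
# Rough sketch of turning unstructured property photos into queryable rows.
# The classifier is a placeholder; the tag names are purely illustrative.
import csv
from pathlib import Path
from typing import Callable, List

ILLUSTRATIVE_TAGS = ["updated_kitchen", "damaged_roof", "unfinished_basement"]

def tag_photos(photo_dir: str, classify: Callable[[Path], List[str]],
               out_csv: str = "photo_tags.csv") -> None:
    """Run a classifier over every image and store the results as structured data."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["photo", "tags"])
        for photo in sorted(Path(photo_dir).glob("*.jpg")):
            tags = [t for t in classify(photo) if t in ILLUSTRATIVE_TAGS]
            writer.writerow([photo.name, ";".join(tags)])

# `classify` would wrap whatever vision model you have; for example, a stub:
#   tag_photos("photos/", classify=lambda p: ["updated_kitchen"])
```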

Maria Volkova (10:50):

So Fannie and Freddie introduced AI-driven underwriting systems, which have been lauded as a huge success by the mortgage industry. How have these systems advanced in terms of incorporating new AI applications?

Brandon Rush (11:09):

Probably one of the most important capabilities that we introduced a couple of years ago was our asset and income modeler. Essentially what this does is allow us to source data from trusted third-party sources, and we use it to verify income and asset information, which helps reduce cycle time and actually reduce the cost to originate. We started with using machine learning techniques to be able to validate income using direct deposits. And what we've done over time is we've incorporated things like digitized pay stubs, transcripts, and other forms of asset data to give us a more comprehensive view.

(12:02):

I'd say another thing that we've invested in is some techniques that allow us to create fair outcomes for borrowers. And so we think a lot about that. We think, how can we use this technology to support our mission, duty to serve, affordable goals, and reach more borrowers? And so there is an adversarial de-biasing technique that we've applied in some of our models, which we've determined would enable us to create more accepts for protected classes without any material reduction in performance. And I think that's been one of the most impactful features that we've been able to deliver to help us with our mission.
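For readers curious what adversarial de-biasing looks like mechanically, here is a conceptual sketch in the style of Zhang, Lemoine, and Mitchell (2018): a predictor learns the credit outcome while an adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalized whenever the adversary succeeds. Freddie Mac's actual implementation is not public; the architecture, shapes, and hyperparameters below are illustrative only.

```python
# Conceptual adversarial de-biasing sketch (not Freddie Mac's implementation).
# The predictor fits the outcome; the adversary tries to infer the protected
# attribute from the predictor's output; the predictor is penalized for leaks.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # weight on the fairness penalty (illustrative)

def train_step(x, y, protected):
    # 1) Update the adversary to predict the protected attribute from y_hat.
    y_hat = predictor(x).detach()
    opt_a.zero_grad()
    adv_loss = bce(adversary(y_hat), protected)
    adv_loss.backward()
    opt_a.step()

    # 2) Update the predictor: fit the outcome, but penalize information about
    #    the protected attribute leaking through its predictions.
    opt_p.zero_grad()
    y_hat = predictor(x)
    task_loss = bce(y_hat, y)
    leak_loss = bce(adversary(y_hat), protected)
    (task_loss - alpha * leak_loss).backward()
    opt_p.step()

# Usage with toy data:
#   x = torch.randn(64, 20); y = torch.randint(0, 2, (64, 1)).float()
#   protected = torch.randint(0, 2, (64, 1)).float()
#   train_step(x, y, protected)
```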

Steve Holden (12:51):

Yeah, that's great. I would say that AI generally gets a lot of criticism about some of the risks around bias, and those are very legitimate, but Brandon's pointing out something really important, which is that used correctly, you can actually do the exact opposite with AI. One of the trade-offs you get with AI is you can get this improved prediction, but you can also lose in terms of explainability. And that can be a real problem that one needs to consider when you're going to engage AI in a technology solution. Something specific we've done, to answer your question around AUS, is for first-time home buyers, we've used AI to basically train a computer to look at a bank statement the way an underwriter would look at a bank statement and identify 12 months of timely rent payments. Rental history is not in a credit report most of the time.

(13:47):

If you go back a couple of years, you would see it a little under 5% of the time. Lately, that reporting has improved for a number of reasons that the GSEs have actually engaged around, but it's still below 10% of the time that you can see it there. And so by taking an asset report, by taking a bank statement, and then having basically an algorithm read that statement and find that evidence of a rental payment, you can take that data and then you can update the credit score to reflect the fact that this first-time home buyer has been successful at paying their monthly rent payments. And that's a really important fact when you're considering a borrower's sustainability in a home mortgage. So that's an example of something we did, and to date, I think there are 7,000 or so borrowers who would not have been able to get approved were it not for the fact that we're able to pull their rental history and use that in the underwriting assessment.
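A toy illustration of the pattern Steve describes: scanning bank-statement transactions for twelve months containing a rent-like payment. The matching rules here (a keyword plus an amount tolerance) are invented for illustration; the production underwriting logic is far more sophisticated and is not public.

```python
# Toy illustration of spotting 12 months of rent payments in bank transaction
# data. The matching rules below are invented for illustration only.
from datetime import date  # used in the usage example below

def months_with_rent(transactions, keyword="rent", expected=1800.0, tol=0.1):
    """transactions: iterable of (date, description, amount) tuples."""
    months = set()
    for txn_date, description, amount in transactions:
        looks_like_rent = (
            keyword in description.lower()
            and abs(abs(amount) - expected) <= tol * expected
        )
        if looks_like_rent:
            months.add((txn_date.year, txn_date.month))
    return months

def has_12_months_history(transactions, **kwargs) -> bool:
    return len(months_with_rent(transactions, **kwargs)) >= 12

# Usage:
#   txns = [(date(2024, m, 1), "ACH RENT PAYMENT", -1800.0) for m in range(1, 13)]
#   has_12_months_history(txns)  # True
```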

Maria Volkova (14:46):

Leah, you mentioned that the FHFA is going through an internal journey of using AI, and I wanted to ask, what pain points is AI currently addressing within the FHFA, or that you're...

Leah Price (15:02):

Alright, well, let me take a big step back and say, so President Biden late last year issued an executive order to all of the federal agencies. So nothing to do necessarily with Fannie Mae or Freddie Mac, but internally, all of the agencies need to assess where they are with artificial intelligence, come up with an inventory of use cases, come up with proposed use cases that they want to pursue, and in September of this year they need to post a compliance plan with the executive order. Everybody had to select a Chief AI Officer. So FHFA selected Tracy Stefan, who is my boss, as our Chief AI Officer. And so FHFA is extremely early in its AI journey. So we will, and we have, I believe, published our use cases. The number of active uses of AI is extremely small, but we're just really building up the muscle right now to understand what we need and what kinds of problems AI could solve.

(16:10):

The things that we bring up are the same things that Fannie, Freddie, and everybody else bring up. So we're really excited about code generation. We are really excited about using gen AI to ingest public comments, since we get a lot of public commentary, and to summarize it. So, document summarization. A lot of what we do as a regulator is just review documents at the regulated entities. And if you think about other regulators, regulators that regulate hundreds of organizations, they are ingesting significant numbers of documents every day. So I actually think the business of regulation, if you want to call it a business, could be pretty transformed and become much more efficient, and I think that's good for everyone, taxpayers, but also for the companies that are out there. It could make audits and reviews much more meaningful and impactful.

Maria Volkova (17:04):

Steve, I was hoping maybe you could talk a little bit about some of the pain points that AI is addressing internally inside of Fannie Mae?

Steve Holden (17:14):

Well, one of the things that we believe, specific to generative AI, which I haven't really touched on yet, is that where we see a real opportunity internally is this concept of knowledge management. And so for those of you who are part of a company where you don't believe that information flows as efficiently as it could, I believe gen AI is going to come in and really upend that. And the reason why is because organizations sit on a massive amount of text data, and you just had Leah talk about that in terms of feedback and commentary that you get from the public domain. But we all sit on this massive amount of textual information. And one of the opportunities that I think gen AI is going to bring to the enterprise is this idea of being able to take textual information and store the meaning of that data in databases that, again, can be accessed and queried and have the information extracted.

(18:12):

And so if you've heard of some of this technology around vector databases and vector embeddings, and if you haven't, you can look it up on YouTube, but it's really a part of the whole generative AI tech stack. I heard in an earlier session someone talk about RAG architectures, retrieval-augmented generation. So I feel comfortable bringing that up here since you took the 101 training. But it's this idea of being able to take your internal data, vectorize it, and make it easily accessible so that you can then have a gen AI large language model access it really efficiently. What that means is if you've ever got a question about a policy, about a procedure, about customer feedback, whatever it is, it's going to be really easy to get to that information. And so to me, those are some of the pain points, again from an internal dimension, that we're looking forward to tackling.
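A bare-bones sketch of the retrieval-augmented generation pattern Steve references: embed internal documents, store the vectors, retrieve the passages closest to a question, and hand only those to the language model. The `embed` and `generate` functions are hypothetical placeholders for whatever embedding model and LLM a given stack actually uses.

```python
# Bare-bones RAG sketch: embed documents, retrieve the closest passages for a
# question, and pass only those to the model. `embed` and `generate` are
# hypothetical placeholders for your actual embedding model and LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector embedding for `text`."""
    raise NotImplementedError("wire this to your embedding model")

def generate(prompt: str) -> str:
    """Placeholder: return an LLM completion for `prompt`."""
    raise NotImplementedError("wire this to your LLM")

def build_index(documents):
    vectors = np.stack([embed(d) for d in documents])
    # Normalize once so retrieval can use a simple dot product (cosine similarity).
    return documents, vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def answer(question, documents, vectors, k=3):
    q = embed(question)
    q = q / np.linalg.norm(q)
    top = np.argsort(vectors @ q)[-k:][::-1]  # indices of the k most similar passages
    context = "\n\n".join(documents[i] for i in top)
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")

# Usage:
#   docs, vecs = build_index(["Policy A says ...", "Procedure B requires ..."])
#   answer("What does Policy A say about appraisals?", docs, vecs)
```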

Maria Volkova (19:13):

And Brandon, would you like to provide any insights?

Brandon Rush (19:17):

I actually agree with what both Steve and Leah said. I mean, if I just sort of combine the two, one of the things we've thought about is even just putting together all the advisory bulletins, all the regulatory guidance, all of our own internal findings, to make sure, as we make decisions on a go-forward basis, that we have awareness of how those decisions may be impacted by this historical guidance. And it can be really hard to navigate, like Steve said. And so curated data, I think, is really important. When you just go out and use ChatGPT or use Claude, you're using data on the internet, and you don't know exactly what the quality of the data is. But when you harness the natural language processing power of gen AI with well-managed knowledge bases, like Steve said, that's where the real power is. And the real power of that really comes from, and I think you also said this, if we can move away from a world where somebody who's a domain expert has to go to a team to build a report, to run a query, to do something technical, and we take the friction out of asking questions around the data, that's really powerful.

(20:38):

When you can ask a question, it only takes you five seconds to get a response, and then you realize, I didn't form that prompt correctly, let me try it again. That changes the way you work. When that takes an hour or 30 minutes, it really gives you pause if you're going to try to query that data again. So I think that's where some of the power of this is really going to lie. I would say the other place that we probably didn't touch on, where we're using it quite extensively, is just monitoring our networks, our systems, any types of cyber vulnerabilities. Correlating all of that information is really hard. There are a lot of good tools out there that are getting better and better at this. We use them quite extensively. I think those are really important.

Maria Volkova (21:27):

Okay. Fannie and Freddie offer many programs, analytics, and business intelligence to mortgage lenders from the time of loan origination through servicing. Can you guys maybe give a few examples of AI use cases in some of these offerings, and maybe give us a peek at anything that's being developed, or is currently being considered for development, that will use gen AI? Steve?

Steve Holden (21:54):

Sure. So what I would say is, whilst it's very easy to get excited about gen AI, the technology itself is still going through some growing pains. And so for example, the tools themselves are still pretty easy to hack, and so they can say some pretty harmful things. And if you read the press about gen AI, you'll see lots of stories out there about gen AI products gone wrong. And so I think there are still a lot of risks that need to be addressed. And so the way we're thinking about this at Fannie Mae, specific to gen AI, is to really be internally focused initially, where we can really control the tools, and we have internal codes of conduct and very specific uses where we can deploy it, and risk management practices and a very strict set of controls. The technology itself is evolving very, very quickly, every week.

(22:59):

It seems like there's another advancement, and if you're paying attention, it's a pretty neat place to be keeping up with various YouTube channels and podcasts and things, but it still has some work to do in terms of being ready for the enterprise. And what I like to remind people is that it's really easy to gin up a gen AI science experiment project and show some people some really cool capabilities. And we've done a lot of that internally. But where the challenge really lies is taking those capabilities and scaling them responsibly, ethically and productively and mitigating some of those risks that are out there. And so again, we're really focused on low risk use cases that are internal and really productivity focused at this early phase, which may not be as exciting as maybe some of what the opportunities are that are out there, but we'll get there. Our engineers are really figuring out how to do this cost-effectively and risk appropriately, but in my opinion, we're not ready to start exposing this outside of our firewalls.

Maria Volkova (24:07):

Brandon?

Brandon Rush (24:08):

Yeah, what I would say is maybe some considerations and maybe some opportunities. I'd say one consideration, let's say from a tech stack perspective, that we think about is: where's the gravity of your data? So when we went into the cloud, it was hugely expensive to implement that control structure and security structure. And the big cloud providers have some differentiating capabilities, but they have a lot of commonality. And so just one thing that we think about, that maybe you all could think about as well, is to what extent the gravity of your data with any one cloud provider shapes the tech stack and the choices that you make. And without saying things that maybe aren't public yet or are confidential for us, if I look at this great sheet that Steve put together, think about the possibilities with translation in terms of borrower education.

(25:05):

So think about our ability to translate mortgage documents to provide more education in a borrower's native language, and how that would help us sort of further our goals and our mission. And on the content generation side, Steve, Leah, and I had the opportunity to work together at the recent tech sprint that FHFA sponsored, and there were some really good ideas explored around content generation. So think about the ability to automate the generation of leases and how that might help smaller landlords. Think about the ability to use this to automate the generation of things like down payment assistance applications. As we know, that's one of the big barriers for borrowers. We have a product called DPA One, and something that we might think about here is how we could leverage this kind of capability to sort of extend and enhance something like that.

Maria Volkova (26:07):

Yeah, so Steve mentioned potential risks around generative AI. And given that there are many examples of gen AI being unreliable and, frankly, sometimes regurgitating a lot of false information, how can stakeholders in the mortgage industry, including housing agencies, effectively regulate and use AI and gen AI, and how can they balance innovation with the need for appropriate safeguards around this technology?

Leah Price (26:41):

We have a slide for that. No, we weren't sure if we would use this, but here we go.

Steve Holden (26:46):

Here you go. I can jump in. The way I talk about risk internally is: think about AI not as a new risk entering your arena, but really that you already manage a lot of risks, and AI is going to fundamentally change each and every one of those risks. I think if you apply that lens to all the risks you already manage, you'll realize that AI actually touches everything. And so for example, on the compliance side, you're thinking about some of the legal risks around model biases, or you're thinking about intellectual property and maybe what is the data upon which these models are built, or maybe you're thinking about data leakage, and if some employee drops a bunch of proprietary information into a prompt window and submits it, what happens to that data? Is that going to get leaked out and show up on someone else's screen in another enterprise somewhere? And so on almost every risk dimension that you might be managing, AI has something to say about that. And so what I encourage people internally to think about is: if you are a risk management professional, you've got to understand what AI is doing to your risk surface and how your strategies to mitigate and manage those risks are going to have to change because of AI. And so that's sort of the lens that I would place on it.

Maria Volkova (28:19):

Leah, would you like to?

Leah Price (28:20):

Well, I would just say that in financial services broadly, we are very experienced with AI risk management, because none of this is new. So I actually feel like, compared to the rest of the world and other industries, we're pretty well positioned to manage some of the risks that are out there, versus industries where this is completely novel. Here on this slide, we have some of the specific gen AI risks that are out there, the ones that are new, that I know we're thinking about: what do we need to articulate to the world about how to think about hallucination or bias or privacy? It takes a long time, as I said in another panel, to put out any guidance, but I think it'll be important for all of the regulators in the financial services industry to start to take a stand on what needs to happen at the regulated entities.

Maria Volkova (29:20):

Brandon, would you like to provide a perspective?

Brandon Rush (29:23):

And I think the hard thing here is you've got to find the right balance between continuing to experiment and slowing down. And I think Steve sort of hit on this kind of focus on low-risk use cases in the beginning while you develop the muscles around this extended risk taxonomy. So one way we've thought about that is we created what we call an enterprise accelerator group, and we try to curate use cases. And this isn't intended to be long-term, it's just maybe for the next 12 to 18 months, because we have a couple of people in the company who are very knowledgeable and passionate about this. And then we have our risk professionals, our legal professionals, and others, and we sort of have to protect access to them. We think of them as scarce resources. And so we're running our use cases through this accelerator right now so that we get diversity, so that we're experimenting with some different things, but we're also mindful of where we're making important technology decisions. Big-bet decisions are things that you want to be very mindful of making, and very cautious about, because that landscape is changing pretty rapidly.

Maria Volkova (30:44):

I wanted to pick your brains on what emerging AI-related trends or developments you anticipate in the future that are currently not present in today's landscape.

Steve Holden (31:01):

Well, Leah has the same answer as me, so I'll let her go first.

Leah Price (31:04):

Alright, well, we were going to talk about the hype cycle and where gen AI is in the hype cycle, and then what we're excited about. Is that okay? Okay. I took over.

Maria Volkova (31:12):

But we really want to talk about the hype cycle.

Leah Price (31:16):

I love the hype cycle. Alright, it keeps changing. So last year, in a panel with Maria, we talked about AI and I brought up blockchain, talking about the hype cycle and where we are. So just for those of you who don't know, Gartner has this way of thinking about emerging technologies, and it's fun because any of us can think about technology in this way. What are society's, what are the world's expectations of a given technology, and how do they shift over time? So in this slide you'll see, typically, and I'll talk about gen AI, there's some kind of an innovation trigger. So for gen AI, think about, okay, the trigger may have been ChatGPT going live in November 2022. That was a big deal. Suddenly expectations for that technology go way up until there's a peak.

(32:04):

Then at some point, maybe something bad happens. In the case of blockchain, maybe billions of dollars are lost, FTX collapses, everybody's like, yikes, don't want to have anything to do with that horrible technology. People just want to throw it in the garbage. Then it enters the trough of disillusionment, at which point you have some people that maybe have invested or have built companies that actually have really good use cases for the technology. And you start to see kind of a clearing out to the winners. Alright, so the crypto companies or blockchain companies that actually survived, they have really interesting, valid use cases. They've survived, they start to prove out their worth. And then ultimately, over time, there's the plateau of productivity, where everybody's happy and these technology companies have survived and are helping the world be a better place. So Steve hates when I compare blockchain and crypto to gen AI, so that's why I had to bring it up. Alright, but for gen AI, I actually interviewed Gartner recently and asked them where they think we are on the hype cycle. So I'm actually curious, you guys, what do you think with gen AI?

Steve Holden (33:16):

I think there's enough press coming out questioning where the big productivity gains are that we're starting to turn a corner,

Leah Price (33:25):

Which means where?

Steve Holden (33:26):

We're entering the trough of disillusionment, but we're early in the trough. Yeah, we're just getting started.

Leah Price (33:31):

Gartner said the same thing. What do you think, Brandon?

Brandon Rush (33:38):

So I would say six to nine months ago, it was the biggest FOMO moment that I think we've all had in a while. And I've seen studies showing that maybe 10 or 15% of companies have implemented this technology at scale. You've got maybe 50% actively doing a lot of pilots, and you still probably have 30 or 40% of people who haven't even started yet.

Leah Price (34:10):

Well then, okay, so I took over your question, but then you're

Maria Volkova (34:14):

Like, I saved the question for last, the best for last. It was about the hype cycle; you jumped ahead of it.

Leah Price (34:19):

So your question was, what are we excited about over the next five to 10 years? Alright, so I'll jump in again and say that I'm excited about AI agents. AI agents are still in the innovation trigger area, according to Gartner. So what is an AI agent? Think about a more complex planner. An example could be: I want to go on a trip to the south of France. I could just tell my bot, maybe it's Siri, I want you to plan me a trip, and it knows all my preferences, it knows what my budget is, it knows who I want to travel with. It can book my flights, it can book my hotel, it can book dinner reservations: complex reasoning. So that's one potential use case for an AI agent. Another big one would be: okay, I want you to build me a multi-billion-dollar business selling widgets to this market. And it goes off and comes up with a business plan, actually executes on this business, builds the software, does all this stuff.

(35:23):

So we are really far away from being able to do that. And I asked OpenAI, I'm like, okay, so how far until it can at least plan my vacation? And they said they're working on it really hard. But in the meantime I read a study that said the best AI agents in the software engineering space are accurate or effective 13% of the time right now. So we have one-three. I wouldn't want to book a vacation that's only going to be right 13% of the time. Okay, so we're a long way away. But the way things are moving so quickly, maybe five to 10 years away we could see: hey, I want to buy an apartment on the Upper West Side of Manhattan. Siri, help me do that. And it just takes care of all my letters of reference, it finds me a place, it takes care of the loan, does everything for me.

Maria Volkova (36:16):

We have about three minutes left.

Steve Holden (36:18):

Yeah, let me double down on that answer and just say I totally agree; I was ready with the same answer. There's still a need to establish some protocols and standards to get agents to work correctly, but I want you to think about this in terms of humans interacting with computers. What gen AI is going to do, and is already doing, is changing the way we interface with computers. I was just chatting with one of the vendors outside about their graphical user interface to access their tool; one day we're going to look at that and think it's really quite backward, because we're just going to engage very differently. And you're already seeing that with ChatGPT, which is, by the way, just one way that you can interface with a gen AI capability. But think about mortgage, think about origination, and think about servicing, and think about all of the human-to-computer interactions that are taking place end to end during one of those engagements, which, by the way, takes a period of time. And to Leah's point, this is where I believe that agents are really going to play an important, interesting role as the technology matures, because all of those handoffs that you're seeing are going to be much more fluid and effortless and much more aligned to borrowers' preferences. And I think that's going to really evolve how we interact in what is currently a very complicated industry for what's ultimately a pretty simple product, the fixed-rate mortgage.

Maria Volkova (37:37):

Brandon, any final food for thought?

Brandon Rush (37:39):

I'll just add two words: edge computing. So this device is going to get more and more powerful, and you're going to see more and more of that processing happen here. Right now, especially for large language models, it does rely on the use of cloud-scale infrastructure. I think the other thing that is really quite powerful, and a lot of what Steve and Leah were talking about, is sort of productivity. There's a basic thing we could do today, but what we have to overcome are things like legal and privacy concerns. One of the most powerful things we could do is audio-to-text transcription of meetings and summarization.

Maria Volkova (38:23):

On that note,

Brandon Rush (38:23):

Think about how much more productive and concise and clear we could be if we could overcome a lot of those concerns.

Maria Volkova (38:30):

Okay, then on that note, thank you guys. Thanks. Thanks a lot. Yeah.