Where's The ROI In Generative AI In 2025?

What generative AI is live and fully functional with real ROI today? Is it just marketing? Or are we progressing on other critical use cases like loan approvals, property valuation, customer education, borrower propensity modeling, interactive voice assistants, and other call center capabilities? Lenders and servicers share what's working, what's not, best practices, and tough lessons learned.


Transcription:

Steven Cooley (00:09):

Good afternoon. How's everybody doing? All right. My name is Steven Cooley. I'm from Mortgage Advisor Tools. I'm going to introduce our panel real quick. We are going to talk about ROI in Gen AI and marketing today, so pretty excited. We have an incredible panel. This is Taranjeet Kaur from M&T Bank. Yes, and this is Alec Hanson from LoanDepot. And this is Dan Vasquez from Rocket Mortgage.

Dan Vasquez (00:34):

Correct.

Steven Cooley (00:35):

So far so good. And Gemma Currier from Guild Mortgage, so we're very excited. I'm going to read off some stats real quick to fire this thing off. This is pretty awesome information. I went to a tech conference right before this one, I suppose, and learned some cool stuff. So according to McKinsey, 91% of employees use AI and like it; however, only 65% of businesses use generative AI regularly. Only 13% of businesses have multiple use cases for AI, only 5.25% of businesses report a small gain, and 2.2% of businesses report a large gain. And Gartner actually predicts that over 30% of all Gen AI projects will be abandoned by 2025. So those are some pretty incredibly interesting statistics, I think. And so I guess my first question for our panel is: how do those statistics apply to your current business and how you use Gen AI today?

Dan Vasquez (01:41):

I'll start. I'm not too surprised to hear those numbers, to be quite honest. I think we're still so early in uncovering the right use cases and the right implementations of technology, the right orchestrations of various technologies, to really make good use of these innovations. A lot of the use cases that I see out there that are already in production, especially in this industry, I would equate to buying a brand new Ferrari but only using it to drive to the 7-Eleven on the corner. You'd probably be thinking, well, it's kind of a cool car, but I don't know if I really need it. But think about how much else that car can do that you're not taking advantage of yet. I think that's where, honestly, most of us still are with AI. I'm not surprised, but that number will change, I predict.

Taranjeet Kaur (02:27):

So I second what Dan is saying. We are in the infancy of AI, I would say. And I also feel that it's like teaching a kid. AI is a six- to seven-year-old kid right now who knows a lot of things and does not know the appropriate use of them. That's why you'll see the models that we are using today be so out of date in no time

(03:03):

Because we are going to learn a lot and we are going to improve a lot. So that's why you see those numbers, and they look accurate.

Steven Cooley (03:09):

Excellent.

Alec Hanson (03:10):

I get to play with it a lot more on the marketing side. We get to mess around with content creation, mess around with copy, play with website creation, and that's kind of the fun, easy stuff; that's driving to the 7-Eleven. So there's a lot of that happening right now in my world. When you get into servicing and underwriting and operations and that stuff, that's where I think we're all very young and just kind of dipping our toes in the water, figuring it out.

Gemma Currier (03:33):

Yeah, I mean I agree with all of you, and I think that innovation generally is a cycle. If we're thinking about innovating new technology, we know that we're going to keep testing things and piloting things, and then what comes out of that is going to be something in the future. So to say 30% of projects are going to be abandoned, that doesn't surprise me, because we know that we need to keep testing and piloting this type of technology before we really see the benefit and the ROI out of it.

Steven Cooley (04:05):

Especially concerning marketing. I mean, social media had this same challenge, and it was considered the black hole of marketing. Nobody could find the ROI, and Alec knows a little bit about this. In fact, there was a stat a long time ago that if you're not involved in social media, you'll go out of business in five years. So that being said, have you had an ROI-positive experience with AI, or one that's projecting to be, that you could identify today?

Alec Hanson (04:37):

Yeah, I can jump on that first. From speed to market and content creation, it's a massive enhancement, because the staffing required to create content at scale is now greatly reduced. And so the ability to move faster, to iterate quicker, to take pieces down and up faster, it's life changing. So it's going to radically enhance performance just from a content perspective. And then that immediately points to ROI, because I can move a little more quickly and nimbly and I can empower my team to be more effective.

Dan Vasquez (05:08):

And across the enterprise, I think a lot of the use cases that are in production are really working, like team member efficiency. It might look like small wins, but depending on the size of your org, if you make a client service rep 10% more effective throughout their day, there's a hell of a lot of ROI impact there. Absolutely. So we're seeing a lot of that.

Gemma Currier (05:26):

We're using it. We've got an AI chatbot that we rolled out specifically for products, and in the couple of months it's been out, we've had over 55,000 questions asked of it from our field. So I think anecdotally we're seeing that our underwriters are saying they're receiving fewer questions. Our loan officers are saying they're able to get back to their customers and to their partners faster. We haven't been able to fully measure that ROI, but we're seeing an impact in behavior. But we're also seeing that we have people who don't trust it yet. At first we started to see more questions to the product desk because they were asking, is this answer right? So I think it just tells a story that we're in the very beginning stages of this, and we have to build trust with our users, and when we do, we'll start to see more momentum.

Dan Vasquez (06:20):

I think the use cases are definitely there though. I mean the previous panel brought up a few that are

(06:24):

Very top of mind for me when you talk about translation. I mean, right now, at least speaking for my own company, we're mostly English speaking; most of our docs are in English. We do serve Spanish as well as we can, but think about what a melting pot the US is and how big a population of folks are out there where English isn't their first language or their preferred language. You talk about trust; how can you build trust with a company on a transaction as big as a mortgage when they don't even speak your language? It's probably untenable to say that we're going to staff up a hundred different languages at Rocket Mortgage, but if AI can help us bridge that gap, build that trust, and capture those markets, I mean, that's huge.

Taranjeet Kaur (07:07):

So in mortgage, between originations and servicing, one of the biggest use cases is OCR reading, the natural language processing. I think most lenders at this point have explored that in some capacity: reading a document and automatically doing some optical character recognition, the OCR. And now with the advent of Gen AI, you can use that data and mimic a customer interaction based on the data points that you're reading. That's a natural maturation for an AI model, where you pull the data that was generated through machine learning and then use that data to augment the customer experience. Those are natural fits.
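
A minimal sketch of that OCR-to-Gen-AI hand-off might look like the following. The field names, prompt wording, and example data are illustrative assumptions, not any lender's real setup; the actual OCR step and the model call would come from whatever document-AI service and LLM a shop uses.

# Hypothetical sketch: data points pulled by an OCR/ML step become the inputs
# to a prompt asking a Gen AI model to draft a customer-facing update.

def build_borrower_update(fields: dict) -> str:
    """Turn OCR-extracted data points into a prompt for a customer-facing draft."""
    missing = ", ".join(fields["missing_items"])
    return (
        f"Write a short, plain-language status update for {fields['borrower_name']} "
        f"about their ${fields['loan_amount']:,.0f} loan application. "
        f"Politely request these outstanding items: {missing}. "
        "Do not promise approval or quote rates."
    )

# Example fields that an OCR/ML extraction step might have produced.
extracted = {
    "borrower_name": "Jordan Lee",
    "loan_amount": 385000,
    "missing_items": ["two most recent pay stubs", "a homeowners insurance quote"],
}
print(build_borrower_update(extracted))  # this prompt would then go to the Gen AI model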

Steven Cooley (07:49):

Love that. So it's always kind of hard to talk about the experiments that have gone wrong, but I think it'd be helpful for our listeners to understand an AI experiment that you would not recommend or don't see working out in the near future. And so I'd love to hear about

Alec Hanson (08:08):

Our greatest failures. Solid start.

Steven Cooley (08:10):

Yeah, let's do a round of failures. Why not,

Alec Hanson (08:13):

Dan, I'll jump on a grenade on that for you. Yeah, I'll take it.

Dan Vasquez (08:17):

Give me a second to sift through the list.

Alec Hanson (08:20):

We have a very AI-enabled servicing team. AI came in early in servicing and has been a huge ROI lift for us. But when I say AI enabled, you've heard about the full AI audio, call them phone bots, that you can talk to. We're not going in that direction, but I'll tell you, I tried to bring it to the front for client interaction in social media, and it wasn't capable of doing what we needed to do in the regulatory environment. It was just far inferior to a real person, which is hard to staff for, and that's part of the issue. We wanted to bring it forward and it just wasn't ready. So it was a learning experience. Let's just put it that way.

Steven Cooley (09:00):

That's perfect. Love it.

Taranjeet Kaur (09:01):

So one of the most common AI models is GPT. We have all used GPT-4. If you ask ChatGPT whether 9,999 is a prime number, it instantly says yes. It doesn't know it; it just says yes, because GPT does not know when to say yes or when to say no yet. And that's why I call it an infant who doesn't know. I have a six-year-old; when she started using the word "really," she would use "really" in every sentence, and then parents or grandparents have to teach her: no, you don't use "really" here, you use "really" there. That's your GPT today.

(09:39):

The most famous AI model. So yes, you see wrong answers like the prime number; if you ask GPT even right now, it'll say yes, 9,999 is a prime number. The way it learns is from what you input, and as our processing power increases over time, I think that's what's going to add significantly to the correctness of AI models.
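
For what it's worth, the prime-number example is the kind of deterministic question that ordinary code settles without a model at all; a minimal sketch, using only the numbers mentioned above, is:

# Deterministic trial-division primality check -- no model, no guessing.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(9_999))  # False: 9,999 = 3 x 3,333, so a model answering "yes" is wrong
print(is_prime(109))    # True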

Dan Vasquez (10:00):

We worked for, honestly, years probably on solving the problem of explaining and summarizing mortgage guidelines, in context, for our underwriters, which seems like a classic case for Gen AI. In fact, I think there are multiple companies out there today that are actually doing a pretty good job of it and selling their services now. But we started this a couple of years ago, and we learned real quickly that you need a very, very high degree of confidence to feel good putting this in front of a mortgage underwriter or somebody who's going to make a decision. It took us a lot of iteration and experimentation to get to the point where we felt comfortable with that. Eventually we got there, and that product is called Pathfinder now, but there were quite a few stops and starts, and complete rewrites multiple times, in our journey to getting that one right.

Gemma Currier (10:49):

Yeah, I can share an example that isn't Gen AI, but I think the lesson carries forward. A couple of years ago we piloted some lead scoring models. We have a retail, distributed loan officer base and an enterprise CRM, and we wanted to test whether, if we gave our loan officers lead scoring, they could leverage it to convert at a higher rate. And what we found out is that, at the time, it really didn't change behavior. So whether a lead had a high, medium, or low score, they were going to call that person anyway. So I think the lesson is really understanding what you are trying to accomplish and whether your organization is ready. I think that if we had had more automation, more than just the lead score, at that time, we could have gotten more adoption. So just really understand what you're trying to do, where your people are at, the functions, what they're actually doing today, and what you're trying to change.

Steven Cooley (11:59):

Love that. So AI can be a considerable investment, or it can be a small investment, and we're talking about generating ROI from what sometimes probably feels like an experiment. So where would you recommend investing if you're just entering AI, in an application or a process or a model? At what level, and what do some of those investments look like?

Taranjeet Kaur (12:31):

I'll take that. So I'd start with your strategic initiatives, and this would be different for each organization depending on where you are strategically placed. If an organization, let's say, has hired LOs in a certain segment, they know these are high producers and they're going to create volume. I would place AI there because I'm betting on them, to support them in their customer interactions. So place your bet on your strategic initiatives when it comes to AI, just like any other strategic initiative you want to take.

(13:07):

The second option could be, sorry, the second option could be wherever you have more manual, human processes. Be it OCR reading, be it legal and compliance, be it underwriting, wherever you have a lot of human interaction, I would say place your bets on those areas.

Dan Vasquez (13:27):

Yeah, I was going to say, if you're really just starting off your journey as a company, I would start with something even just off the shelf that you can put in your team members' hands and get them experimenting: a ChatGPT Enterprise, a Claude Enterprise, a Microsoft Copilot. I think you threw out a stat, what was it, 15% of team members or companies have even experimented with AI yet? Those are rookie numbers. You've got to pump those numbers up. Rookie numbers.

Steven Cooley (13:55):

13% have multiple use cases.

Dan Vasquez (13:58):

Yeah, we really believe that the innovation, the ideas, the use cases that are going to really provide benefit to the enterprise, they're going to come from the users themselves. They're not going to come from my AI strategy think tank sitting up at the center in the ivory tower. They're going to come from a mortgage banker who has access to a tool they can safely experiment with, and they're going to come to us and say, hey, look at this thing that I'm doing that's saving my team 30% of their time or closing clients faster or more accurately. That's where the innovation's going to come from, and you're not going to get those ideas unless you're giving folks the tools to figure it out.

Gemma Currier (14:32):

Yeah, Dan, I couldn't agree more. I think that providing a vendor tool that already has that safe enterprise model is critical for all of our employees, not just field employees, because there are a lot of people who are very curious about Gen AI. And I think it's not just providing the tool, but providing the education: what's safe, what's not, customer privacy concerns, that kind of thing. Then also how do you prompt, how do you get curious, how do you share the best practices? We actually are just finishing a four-week AI automation bootcamp series where we enlisted our coaches to really get in with us and come up with some of those use cases for loan officers, what they might use it for. So I think it's being curious

Alec Hanson (15:23):

For sure. I see some old dogs out there in the light here. Remember when automated underwriting came out? It was supposed to change the world. I remember it was supposed to revolutionize everything; underwriters would be out of a job. And of course we still have underwriters. I think embracing new technology is a culturally significant piece of how a company operates, and if you have a culture of people who are playing and experimenting and accepting it and learning, I think that's exactly what Dan was alluding to. That drives real change. I've asked a lot of my creators, and even people outside, well, what tools are you using today to try to enhance your ability? Are you actively pursuing and finding new tools? If they're not doing it, then we're not doing it. And I totally agree: a ground-up strategy rooted in safety is just really paramount to a culture.

Steven Cooley (16:12):

And these statistics were about all business sectors. They weren't specific to mortgage, which I think is really interesting. We give ourselves a hard time for not always being at the cusp of innovation, but I would argue that we're probably right where we're supposed to be. And 91%, let's just say. So 91% of your folks are using AI as it stands, and that's potentially a lot of people. How do you rein that in and get a better understanding? I mean, you touched on it a little bit, but there's a high likelihood that they're all using some sort of AI, maybe not necessarily Gen AI, but probably. So how do you rein that in, prioritize it, optimize it, and ultimately drive ROI from it?

Alec Hanson (16:55):

I mean, I sound like a bit of a broken record here, but I think it's culturally how you run your team and how you run your organization. Are you having these conversations? Are you living in the technology? Are you discussing it? Are you exploring it? Or are you just head-down, business as usual, solving the problem you used to solve every single day? If you don't have that culture of innovation within your team, I feel like it's going to struggle. It's going to be hard for adoption; it's going to be hard for ivory tower people to say, here's a new tool, use this, if they're not actively playing in it and figuring it out themselves.

Dan Vasquez (17:22):

You've got to make it your business. I mean, AI is great, but it's not this magic hammer where everything becomes a nail. At the end of the day, when you get back to, well, how am I going to make the most money out of this thing, you're probably going to be looking at a lot of the same big problems and big client issues that you had five years ago, before you ever heard about Gen AI or were working with AI at all, right? But now you have a new tool, and you need your people to understand how to use that tool to solve that problem, maybe in a new way, or, more importantly, to understand when it's not the right time to use AI and not waste your time on some of those mistakes that we talked about. Very real, too.

Taranjeet Kaur (18:00):

So just to piggyback on what Dan was saying about when not to use AI and when to use AI, I would say it comes down to your data, because all of the intelligence comes from your data,

(18:11):

Which is not often talked about. People just want to pick a tool from the market and treat it like a magic wand that will fix all their problems, and that's probably not going to happen. What's going to happen is you're going to use your data, to the best of your ability, with the AI model. So if your data, whether it is structured or unstructured, is curated enough for you to make decisions on it, that's when you really want to use AI models. Otherwise, your results are going to be 13% accurate.

Steven Cooley (18:44):

Well, let's talk about the chickens coming home to roost, right? The data part of it, data integrity, and every analyst or CRM manager or head of marketing who's been begging their team to please enter in all the information. What's that experience been? Because obviously there's a direct connection from data integrity, data accuracy, data quality, to even seeing an optimized result from an AI application. So what's your experience been with that, and how have you had to fix it?

Dan Vasquez (19:20):

I'll open up with that. To be honest, yes, it's a new challenge, and it brings to light some existing issues that were already under the surface in most data orgs. But for us, it's been a bit of a godsend, I'll be honest. At Rocket, we have a lot of data. We have a lot of data sources. Our data footprint is growing exponentially every day. It's difficult to keep all of that well governed, well maintained, usable for analytics, usable for research. And what we've had to do in order to make use of our data footprint for Gen AI is really get deep with the business and with the stakeholders who can fully explain: okay, what is your context? When you ask me this question, where are you expecting some of the inputs to come from? What are you expecting this word in your question to exactly mean?

(20:14):

And you'll find that even within the same leadership chain and the same refi banking org, this VP, when they ask a question, means a completely different thing than the person sitting next to them. Previously that led to all kinds of feedback of, oh, our data's not good, we can't trust this. Now we're getting to the root of those issues and finally curating the right data sets for the purpose of using them with Gen AI. But it's also paying off elsewhere: now we have this data set, now we can use it for ML, now we can use it for traditional analytics. It forced us to go through that exercise, and it's leaving us with a significantly better data organization as a result.

Gemma Currier (20:55):

That's amazing. I think there's that enterprise data plan and infrastructure, and then there's the boots on the street, the CRM, what's happening with the data and getting our loan officers to actually put in information. And of course you have to have everything synced together so that you've got real-time information between all these things in order to create impactful marketing campaigns and really use Gen AI. But I think this actually gives us a carrot. If we can create the promise of continuing to leverage this type of technology and create personalization and give them segmentation at their fingertips, maybe they'll put more information in the CRM. It's part of any sales and marketing person's world; one of our many challenges is to get our sales teams to put more information in the CRM, but maybe this will help lead the horse to water.

Taranjeet Kaur (22:01):

So I met with my CFO just last week, and we were talking about the things that are top concerns for 2025, and he said data is number three on his list of top concerns. And that speaks volumes, because no analytics or AI is possible without an emphasis on data. So yes, data strategy should lead AI, and that's where organizations are heading. And that's where your ROI comes from. If you're not clear with your data, I think your ROI is going to diminish.

Steven Cooley (22:35):

Have you had to staff or find third party vendors to assist you with any of these AI projects? I mean, all the things we've talked about probably involve three to five different positions from analysts to data scientists to

Alec Hanson (22:51):

Yeah. So I agree with Dan on the last question: getting your data in a great place is massively advantageous, and it's going to lead to this. At the same time, having data in the right place and not using it is also a massive problem. We have a thousand distributed loan officers across the country, and we have our call centers and all that business. Getting a loan officer to use a CRM, and we partner with Salesforce, speaking of good partners, is really difficult. But when you have prompt-based communication and the ability to run reports, now I can take a 50- or 55-year-old, 20-year veteran in the field, give them a tool they can actually use, and now the data becomes actionable. I get to come along behind them and prompt and suggest and put things in front of them that say, you should push this button and do some cool stuff, or I'll just push it for you. Those kinds of things are taking the data and putting it in a format where we can actually use it. Now Gen AI is going to come through and be that assistant to empower people to be more and more productive.

Dan Vasquez (23:52):

I think we lean really heavily into our partners in this space. I think you have to. It's such a big, specialized, and new domain. I mean, if you have all of the talent in-house to do this, and do it well, God bless you. But that's a pretty big challenge. If I look at Slack, of the top people I talk to, maybe three of them are from Rocket, and the rest of the top ten are from Microsoft, Amazon, OpenAI, Anthropic right now.

Gemma Currier (24:23):

Yeah, definitely. For Guild, we are in partnership with vendors and technology companies, we have a team on site, and then we're really looking at co-creation. So hybrid: partnering the team and a vendor together.

Steven Cooley (24:43):

Do you hire people?

Taranjeet Kaur (24:44):

Oh, yeah, not dedicated. I think Alec said it: it's a massive investment for it to be a primary focus for your organization. We have not hired people, but we have vendors who are using AI, and then prompt engineering. I think that's quite a bit.

Steven Cooley (25:06):

No, that's what I was going to touch on next. And I mean, even from a marketing perspective, if you work with marketing agencies in mortgage, you have to really get involved in that process because of compliance and whatnot, and AI's not going to be any different. And so, Dan, you touched a bit earlier on prompts, and this kind of gets glazed over, I think, in AI conversations: the importance of developing the prompts, which ultimately create an objective and an outcome. And then in our instance, we have a compliance factor as well. So if we could chat a little bit about how you work with your third-party vendors to ensure that you are getting the outcomes you want through prompt development while also maintaining a state of compliance.

Dan Vasquez (25:55):

It really is a bit of an art.

(25:58):

Anthropic recently made public their Claude system prompts, and if you haven't had a chance to check that out and you're interested, I would definitely look it up. It is very interesting reading on how the professionals are doing this. You get into the ReAct pattern, you get into chain-of-thought reasoning, and you really start to instruct these things to explain themselves well and follow certain steps of logic. It's like you're teaching an intern, and that's what you want. I mean, if you want this thing to act like a team member and act on your behalf, you want to train it like you would train a human. And it starts with the prompting. But where we're working with the partners and really diving to that next level is, prompting will get you very far, but it's not going to get you all the way.

(26:48):

At what point do you make the decision that it's time to pick a different model? It's time to fine-tune, maybe even time to start exploring creating your own model. Although personally, I think the use case for that, at least in this industry, is probably not there yet. But if you had asked me six months ago, I'd have said we will never do that, and if you ask me now, I'd say in five years I wouldn't be shocked, although I don't have a good reason to do so yet. It wouldn't surprise me if one showed up.
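
As a rough illustration of the prompting style described here, a chain-of-thought-flavored system prompt for guideline questions might look something like the sketch below. The prompt wording and message format are illustrative assumptions, not any lender's actual prompt, and the model call itself is left to whichever provider SDK an organization uses.

# Hypothetical sketch of a system prompt that forces step-by-step reasoning,
# verbatim citations, and an explicit escalation path instead of guessing.

SYSTEM_PROMPT = """You answer mortgage guideline questions for underwriters.
For every question:
1. Restate the question in one sentence.
2. Quote, verbatim, the guideline passages you rely on.
3. Reason step by step from those passages to a conclusion.
4. If the passages do not clearly support an answer, reply exactly: ESCALATE TO HUMAN."""

def build_messages(question: str, guideline_excerpt: str) -> list:
    """Assemble a provider-agnostic chat payload (system + user messages)."""
    user = f"Guideline excerpt:\n{guideline_excerpt}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    question="Can boarder income be used to qualify on this program?",
    guideline_excerpt="(the retrieved guideline section would be pasted here)",
)
print(messages[0]["content"])  # inspect the instructions before sending them to a model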

Gemma Currier (27:19):

From a prompt perspective, I think it's important, with the AI chatbot that we've rolled out, to have that thumbs-down, that feedback loop back to the teams, to understand the content and how it's coming through. Half the time, if not more, it's because the prompt was not really a prompt. So really it's an education piece. It's having a dedicated person or team creating a process where we can get that feedback loop and we know what's not working in the model and where we need to educate. So I think it's really about putting in infrastructure. And it's just interesting that the jobs are changing out there already; it's fewer people answering questions and maybe more people double-checking the thing that's answering questions. So I think it's making sure that you've got a really strong process for how to do that.

Taranjeet Kaur (28:26):

We have not explored prompt engineering for mortgage, but we have Copilot use cases, like Microsoft 365 Copilot and GitHub Copilot. Those are kind of low-hanging fruit when it comes to prompt engineering and what you can do with a GPT kind of model, where something is doing pair programming with you or writing an application with you. I think those would be something everybody should explore if they're in an engineering world.

Alec Hanson (28:55):

That's all I was going to say: I think prompt writing is a skill set that everybody's going to get better and better at over time. It's like a new little language. I joke with my wife, she's the worst Googler in the world. She can't look anything up. Twenty years of marriage, I get to say that now. But I'll jump on Google, and within a few searches I've found what we're looking for and we're moving on. So with prompting, yeah, some people will be terrible at it; they've never done it. They'll get false positives and wrong results. They'll get the wrong feedback. But better and better people and more competency will come into the models, and it'll be a thing people, in my opinion, will become pretty competent at.

Steven Cooley (29:33):

I'm betting it'll probably be a job.

Dan Vasquez (29:36):

It is a job. It's

Steven Cooley (29:39):

On every level, I would imagine. And the compliance factor is another one. I bring it back to social media: it didn't ring the bell of compliance offices until much later. Social media kind of ran wild, and then all of a sudden people said the wrong things, and then compliance got heavily involved. So how do we navigate and control and manage AI and help our compliance folks understand what we're really signing up for?

Alec Hanson (30:10):

You really want to talk about compliance? It's a fun topic to really drive it home right before cocktails.

Steven Cooley (30:15):

Just for 60 seconds. Yeah,

Alec Hanson (30:17):

Yeah. I mean it's a gigantic risk for every organization. I mean, let's just be honest. First of all, loan officers have been saying random stuff to everybody forever. It just wasn't recorded on the internet.

(30:27):

So now that it's there for everyone to see, it's a giant issue, and you've got to walk into this with a ton of intentionality and really slow down. That's why the mortgage industry is never known for its innovation; it's going to be regulated and slowed down and governed through this process too. And I think the companies that take the most thoughtful, strategic approach won't end up in a newspaper somewhere with some big situation to manage because somebody said the wrong thing and now it's all over the internet. So I think it's inevitable; I think some companies are going to step right through it and move too fast. But it's just going to be a really thoughtful job of how to step into it.

Dan Vasquez (31:07):

We've seen Gen AI honestly be more of a pro than a con when it comes to compliance, because it's actually helping our compliance team, like you said, with what's being said on the phone. Now we have the ability to, in real time, transcribe, tag, and search against different policies, different internal or external rules that may be being broken at any given time, and send out real alerts. You've got an eye in the sky on every single conversation, at scale, at all times now with Gen AI. Our compliance team loves that.
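
A bare-bones sketch of that "eye in the sky" idea: scan a call transcript for phrases a compliance team has flagged and emit alerts. The flagged phrases and policies below are made-up examples; a real deployment would pair live speech-to-text with a Gen AI model and the organization's actual policy library.

# Hypothetical sketch: rule-based flagging over a call transcript. The phrases
# and policies are invented for illustration only.

FLAGGED_PHRASES = {
    "guaranteed approval": "No loan outcome may be guaranteed.",
    "skip the appraisal": "Appraisal requirements cannot be waived by the loan officer.",
    "don't tell the underwriter": "Possible instruction to withhold information.",
}

def scan_transcript(transcript: str) -> list:
    """Return an alert for every flagged phrase found in the transcript."""
    lowered = transcript.lower()
    return [
        {"phrase": phrase, "policy": policy}
        for phrase, policy in FLAGGED_PHRASES.items()
        if phrase in lowered
    ]

call_text = "Good news, this is basically a guaranteed approval, so don't worry about the rest."
for alert in scan_transcript(call_text):
    print(f"ALERT: heard '{alert['phrase']}' -> {alert['policy']}")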

Steven Cooley (31:37):

Alright, sure.

Taranjeet Kaur (31:39):

So as I understand compliance and Gen AI coming together, I think there will be a shift in how we view compliance and the processes around mortgage and compliance, because the easy stuff, the pattern-based document reading or any kind of legal clauses that you have to read in compliance, would be taken over so easily by Gen AI.

(32:00):

It would be the hard stuff, which is, hey, were you able to read the sentiment the customer

(32:06):

was conveying? I think there will be a shift in how compliance or risk views our interactions as a mortgage company. Absolutely. So I think that would come, and Gen AI will take over those cases

Gemma Currier (32:19):

As an organization, I think every company should have some sort of AI risk model, or way that they're looking at AI, and make sure there are multiple stakeholders in that process. So when you think about compliance, you think about data privacy concerns, you think about fair lending and bias, and you think about, if you were to make decisions off of it, explainability, all these other things. So I think having a good risk model, understanding what your organization is and isn't comfortable with, and putting decisions through that process is an important part of this.

Steven Cooley (33:02):

One of the great use cases for AI is marketing segmentation and personalization and, oh, what was the other, where's the word? I lost it. I'll find it, don't you worry. Personalization? No, I think I said all the stuff. Oh, personas, marketing personas. I like that one a lot too. Has that been something that you've explored and had success with? Personas in particular, I think, are an interesting way to leverage this technology.

Alec Hanson (33:36):

Yeah, I mean, if we want to really cast a vision of this crazy dystopian future, I think AI avatars are going to live in every one of your social media feeds, and I'm going to keep feeding you the right avatar until you find the one you like best, and then that's going to be the one that delivers all your ads. And I think it's going to turn into a really wild place where you can then call up or direct message and talk to that avatar and get what you need. So yes, it's going to open up wild new areas for all of humanity to manage against. I can't wait for AI to be calling AI to set appointments. It's all going to happen, and we're just going to be along for a little bit of the ride. But at the same time, I like to look at this through the eyes of a consumer.

(34:20):

We're all a little creeped out when we know Alexa's listening to us and then we get fed an ad the next minute for something we said to our spouse. At the same time, for some of us there's a little bit of, oh yeah, that does solve a problem for me, I didn't really want to deal with it, and now my problem's solved. And consumers tend to appreciate when things are solved very easily and they didn't have to do a bunch of heavy lifting. I think that's where AI is going to come in and deliver some of that solutioning, and marketing's going to get more sophisticated and hopefully be less intrusive and creepy. But we'll see.

Gemma Currier (34:53):

I think moving as fast as consumers are willing to play the game with you is important. Where you might be creeped out by something today, in two years that might be totally normal. So understanding the ability and desire of a consumer to actually engage with AI, or an AI call bot or a chatbot or whatnot, all those things will probably be very widely used in the future, but trying to see how consumers actually feel about that and walking that slow walk until we all feel comfortable with it.

Dan Vasquez (35:32):

Yeah, that's part of those personas. One of the first personas is, Hey, is this somebody who wants to engage digitally and be self-service or is this somebody who wants to talk to a human yesterday? I mean, that's one of the first opportunities to identify and then you can plan your journey from there.

Taranjeet Kaur (35:50):

So, a few wins for AI: I think a personalized communication channel through my emails or through any marketing campaign, and even when I'm calling a customer center, those are really good wins with AI, when you can really have that personalized experience.

(36:06):

And now if you merge it with your data, hey, my loan-to-debt is terrible, recommend me something that consolidates my debt and recommend me a rate at which I can consolidate my debt, I think those would be great wins for AI.

(36:20):

So that persona piece actually feels really promising.

Steven Cooley (36:24):

Drives efficiency too, right? Yeah, everybody gets to do a little less and achieve a little bit more. Well, this is the grand finale question, and then we'll let everybody go. There are lots of Gen AI applications, and 30% of folks are going to abandon their stuff, but not us. Let's talk about what Gen AI application we believe is going to create the most impact in the mortgage business in 2025.

Dan Vasquez (36:53):

We talked about multi-agent orchestration; I think it came up on this panel, I know it came up on the last one. But I think about the evolution from RAG-based document search. Right now, Gen AI use cases are very much ask a question, get an answer, and I think we're going to quickly evolve into give a command, get results, right? You're going to see Gen AI use cases able to actually take action on behalf of users, on behalf of team members, honestly, in a lot of ways. In many cases we already have these APIs, we already have this automation; we just haven't quite orchestrated it and hooked it up to that front end.

(37:28):

So you understand that intent with a simple human prompt and then you can actually take the action underneath. I think that's going to unlock a ton of value. Love it.
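
A stripped-down sketch of that "give command, get results" pattern: the model's only job is to map a plain-language request onto one of the actions the company already exposes, and the existing automation does the work. The tools, intents, and the classify_intent() stand-in below are all hypothetical.

# Hypothetical sketch of intent-to-action orchestration. classify_intent() stands in
# for the Gen AI step; the two "tools" stand in for APIs a lender already runs.

def order_payoff_quote(loan_id: str) -> str:
    return f"Payoff quote ordered for loan {loan_id}."

def schedule_callback(loan_id: str) -> str:
    return f"Callback scheduled for loan {loan_id}."

TOOLS = {
    "payoff_quote": order_payoff_quote,
    "schedule_callback": schedule_callback,
}

def classify_intent(command: str) -> str:
    """Stand-in for the model call that maps free text to a known tool name."""
    return "payoff_quote" if "payoff" in command.lower() else "schedule_callback"

def handle_command(command: str, loan_id: str) -> str:
    tool_name = classify_intent(command)   # the model decides *which* action to take
    return TOOLS[tool_name](loan_id)       # the existing, governed API does the work

print(handle_command("Can you get me a payoff quote for this borrower?", "LN-1042"))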

Taranjeet Kaur (37:36):

So before I answer that question, the thought that I want to leave you with today is this: is Gen AI an iPhone moment, or is it a blockchain moment?

(37:51):

If you think it's an iPhone moment, because the power is in your hand and you can do a lot with it in your day to day, I think that's when you start to unlock Gen AI for your mortgage solutions as well. Because I'm from a bank and we have 180 years of mortgage documents, I think about legal and compliance, how we are going to interact with them and how our use cases are going to shift because of AI applications. They're going to read so much; they're going to take all that structured and unstructured data, the documents, the rules and regulations, and make the processing so much faster and more streamlined. I think that could be something this industry really wants to look forward to, because we deal with so much of it day to day. It would be a great win for the mortgage industry, and I will personally be rooting for it to happen.

Gemma Currier (38:51):

I agree with Dan, in terms of taking that AI bot and then taking the next action. So not just a summary, but okay, now I have this, now what do I want to do with it? Is it all going to happen in 2025? I don't know, because people have to have the right kind of infrastructure in order to get the summary in the first place to take the next action. But I do think that's a big thing to look at and see: is that really going to change not only the efficiency, but how people use the platforms they use today? Are they going to skip that screen and instead ask the question and ask for some action to happen? So I think it's going to be an interesting thing to keep our eye on.

Alec Hanson (39:36):

Yeah, I wrote loans for eight years in the glory days, when we had to stack paper and have a stacking order and do all that fun stuff. I think AI will dramatically change the fulfillment process for loans; I think it'll make it unbelievably different than today, and I think it'll delight the customer. But on the theme everyone's talking about, I think one of the biggest ROI slash enhancements today will be the customer experience with a fully autonomous AI assistant, so to speak, kind of driving the transaction, where you can in real time, with voice, get updates and communicate with all parties: loan officer, customer, assistant, manager, executive, down to the loan officer level, where a customer can actually have this seamless engagement across their whole mortgage experience, with transparency. And that, for all of us as consumers, is something we all want today. You have to call somebody to find out where your loan is, or you have to log in and look at something. The ability to just ask, where is it, or have it proactively tell you, is going to change the game for all of us.

Steven Cooley (40:41):

Excellent. Well, I'd like to thank our panelists and I appreciate everyone's attendance. Thank you so much.