If you have people making decisions from data, you probably need artificial intelligence (AI).
The sensational headlines this week are about generative chatbots – software that composes answers by stringing words together in a plausible way, whether or not what it says is true.
Chatbots do have their uses. You might want to have a web page that takes customers' questions in plain English and answers them. Generative technology can be useful on the input side, for recognizing different ways of wording a question, but the answers have to be controlled. When a customer asks for his loan balance, the chatbot must actually look up the balance, not just make up something that uses words in a plausible way. Even if the computer misunderstands the question, it must not spout falsehoods.
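As a concrete illustration, here is a minimal sketch in Python of that division of labor. The classify_intent function and the account table are hypothetical stand-ins for a trained model and a real database; the point is that the generative technology only interprets the question, while the answer comes from an authoritative lookup.

```python
# A minimal sketch of a "controlled answers" chatbot. The intent
# classifier and account data are hypothetical stand-ins.
ACCOUNTS = {"C1001": 2350.75}  # made-up account balances

def classify_intent(question: str) -> str:
    """Stand-in for a trained intent classifier (the generative part)."""
    if "balance" in question.lower():
        return "loan_balance"
    return "unknown"

def answer(customer_id: str, question: str) -> str:
    intent = classify_intent(question)
    if intent == "loan_balance":
        balance = ACCOUNTS[customer_id]  # authoritative lookup, not generated text
        return f"Your loan balance is ${balance:,.2f}."
    # If the intent is unclear, refuse and escalate rather than improvise.
    return "I'm not sure what you're asking. Let me connect you with an agent."

print(answer("C1001", "How much do I still owe on my loan?"))
```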
But chatbots are just one tiny part of AI. They are one application of a broader technology called machine learning.
Machine learning means getting software to recognize patterns and train itself from data. It is very useful for finding statistical regularities and estimating probabilities; at bottom, it is statistical regression, greatly expanded into many dimensions. Neural networks are one kind of machine learning. They are multi-layer statistical models, not models of brains.
The results of machine learning are only probable, not certain. You have to be ready to live with inaccuracy. Fortunately, people recognize that the answers aren't coming from a conscious human mind, so they find it easier to treat those answers with caution. Machine learning will tell you whether a borrower is probably a good risk. It will not tell you for certain exactly what that borrower will do. That is easy to understand, and useful.
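To see what that looks like in practice, here is a minimal sketch using scikit-learn; the features and figures are made up, and a real model would train on far more data.

```python
# A minimal sketch of probability estimation with machine learning.
# Training data are hypothetical: [income in $1000s, debt-to-income ratio].
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[80, 0.10], [45, 0.55], [60, 0.30], [30, 0.70],
              [95, 0.20], [40, 0.60], [70, 0.25], [35, 0.65]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)

# The output is a probability of repayment, not a guaranteed outcome.
applicant = np.array([[55, 0.40]])
prob = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of repayment: {prob:.2f}")
```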
Apart from inaccuracy, the big risk with machine learning is that it will learn the wrong things – specifically, that it will pick up prejudices hidden in its training data and make biased decisions about people.
How strongly you guard against this depends on what you are using machine learning for. If you're just plotting an advertising strategy or making predictions internally, the prejudiced computer may not violate laws or regulations – but if it's making decisions about people, it certainly will. The cure is twofold: block inappropriate information from being used, so the machine learns only from data you're entitled to use, and test the results to see whether the system is in fact biased. You usually cannot look inside the machine learning system to find out what it learned, because the patterns are hidden in matrices of numbers.
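One common test, sketched below with hypothetical data, is to compare the system's approval rates across a protected group that was deliberately excluded from training. The four-fifths comparison used here is one rough rule of thumb drawn from disparate-impact analysis, not a definitive legal standard.

```python
# A minimal sketch of a bias test on model outputs. The decisions and
# group labels are hypothetical; the protected attribute was NOT a
# model input, but the model may have learned a proxy for it.
import numpy as np

approved = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0])  # model decisions
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"Approval rate, group A: {rate_a:.0%}; group B: {rate_b:.0%}")

# A large gap suggests the model learned something correlated with the
# protected attribute, even though the attribute itself was blocked.
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:
    print("Warning: disparity exceeds the four-fifths benchmark.")
```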
But even that isn't all of AI. Traditionally, AI comprises all uses of computers that are based on the study of human thought. That includes some technologies that are not in today's limelight but are very applicable to finance. They revolve around knowledge-based systems and explicit rules for reasoning.
One time-honored method is knowledge engineering: Get a human expert, such as a loan underwriter, to work through a lot of examples and tell you how to analyze them. Then write a computer program that does the same thing, and refine it with help both from the human expert and from statistical tests. The result is likely to be a rule-based, knowledge-based system, using well-established techniques to reason from explicit knowledge. And it may well be more accurate and reliable than the human expert, because it never forgets anything. On the other hand, unlike the human expert, it knows nothing that was not built into it.
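Here is a minimal sketch of what such a system might look like. The rules and thresholds are hypothetical; a real underwriting system would have many more, each traceable to the expert who supplied it and validated against data.

```python
# A minimal sketch of a rule-based underwriting system built by
# knowledge engineering. Every rule is explicit and human-readable.
def underwrite(income: float, debt_ratio: float, years_employed: float) -> str:
    """Apply hypothetical expert-elicited rules in order."""
    if debt_ratio > 0.5:
        return "decline: debt-to-income ratio too high"
    if years_employed < 1 and income < 40_000:
        return "refer: short employment history and low income"
    if income >= 40_000 and debt_ratio <= 0.35:
        return "approve"
    return "refer: needs human review"

print(underwrite(income=52_000, debt_ratio=0.30, years_employed=4))  # approve
```

Because every rule is written out, the system can always explain its decision, and it never forgets a rule – but it also never invents one.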
Knowledge engineering mixes well with machine learning approaches that output understandable rules, such as decision trees. There are also ways to probe a machine learning system to extract explicit knowledge from it; this is called explainable AI (XAI).
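For example, scikit-learn can print a fitted decision tree as explicit if/then rules that a knowledge engineer can inspect and refine; the training data below are made up.

```python
# A minimal sketch of machine learning that yields human-readable rules:
# export_text prints a fitted decision tree as if/then conditions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.array([[80, 0.10], [45, 0.55], [60, 0.30], [30, 0.70],
              [95, 0.20], [40, 0.60], [70, 0.25], [35, 0.65]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income_k", "debt_ratio"]))
```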
Of course, knowledge-based systems face a pitfall of their own that we recognized long ago: "As soon as it works reliably, it's no longer called AI!" But we're in business to make good decisions, not to impress people with magic.