Loan Think

Want to see what's coming next in AI regulation? Look to Europe

If what's past is prologue, then the U.S. mortgage industry should be looking to the European Union to see what might be coming next in AI regulation. In April, the European Parliament is expected to adopt the European AI Act, a new regulation that could well become the framework the rest of the world relies on in developing its own AI rules.

Certainly, that was the case in 2018, when the EU's General Data Protection Regulation (GDPR) took effect; it has since become the model that several U.S. states have used to develop their privacy regulations. The European AI Act will apply broadly to any AI system developed or used in the EU. As is the case with GDPR, U.S. companies will come under this regulation if they do business in the EU.

The new regulation generally categorizes AI risk into four broad levels: 

  • Unacceptable Risk - Completely prohibited
    • Examples: Social scoring by governments, systems that manipulate behavior
  • High Risk - Permitted subject to strict oversight
    • Examples: HR recruiting, credit scoring, underwriting
  • Limited Risk - Permitted with specific transparency requirements
    • Examples: Chatbots, AI-generated content
  • Minimal Risk - No restrictions
    • Examples: Spam filters

The most detailed framework is for high-risk AI systems, a category that covers use cases, such as underwriting and credit scoring, that are potentially of high value to lenders. Specifically, the rules emphasize avoiding bias by ensuring the quality of the data sets used to train the algorithms; being able to trace and explain the reasons behind AI decisions; and, finally, assuring a high level of accuracy.

AI users would also be required to demonstrate that they are providing clear and adequate information to consumers and that they are taking prudent steps to ensure privacy and security.

Limited-risk systems include AI chatbots, which many lenders are exploring to provide timely, high-quality customer service. Requirements for limited-risk systems focus primarily on disclosure, ensuring that consumers are aware that they are interacting with AI and have the ability to opt out and communicate with a person. Lenders will want to ensure that they are following EU requirements if their chatbots are accessible within the EU.  

Following the EU's lead?
In the five years since GDPR took effect, a number of U.S. states, including California, Vermont, Massachusetts and Colorado, have incorporated elements of its privacy standards into their own regulations. Similarly, large U.S. companies with global footprints, including many in the financial services space, have developed their privacy practices to broadly comply with GDPR. Many observers expect this will be the case with AI as well.

Currently, U.S. regulators at both the federal and state levels have been relying on existing laws, such as the Consumer Financial Protection Act with its UDAAP provisions, to regulate the use of AI in financial services. Often, these regulators focus on the same concerns addressed in the AI Act, but in a more piecemeal fashion.

For example, at the local level, New York City's Automated Employment Decision Tool Law requires an annual audit of AI tools to root out bias. It also requires disclosure to applicants that AI or machine learning will be used to evaluate them. Privacy and security protections are mandated by the law as well.

Similarly, Colorado adopted the Algorithm and Predictive Model Governance Regulation in 2023 to ensure life insurers' use of AI models does not result in unfairly discriminatory insurance practices with respect to race.

At the federal level, various agencies have already issued stern warnings that lenders using AI should take care not to violate consumer protection laws in the process. In April, the CFPB and its federal partners publicly pledged that the use of automated systems and advanced technology would not be accepted as an excuse for lawbreaking behavior or discriminatory outcomes that threaten consumers' financial stability.

This past June, several federal regulators, including the CFPB, jointly proposed a rule to ensure home valuations that use AI technology are fair and nondiscriminatory. The proposed rule specifically focused on the risks posed by algorithmic appraisals, including the potential for bias to be baked into the computer models themselves.

Adding to its 2022 guidance on the use of complex algorithms in credit decisions, the CFPB published a circular in September 2023 on adverse action notices and credit denials when AI is used. 

Finally, this past fall, the Biden administration issued a wide-ranging Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

In all, the order requires more than 100 specific actions from over 50 federal entities. Although broader in scope than the EU's AI Act, the executive order targeted the same core issues addressed in the proposed EU regulation: AI bias, consumer protection, privacy and security, and permissible government use of AI. It also specifically called for international leadership on these issues. So, if the financial services industry is serious about preparing for new AI regulation, a careful reading of the new AI Act might just be the place to start.
