Four agencies responsible for enforcing consumer protection and anti-discrimination laws issued a statement Tuesday warning that automated systems have the potential to perpetuate bias, and that companies offering such products can be held liable for wrongdoing.
In an interagency statement, the Consumer Financial Protection Bureau, the Civil Rights Division of the Department of Justice, the Federal Trade Commission and the Equal Employment Opportunity Commission reaffirmed their commitment to ensuring that automated systems impacting housing, employment and credit opportunities are developed and used "in a manner consistent with federal laws."
During a press briefing, Rohit Chopra, director of the CFPB, said the joint statement is being issued to "make one point clear that there is not an exception in our nation's civil rights laws for new technologies and artificial intelligence that engages in unlawful discrimination."
"Companies must take responsibility for the use of these tools," he said. "Unchecked AI poses threats to
There was no mention of whether any recent enforcement actions have been taken against vendors or companies offering or using automated systems.
The agencies expressed concern that automated systems, which rely on vast amounts of data to make recommendations or decisions, have the potential to produce biased outcomes.
Specifically, the agencies outlined that non-representative datasets used by automated systems may perpetuate discrimination, and that there is a lack of transparency around how these "black boxes" function. This, the agencies say, "makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair."
"We have come together to make clear that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws," said Charlotte A. Burrows, Chair of the EEOC, in a written statement."We will continue to raise awareness on this topic; to help educate employers, vendors, and workers; and where necessary, to use our enforcement authorities to ensure AI does not become a high-tech pathway to discrimination."
It is not the first time these agencies have raised the alarm about potential ingrained bias in AI systems.
"Using a complex algorithm is not a defense against providing accurate explanations," Chopra said Tuesday.
The bureau also plans to roll out a rule "to make sure artificial intelligence and automated valuation models in residential real estate have basic safeguards when it comes to discrimination," Chopra noted.