The law center noted that there isn’t even agreement on the definition of “artificial intelligence,” which adds to the concerns about how it is used.
“The lack of a definition for AI is understandable, but it is also problematic,” the group wrote. “There may be incorrect assumptions that the use of AI necessarily makes a system more accurate or predictive, or that it is unbiased and unquestionably fair.”
The pro-consumer legal organization said public perception of what constitutes AI has been heavily influenced by Hollywood movies such as “2001: A Space Odyssey” and the Terminator series. “Many think of AI as incredibly human-like and sentient, which is very far from current reality,” it said.
State Street Corp., one of the largest banks in the U.S. with nearly $317 billion in assets, told regulators that in its experience, AI and machine learning models may face data quality challenges, including bias introduced by mislabeled data or embedded in data provided by a third-party vendor.
‘Hard issue to regulate’
Jo Ann Barefoot, a former deputy comptroller of the currency and Senate Banking Committee staff member who now leads the Alliance for Innovative Regulation in Washington, said there are numerous possible benefits to the use of AI in credit underwriting. But regulators need to ensure that banks comply with fair lending laws and that machine learning doesn’t lead to denials of credit based on prohibited reasons such as race and gender, she said.