Banks love to brag about how many data scientists they’re hiring and their shiny machine-learning “centers of excellence.” In the 2018 JP Morgan Chase annual report, CEO Jamie Dimon said the company had gone “all in” on artificial intelligence, adding that artificial intelligence and machine learning were “being deployed across virtually everything we do.” Not to be outdone, HSBC has opened multiple “data and innovation labs” around the world, in order to build artificial intelligence tools that can take in the bank’s more than 10 petabytes of data. Citigroup, Bank of America, and Capital One also boast about their artificial intelligence capabilities, particularly to their would-be investors.
Of course, some of this is hype: Banks believe they can get a certain brand patina from looking and acting like tech companies. But as Oxford technologist Nick Bostrom points out, artificial intelligence technology has the potential to go from staggeringly dumb to effectively omniscient rather quickly. The fact that your bank’s chatbot seems pretty lame is no reason to write off what Wall Street is capable of. A report published this month by the Bank of England finds that two-thirds of U.K. banks already use machine learning or artificial intelligence to run their business. The report is one of the first instances of a regulatory agency systematically documenting the widespread adoption of machine learning and artificial intelligence in banking. (U.S. regulators haven’t published similar findings, but it’s clearly happening on a comparable scale here.)
“Machine learning” and “artificial intelligence” aren’t precise terms—the BOE describes artificial intelligence broadly as the development of “computer systems able to perform tasks that previously required human intelligence,” and considers machine learning to be a subcategory of A.I. that recognizes patterns in data. So when banks highlight their machine-learning prowess, they’re sometimes puffing up their bellies about statistical methods that have been used broadly for decades. It can be fun to mock stodgy banker-types for donning hoodies, embracing “disruption,” and trying to outbid Google and Facebook for artificial intelligence experts, but the real risk to society isn’t that banks fail at A.I. It’s that they succeed—and customers lose.
As the BOE explains, “machine learning” isn’t always substantially different from the statistical models banks have used for decades, like the credit scores they develop to predict the likelihood a customer will default on a loan, or the models banks use to predict whether a particular debit or credit card transaction was fraudulent. But that doesn’t mean this is just business as usual. Banks are making real advances, substantially widening the information asymmetry and the gap in bargaining power between banks and consumers. The tricky thing about incremental “innovation” is that society runs the risk of iterating and iterating on our slingshots until we’re left holding machine guns. The world’s largest banks have been consistently disappointing stewards of public trust. As they become experts at predicting, and in turn, influencing human behavior, you can bet your bottom dollar they’ll apply that expertise in unsavory ways. After all, the fact that both bank profits and credit card interest rates are at record highs is a telling hint that the benefits of machine learning in finance aren’t translating into lower prices for consumers.
Consumers should worry about the increasing sophistication of banks’ machine-learning strategies because there are close links between predicting and manipulating human behavior. For example, if I’m a bank that wants to influence you to take a higher-priced loan than you’d otherwise qualify for, I should start by predicting how you would respond to different marketing strategies and combinations of product terms, pitched to you at different times of day, across different channels. As legal scholar Spiros Simitis wrote in 1987, “Information processing is developing [into] long-term strategies of manipulation intended to mold and adjust individual conduct.” And as Shoshana Zuboff points out in The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, the ultimate purpose of machine learning and artificial intelligence by corporations is often to induce “behavior modification at scale,” ideally in ways that are subtle enough that they happen “outside of our awareness, let alone our consent.”
Banks are particularly incentivized to go whole-hog on machine learning because of all the ways financial products vary from most of the other things we “buy.” Consumers rarely pay for bank products like checking accounts or credit cards with one lump sum upfront, the way we buy ice cream or shoes. When a customer opens a new financial product, the bank is typically in the hole from paying marketing or setup costs and has to dig its way out by nudging the customer toward whatever behaviors trigger the fees or interest income. Moreover, financial products are, more often than not, tremendously complicated. For example, the Consumer Financial Protection Bureau has found that the typical credit card in the United States has more than 20 distinct “price points”—separate fees or interest rates contingent on specific ways you might use the card—a level of complexity that raises lots of opportunities for “gotcha” moments induced via behavior modification. Finally, our financial lives generate a staggering amount of valuable data, providing banks with plenty of ammunition. Consider how many times per day you open your bank’s mobile app, or use your credit or debit card, not to mention all the data you’re handing over if you grant the app the right to see your phone’s location data and the data it can grab from cookies as you move across the web.
Between advances in machine-learning technology and the limits of existing law, banks can toggle each customer’s particular payment options, based on all that individualized data, to increase the likelihood that the customer will miss a payment and get hit with a late fee, while calibrating that it happens just infrequently enough that the customer won’t get fed up and close their account. Banks may also tinker with the exact product terms they market to each prospect, suggesting the crummiest deals to those deemed least likely to shop around, and only offering competitive rates to the people they’ve predicted can discern between a better-than-average offer and a worse-than-average offer.
Coming out of their report, the BOE and the U.K.’s Financial Conduct Authority have announced they will “establish a public-private working group on A.I. to further the discussion on ML innovation.” That’s better than what we’re seeing in the U.S., but an exploratory conversation sounds unnecessarily tepid. Regulators should demand proof that banks’ A.I. teams are improving financial services for consumers, rather than just “learning” the best ways to get us to part with our paychecks.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.