Is artificial intelligence a kind of wise adult, the kind you might turn to, say, for guidance about whom you should marry? “It’s not so hard to see,” the historian Yuval Noah Harari has argued, “how AI could one day make better decisions than we do about careers, and perhaps even about relationships. But once we begin to count on AI to decide what to study, where to work, and whom to date or even marry,” he added, “our conception of life will need to change.”
Rather than change our conception of life, though, what we might need to change, and urgently, is our perception of AI.
It’s actually quite hard to see how AI might make “better” decisions than we do about whom we might marry, given that many of us have very different notions of what “better” would be. AI works well in contexts with easily quantifiable objectives; in areas where social norms are shifting, however, where there is no clear societal consensus on the right thing to do, handing the decision-making over to AI tools might hinder us rather than help.
Artificial intelligence and its machine learning subset are powerful but limited tools. We need to understand their limitations (as much as their abilities) if we want those tools to be helpful to humanity.
First, current AI models work well on specific tasks; their effectiveness is generally not transferable to different tasks. Sometimes, the effectiveness doesn’t transfer even to the same task: A predictive model that works really well on one data set, for example, might not be nearly as accurate in another. Researchers talk about models being “brittle”—breaking down easily.
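That brittleness is easy to demonstrate in miniature. The sketch below (a toy illustration, not any real system; the data and the single-feature “model” are invented for this example) fits the simplest possible classifier, a threshold on one number, to one data set, then applies it to a second data set where the same task looks slightly different. Accuracy collapses:

```python
import random

random.seed(0)

def fit_threshold(xs, ys):
    """Pick the threshold that maximizes accuracy on (xs, ys)."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(xs):
        acc = sum((x > t) == y for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, xs, ys):
    return sum((x > t) == y for x, y in zip(xs, ys)) / len(xs)

# Data set A: negatives cluster around 3, positives around 7.
xs_a = [random.gauss(3, 1) for _ in range(500)] + [random.gauss(7, 1) for _ in range(500)]
ys_a = [False] * 500 + [True] * 500

# Data set B: same task, but the feature distribution has shifted upward.
xs_b = [random.gauss(6, 1) for _ in range(500)] + [random.gauss(10, 1) for _ in range(500)]
ys_b = [False] * 500 + [True] * 500

t = fit_threshold(xs_a, ys_a)
print(f"accuracy on data set A: {accuracy(t, xs_a, ys_a):.2f}")  # high
print(f"accuracy on data set B: {accuracy(t, xs_b, ys_b):.2f}")  # far lower
```

A model a thousand times more sophisticated fails the same way when the world it is deployed in no longer matches the data it was trained on.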
Second, human decision-making, with its associated human biases, is baked into multiple layers of AI technology. Humans decide what data to collect in the first place, and what data to leave out. Humans decide how to categorize and label that data. Humans decide on the objectives of AI and the criteria on which to evaluate AI. Subjectivity reflected in data or the AI development process does not disappear simply because the final algorithm embodies a mathematical form.
Third, what AI tools are very good at is identifying patterns in vast data sets. They do that much more thoroughly, much faster, and at far greater scale than human brains can. But they simply identify correlations rather than causation. AI cannot tell the difference between a stereotype and a valid inference. Human expertise is required to separate noise from valuable insights.
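The classic illustration is a hidden common cause. In the invented scenario below (the variables and numbers are made up for illustration), a hidden “season” factor drives two unrelated quantities; a pattern-finder sees only the strong correlation between them and has no way to know that neither causes the other:

```python
import random

random.seed(0)

n = 1000
season = [random.random() for _ in range(n)]            # hidden common cause
ice_cream = [s + random.gauss(0, 0.1) for s in season]  # driven by season
sunburn = [s + random.gauss(0, 0.1) for s in season]    # also driven by season

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strong correlation, yet ice cream does not cause sunburn.
print(f"correlation: {pearson(ice_cream, sunburn):.2f}")
```

Nothing in the correlation itself distinguishes this case from a genuine causal link; that judgment has to come from outside the data.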
Moreover, predictive algorithms are not oracles telling us truth about the future; they tell us how likely it is that something will occur, based on the times when it occurred before. And that likelihood comes within a range, due to the inherent uncertainty in statistical models.
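At its simplest, a predictive algorithm is a base rate estimated from past data, and that estimate carries a margin of error. The sketch below (the counts are invented; the interval is a standard 95% normal approximation) shows how even a bare-bones prediction comes as a range, not a single truth:

```python
import math

occurred, total = 72, 100   # hypothetical: the event happened 72 of 100 past times
p = occurred / total        # point estimate of the likelihood

# 95% normal-approximation confidence interval around the estimate
margin = 1.96 * math.sqrt(p * (1 - p) / total)
low, high = p - margin, p + margin

print(f"estimated likelihood: {p:.2f}, 95% range: {low:.2f} to {high:.2f}")
```

With only 100 past observations, the honest answer is not “72 percent” but “somewhere around 63 to 81 percent,” and a responsible use of the prediction has to carry that uncertainty along.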
Understanding the limitations of AI tools should help us understand where we can usefully deploy them and where we should not. It should help us realize the ways in which AI is not like electricity, or the other transformative forces it has been compared to. Yes, it is powerful; yes, it operates in many different facets of our lives; however, unlike electricity, it contains and perpetuates our human flaws (and our past flaws, at that; it doesn’t necessarily keep up with our current ones).
Machine learning is not a wise adult. It is a smart child who can process vast amounts of information, but who believes everything you tell it. Like a child, AI is very literal and easily misled by data that is biased, not representative, or otherwise flawed. We should respect and deploy wisely its abilities, without bowing down to imaginary powers.
Alice Xiang is a research scientist at the Partnership on AI. Irina Raicu is the director of the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics.