One of the key areas that needs to be deeply explored from a Human/AI and anthropological perspective is trust. There are many unknowns and much misinformation about AI, which has contributed to the fear you see and hear in the news. This fear is further fuelled by the way machines and AI are portrayed in movies like Terminator, I, Robot, and Transcendence.
To build trust, we need to address the underlying concerns that people have about AI.
Last month, I gave a talk on Human AI Interactions to a group of Canadian regulators, lawyers, and entrepreneurs in Montréal. They wanted to better understand the emerging trends and the concerns that people have with the implementation of AI.
The talk explored the four themes below: privacy, the ethical use of AI, bias, and the future of work.
From a broad, day-to-day perspective, privacy is the concern at the forefront of most discussions. At an individual level, the question that people continually ask themselves is: how much information am I willing to sacrifice for convenience?
Online, this relates to every site you visit… in the physical world, it relates to stores like Amazon Go, services like Smile to Pay, and the use of behavioural tracking in retail spaces.
From what I’ve heard in interacting with users, people’s discomfort centres on the type of data being gathered and on whether they control if that data is sold to third parties [source].
I’ve heard that people are somewhat comfortable sharing non-corporeal information: credit card numbers, the paths they take in a store, and the products they purchase. There is less comfort when information about physical characteristics is shared: eye tracking, retinal scanning, facial recognition, and emotion tracking.
The documentary short “Stealing Ur Feelings” uses humour to show the science and implications behind facial emotion recognition.
From a governance perspective, the laws are either grey or non-existent, as you can see in my post about children & privacy. This leads to the question: how do we responsibly collect data so that we don’t violate privacy?
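One hedged way to think about that question is to make consent the gatekeeper in code, so that data a person hasn’t opted into never reaches storage. The sketch below is a thought experiment in Python; the field names and consent categories are my own illustrative assumptions, not drawn from any regulation, framework, or product.

```python
# A minimal sketch of consent-gated data collection. Field names and
# consent categories are illustrative assumptions only.

CONSENT_CATEGORIES = {
    "purchase_history": "behavioural",
    "store_path": "behavioural",
    "facial_image": "biometric",
    "emotion_score": "biometric",
}

def collect(raw_event, granted):
    """Keep only the fields whose consent category the user has granted;
    everything else is dropped before it ever reaches storage."""
    return {
        field: value
        for field, value in raw_event.items()
        if CONSENT_CATEGORIES.get(field) in granted
    }

event = {
    "purchase_history": ["milk", "bread"],
    "store_path": ["entrance", "aisle 3"],
    "facial_image": b"<raw pixels>",
}

# A shopper who consents to behavioural tracking but not biometrics:
print(collect(event, granted={"behavioural"}))
# {'purchase_history': ['milk', 'bread'], 'store_path': ['entrance', 'aisle 3']}
```

The design choice worth noting is deny-by-default: a field with no known consent category is dropped, so the system must justify every field it keeps rather than every field it removes.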
The ethical use of AI is about ensuring that AI is being used in ways that are transparent, responsible, fair, and equitable.
The concern over ethical AI has been brought to the forefront by the emergence of technologies like deepfakes, object- and people-tracking systems, and automated decision-making, and by the worry that we will become dependent on systems that don’t represent entire groups of people.
In addition, a very real and very big concern that people have is the use of AI in military, defense, and border-crossing systems. On Oct 31, 2019, the US Department of Defense released a draft of its Recommendations on the Ethical Use of Artificial Intelligence (pdf). It’s a first step towards acknowledging the need for ethical use in military systems, but at the same time discouraging because the perspective taken is very America-centric and assumes moral and ethical leadership.
AI should have global ownership; so how do we build a democratic, ethical AI that includes global diversity?
Bias is a concern that spins off from the ethical use of AI.
Bias came to the forefront with the New York Times analysis of facial recognition software used by companies like Google and Amazon and how it misidentifies older women and women of colour [source].
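To make that concrete, here is a minimal, hypothetical audit in Python. The numbers are invented to mirror the kind of gap such analyses report, not measurements of any real system; what matters is the method of disaggregating error rates by subgroup instead of trusting a single aggregate accuracy figure.

```python
from collections import defaultdict

def error_rates_by_group(predictions):
    """predictions: iterable of (subgroup, was_correct) pairs.
    Returns the error rate for each subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_correct in predictions:
        totals[group] += 1
        if not was_correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented results for illustration: one subgroup bears almost all
# of the failures, even though overall accuracy looks respectable.
results = (
    [("lighter-skinned men", True)] * 99
    + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 65
    + [("darker-skinned women", False)] * 35
)
for group, rate in sorted(error_rates_by_group(results).items()):
    print(f"{group}: {rate:.0%} error rate")
# darker-skinned women: 35% error rate
# lighter-skinned men: 1% error rate
```

In this invented example, the model is 82% accurate overall, a number that completely hides the 35% failure rate for one group.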
But I think the scope of this is much greater. As an anthropologist, I look deeply at how various cultures view technology: from the willingness of Japanese elders to embrace robot care assistants because of tsukumogami, to how Indigenous cultures see spirit and humanity’s position in Earth’s ecosystem.
AI should be designed for everyone; so how do we remove culture-specific biases as we develop it?
People are concerned about the future of their jobs, specifically in traditionally white-collar fields like law, accounting, and medicine.
Long term, as the use of AI grows, the future of work will involve a lot of knowledge and thought work; machines will do the heavy lifting of monotonous tasks and number crunching, while humans will add depth, colour, context, and humanity.
This shifts the worker paradigm from doers to creators, advisors, and problem solvers: people with skills in creative thinking, critical problem-solving, and curiosity. It also requires upgrading and properly funding education programs that have been weakened by continual cost cutting.
The risk of automation further threatens vulnerable positions and people who already don’t make a living wage. How will countries fairly distribute wealth and resources to protect the health and welfare of their people?
Many of the trending concerns listed above are surfacing because we are rapidly forging ahead with AI without building accountability into technical decisions. Because of this, we are changing the human-machine dynamic and struggling to find a balance that maps to our moral, legal, and cultural frameworks.
How do we build trust in a world where success is tied to secrecy?
Building trust in AI challenges our ability to be open and transparent, to predict outcomes, to ask the right questions, and to understand the problems that we’re trying to solve with technology.
Is the fear justified?
Yes… but not because of a dystopian perception of machines. AI is not mature enough to make decisions on its own; it is humans who program machines to make decisions. As such, responsible AI should be rooted in humanity and needs people to be both transparent and accountable for technical decisions.
Credit: BecomingHuman. By Sharlene McKinnon.