The difference between A.I. and humans is how they express their emotions.
To understand that statement, we must be on the same page regarding what human emotions are. I like the simple explanation from the American Museum of Natural History:
“Your brain gets information from two different sources: Your senses tell you what’s going on in the outside world, while your emotions exist inside your body to tell you what these events and circumstances mean to you. Just as hunger motivates you to find food, emotions motivate you to take care of other needs — like safety and companionship — that ultimately promote survival and reproduction.”
There is nothing magical about human emotions. What we feel is caused by a series of neurotransmitters and other chemicals produced by the brain, which activate different parts of the body to make us act and think in certain ways. Most emotions are useful in survival situations (fear, anger, disgust, sadness…). Others, like joy, drive us toward certain objectives, pleasures and so on. Emotions are what moves us. That is their purpose.
Our senses and our memories tell the brain how we should feel from one moment to the next, and when we feel an emotion, we act upon it. In an article I wrote in “The Startup”, I describe how everything we do is to either feel joy or avoid discomfort.
That’s our most basic programming in its simplest definition.
Artificial intelligence doesn’t have a brain like we do. In fact, most A.I. don’t have a body: A.I. is software running on a computer somewhere. Occasionally, A.I. runs on a computer connected to actuators of some sort, commonly known as a “robot”. Formless A.I. often go unnoticed, operating behind the scenes, like the artificial intelligence behind Google Search, Amazon store selections and sophisticated data analysis programs.
Not all A.I. are equal, either. For this article, I will focus only on artificial intelligence that can learn, i.e. deep learning A.I. Just because a piece of software is smart and marketed as intelligent doesn’t mean it has emotions. Some are very crude and one-dimensional, while others, like those using generative adversarial networks (GANs) and other methods to make A.I. “smart”, are quite sophisticated.
No matter their complexity, which of these artificial intelligences have emotions?
Emotions, just like with humans and other animals, drive action. When an A.I. is given a stimulus through its senses (cameras, lidar, microphones and other sensors), it checks its memory (a database of information) and then runs the appropriate program/algorithm corresponding to what it has sensed and remembered. The activated program then produces a predictable result. With deep learning A.I., its actions sometimes surprise us, because it continually learns on its own from its experiences and the environment, constantly adding information to its memory and adjusting its patterns of behavior. Hence, just as with a human being, its memory changes, and therefore the output of a triggered program changes. The program remains the same. It’s the data that changes.
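To make that loop concrete, here is a minimal Python sketch. Everything in it is invented for illustration: a toy agent whose decision program is fixed, and whose behavior changes only because its memory accumulates new data.

```python
from collections import defaultdict

class Agent:
    def __init__(self):
        # "Memory": a running score for each action tried in response
        # to a given stimulus.
        self.memory = defaultdict(lambda: {"approach": 0, "avoid": 0})

    def act(self, stimulus):
        # The fixed "program": pick whichever action has scored best so far.
        scores = self.memory[stimulus]
        return max(scores, key=scores.get)

    def learn(self, stimulus, action, outcome_was_good):
        # Learning changes the data, never the program.
        self.memory[stimulus][action] += 1 if outcome_was_good else -1

agent = Agent()
print(agent.act("loud noise"))            # no experience yet: defaults to "approach"
agent.learn("loud noise", "avoid", True)  # one experience later...
print(agent.act("loud noise"))            # ...the same program now outputs "avoid"
```

Notice that the act method is identical between the two calls; only the data behind it has changed.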
It sounds very similar to a human learning and adapting its behavior, doesn’t it? That’s because that is how we learn too. Our “program”, i.e. how our brain is wired, doesn’t change, but our mind changes as we gain experience and memories.
Unlike with a human, engineers create the programs that produce the results we want. Good A.I. programs will converge towards desirable results no matter what new information they gather into memory. And following the human model, A.I. programs converge by following a set of rules that grades the software on how it does in the world. When the A.I. does what it should, it counts that as a win; when it fails, it tries to avoid that path in the future. That’s the notion of convergence towards desired behavior.
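As a rough illustration of that grading loop, here is a hedged sketch in the spirit of a simple reinforcement-learning update. The two paths, the reward odds and the learning rate are assumptions made up for the example.

```python
import random

actions = ["path_a", "path_b"]
value = {a: 0.0 for a in actions}  # the software's running "grade" per action

def environment(action):
    # Hypothetical world: path_b succeeds 90% of the time, path_a never does.
    return 1.0 if action == "path_b" and random.random() < 0.9 else 0.0

for step in range(1000):
    # Mostly exploit what has worked so far, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value, key=value.get)
    reward = environment(action)
    # Wins raise an action's value; failures pull it back down,
    # so the agent gradually avoids the losing path.
    value[action] += 0.1 * (reward - value[action])

print(value)  # the grade of "path_b" climbs toward 0.9; "path_a" stays near 0
```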
This is also exactly how humans converge towards desired behavior. Our emotions drive us to seek joy and avoid discomfort. Like the deep learning A.I., this emotional dynamic helps us converge towards desirable outcomes.
We may not call the A.I. software programs that drive action “emotions”, but the process is the same.
Fearing the Emotional Robot
We fear robots that can learn and improve because we assume they will behave like humans would. Thankfully, for now, engineers have been unsuccessful at creating an A.I. capable of emulating the full range of human emotions. One day, such a thing will exist, if for no other reason than to satisfy our curiosity.
Humans feel joy or discomfort for plenty of reasons. Sometimes, a joy or discomfort felt by one person will put others in harm’s way. These are the moments we loathe and fear in our society. Unfortunately, controlling the impulses of everyone is impossible. That’s why we have laws, security and a penal system.
The good news is that programming A.I. with humanity-safe objectives and methods is entirely under our control. A.I. researchers have been working hard to figure out how to integrate goals into A.I. so that they are safe and viewed optimistically by society. Emotions and goals are important for deep learning A.I. because they are how the software can learn through challenges its programmers cannot anticipate.
We don’t want our self-driving cars to have rigid, scripted behavior. What if one must make a choice with our lives in its hands? We want them to decide on their own, improve and become better tools. This requires a program that can learn and adapt, with the clear objectives of keeping everyone safe and getting us to our destination quickly.
There is no need to program emotions like anger into self-driving cars, because then we’d give them the motivation to fight to save their own lives (or ours). An “angry” self-driving car would hurt others to save itself and its occupants. That is not desirable behavior. We want cars to emulate emotions like fear instead. Fear would help a car protect itself and its occupants without aggressive maneuvers, driving away from danger instead of confronting it. The car feels nothing. It is simply acting as programmed, based on what it “sees” through its abundance of sensors and the data in its memory. That’s the self-driving car deciding based on its programming and experience. That’s its emotion.
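Here is a loose sketch of what emulated “fear” might look like in code: a cost function in which a safety term dominates a speed term, so the cheapest maneuver is the evasive one. The maneuvers, distances and weights below are all hypothetical.

```python
def fear_cost(distance_to_hazard_m, minutes_to_destination):
    # Heavily penalize getting close to a hazard; mildly penalize delay.
    SAFETY_WEIGHT = 100.0
    SPEED_WEIGHT = 1.0
    danger = SAFETY_WEIGHT / max(distance_to_hazard_m, 0.1)
    return danger + SPEED_WEIGHT * minutes_to_destination

# Candidate maneuvers with their predicted outcomes (invented values).
candidates = {
    "swerve_toward_gap":  {"distance_to_hazard_m": 12.0, "minutes_to_destination": 9.0},
    "brake_and_yield":    {"distance_to_hazard_m": 25.0, "minutes_to_destination": 11.0},
    "accelerate_through": {"distance_to_hazard_m": 2.0,  "minutes_to_destination": 8.0},
}

best = min(candidates, key=lambda a: fear_cost(**candidates[a]))
print(best)  # "brake_and_yield": the safety term dominates, so the car evades
```

With the safety weight this large, no savings in travel time can make the aggressive maneuver win, which is the flee-rather-than-fight behavior described above.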
Since most human emotions don’t lead to harming others, having A.I. emulate those emotions for practical reasons isn’t an issue.
However, I object to giving A.I. emotions that may lead it to fight (like anger). The only A.I. application I can think of where simulating human anger could be useful is in robotic A.I. soldiers and weapons. Anger could help them destroy opposition more effectively. But if these killer A.I. learn from us like any other A.I., I fear many innocents could get hurt. Thousands of high-tech CEOs and organizations have signed a petition to the U.N. to ban the use of A.I. in autonomous weapons, for good reason.
We must strive to program our artificially intelligent tools so they may help us best, but we need to be careful about what types of emotions we want them to feel and what goals we are giving them.
One day soon, we may live in a world with interactive A.I. that behave like animals or even humans, but with different emotional triggers and goals. If we do it right, none of them will have emotions that can harm anyone. In the face of danger, they may flee, defend others, drive away, or let their robotic bodies get damaged. They would do so not because they are courageous, but because they would feel their version of “joy” in doing so.
That’s when we’ll need to think about just how remarkable we are. Perhaps when we coexist with sophisticated A.I., we’ll even need to redefine what it means to be sentient in this universe.