Revealing Categories of Emotions in AI
AI can read our emotions from our faces and knows what we are thinking and feeling. Or can it? One of the areas where AI has been particularly active is ‘Emotion AI’, which typically means taking video, audio, or text produced by a human and using it to infer their emotional state. Many companies have built APIs that do this (Microsoft for images and text, IBM for text, as well as specialist companies like Affectiva and Heartbeat AI). At the most basic end this is used for sentiment analysis; at the extremes there are services that will screen job applicants for their emotional states.
All of these have one thing in common: they take human data, and classify it, saying, for example, “in this video, the driver is distracted”, or “in this email, the writer is unhappy”.
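To make that pattern concrete, here is a minimal sketch of the classification approach, using the open-source Hugging Face transformers library rather than any of the commercial services above: human text goes in, a discrete label and a confidence score come out.

```python
# A minimal sketch of the "take human data and classify it" pattern,
# using an open-source sentiment pipeline rather than a commercial
# Emotion AI service.
from transformers import pipeline

# Loads a default pre-trained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

emails = [
    "Thanks so much for the quick turnaround on this!",
    "I still haven't heard back, and the deadline was last week.",
]

for text, result in zip(emails, classifier(emails)):
    # Each result is a discrete label plus a confidence score,
    # e.g. {'label': 'NEGATIVE', 'score': 0.98}.
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```

Whatever the underlying model, the output has the same shape: a category, and a number saying how confident the system is in it.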
But what if AI’s emotional emperor has no clothes? What if, as Clore & Ortony (2013) suggest, our emotions often don’t involve facial expressions at all, even though people can (when asked) consistently classify caricatures of emotional expressions?
There are, in effect, two approaches to emotions:
- Emotions represent mental states, and are easily and consistently classifiable (especially by humans) — this is the dominant approach in AI.
- Emotions are more about our reactions to situations than our bodily responses, and emotional concepts are actually stereotypes that we risk confusing with reality.
In philosophy, there is a concept called reification: treating an immaterial thing, like happiness or fear, as if it were a material thing. This is the risk of AI classification: by making the category more concrete, more public, and more visible, we can come to mistake the stereotype for reality.
This is not an argument against using AI to investigate and even help people understand emotions. At Turalt, we do that too, but we don’t classify emotions. We focus on reactions to situations, which are more useful than emotion labels. What does anger even mean in general? What matters is its effect on how we react, its context, and the outcomes that flow from that.
AI approaches to emotions are essentially behaviourist. For John B. Watson, the archetypal behaviourist, emotions were mere physical responses. For AI, too, only behaviour matters: after all, what else is there to measure? B. F. Skinner would have loved AI, with its focus on observation, learning through reinforcement, and reduced interest in theories and hypotheses.
At Turalt, we differ: we use psychometrics and other tools to model how people conceptualize situations. Every one of us is different inside, and these differences have value. Our techniques build a model of how each of us might react differently to the same situation, so we can improve our emotional intelligence by practising the skills we need to handle situations better.
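To illustrate the difference in framing (this is a toy sketch, not Turalt’s actual models), imagine predicting not an emotion category but a likely reaction, given a hypothetical pair of psychometric trait scores and a description of the situation:

```python
# A toy illustration (not Turalt's actual method) of the shift in framing:
# instead of mapping a piece of text to an emotion label, model how a
# particular person, described by hypothetical trait scores, might react
# to a particular situation.
from dataclasses import dataclass

@dataclass
class Person:
    # Illustrative trait scores in [0, 1]; real psychometric instruments
    # are far richer than this.
    sensitivity_to_criticism: float
    preference_for_directness: float

@dataclass
class Situation:
    description: str
    is_critical_feedback: bool
    is_blunt: bool

def likely_reaction(person: Person, situation: Situation) -> str:
    """Predict a reaction to a situation, rather than an emotion category."""
    friction = 0.0
    if situation.is_critical_feedback:
        friction += person.sensitivity_to_criticism
    if situation.is_blunt:
        friction += 1.0 - person.preference_for_directness
    if friction > 1.0:
        return "likely to withdraw or respond defensively"
    if friction > 0.5:
        return "may need the message softened or framed with context"
    return "likely to take the message at face value"

situation = Situation("Terse email pointing out a missed deadline",
                      is_critical_feedback=True, is_blunt=True)
for name, person in [("A", Person(0.8, 0.2)), ("B", Person(0.2, 0.9))]:
    print(name, "->", likely_reaction(person, situation))
```

The point is not the arithmetic, which is deliberately crude, but the shape of the question: the same situation, two different people, two different predicted reactions.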
Let’s all be a little more critical of what AI classification does, and of the possible impacts of reifying the categories we are making. And above all, let’s be mindful that AI’s neo-behaviourist tendencies deserve caution and need counterbalancing with a little more attention to psychological methods. Let’s ensure that the emotional emperor robot is fully clothed.