A cognitive assessment of the belief that AI and robots will destroy us all
Since I joined the AI field 25 years ago as an undergraduate, I have never witnessed such a pervasive, morbid “AI catastrophism”: scores of articles, TV programs, tweets, retweets, and re-re-tweets on how we are all going to succumb to AI.
I think this deserves a pause, and an analysis of what is actually going on… not so much with AI, but with our own psyche.
Considering that all the AI I have personally built in my career as a scientist and entrepreneur, as well as the overwhelming majority of the AI I have come across built by other professionals in the field, is benign and aimed at solving practical, useful problems in our work or homes, I wonder where all this comes from.
Interestingly, my personal journey to AI did not start with… AI proper, but with Psychology. Back in the small Italian coastal town of Monfalcone, near Venice, where I was born and raised, there was no ‘official’ AI program; the closest you could get to AI was Psychology. So I joined the Cognitive Psychology program, where I rapidly moved through Neuroscience all the way to Computational Neuroscience and AI, passing from traditional AI to its more exotic brand, Neural Network modeling (today rebranded as Deep Learning).
After 20 years in the field as a graduate student and professor, I feel I would need 20 more just to scratch the surface of what we still do not know about the brain.
Before getting there, though, I learned a good deal about Psychodynamics, Clinical Psychology, and associated themes… So, at a certain point, reading so many of these alarming posts about an AI-fueled Apocalypse (which we rebaptize the “AI-pocalypse” for brevity), a light bulb went on.
Where have I seen it before?
We are going to take a mental journey through Psychology and Neuroscience, and back to AI, to shed some light on what may be going on in the brains of the AI-pocalyptics. For many, it will be a déjà vu.
Is there a pattern of personalities related to AI-pocalypses? Not everybody shares the same outlook on AI. On the contrary: many believe that AI, along with other technologies, will be pivotal in propelling our society to the next, ‘better’ level. For example, listen to this talk from one of my investors, Tim Draper. You come out of that talk feeling that without AI and new technologies, we will be worse off.
Tim, like many other fellow investors and entrepreneurs, is a positive thinker. He believes in human creativity, goodness, and human-propelled progress. In essence, he’s a good man who spends the majority of his time hanging out with innovators, thinking about how we can change the world for the better.
Not everybody shares these traits, though. A fellow New Englander, Professor Maurice Farber, published a study back in 1951 titled “The Armageddon Complex”. The journal article, written in the midst of the Cold War, focused on the most imminent threat then present: the chance of total nuclear war between the US and Russia. Farber’s sample of 312 subjects showed a strong trend: the bleaker one’s outlook on the future, the stronger the disposition to believe in nuclear Armageddon.
Namely, the study suggested that the crappier your life expectations and the darker you see your future, the more likely you are to believe that manure is going to hit some fan soon.
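For readers who want to see what that kind of finding looks like in numbers, here is a minimal sketch of a Pearson correlation. The data below are invented purely for illustration, not Farber’s actual 1951 measurements: the point is simply that a coefficient near +1 captures the trend “darker outlook, stronger Armageddon belief.”

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-to-10 ratings for a handful of respondents
# (toy numbers, NOT data from Farber's study).
pessimism = [2, 4, 5, 6, 8, 9]
armageddon_belief = [1, 3, 4, 5, 7, 9]

r = pearson_r(pessimism, armageddon_belief)
print(f"r = {r:.2f}")  # strongly positive: pessimism and belief move together
```

A coefficient close to +1, as in this toy example, is the statistical shape of the pattern Farber described.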
So, a probable first trait could be depressive tendencies. But that would be too simplistic an explanation.
Additional light is shed on the AI-pocalypse by a study from Sharps, Herrera, and Liao, this time investigating the (December 21) 2012 Maya prophecy.
Disclaimer: my 40th birthday fell exactly on that date, and, since I was working on AI, some people (including my sister-in-law, who briefly thought I was the Antichrist because of my work with DARPA, AI, Robotics, and the like) believed I had some sort of role in this… Thankfully, they were distracted by the 2012 movie and left me alone. On December 22nd, all was cool again (I think…).
So… the study by Sharps et al. collected data from 110 students, asking them to rate their belief that 12.21.2012 would be associated with major world events. What they found was interesting: believers showed a higher-than-normal level of ‘dissociation’, which includes anomalous perception of experiences, time, and reality in general, often accompanied by a diminished critical assessment of reality.
In a sense, we all feel this way at certain points in our lives. While by no means a pervasive mental illness, there is clearly a lingering interaction between the propensity for apocalyptic thoughts, psychological conditions (e.g., depression, or disconnection from reality due to factors such as job loss or trauma), and one’s mental attitude towards reality and how best to address the future: being able to positively change and affect it with one’s own actions vs. being a passive observer of unfolding events.
The last point is, I believe, crucial. The ability to sense danger and ‘get out of there’ when we perceive that a situation has a non-zero chance of turning deadly is most likely the single most crucial skill our brain has developed, alongside our ability to find edible food and reproduce.
However, a parallel skill has developed that has kept our species from turning into a bunch of paranoid moles constantly imagining Armageddon(s): our ability to positively change the future.
And this is key with AI, or any other technology. Since these tools are built by humans, they reflect exactly our intentions as we build them.
It is on us as a species to use them right, with hearts connected to our brains at every step of the process.
It can be done. What do you think?