Can a human build human complexity in robots?
“If you control the code, you control the world. This is the future that awaits us.”
It’s the question that drives us.
Humans have been evolving for millions of years, and natural selection has been key to our survival. We learn from our bodies, our senses, and the consequences of our actions. The most beautiful thing that makes us who we are is our capacity for compassion. But what makes a being sentient? Is it the ability to feel pain or love, or is it the ability to reason and think? Is it an inherent quality, or can it be acquired? AI and neuroscience experts believe that the complexity of our hormones, the signals connecting our neurons, and our interactions with the real world together build consciousness.
We are living in the age of artificial intelligence. From chatbots to social media, the internet serves up idealized versions of the happiness, wants, and desires we fail to find in the real world. To understand AI, we need to know how we got here. Since the first Dartmouth conference, which theorized the idea of AI, our knowledge has expanded enormously. The IBM 7090, then among the most powerful computers (about 200,000 operations per second), was enough to support the US Air Force's ballistic missile early-warning system. Now an iPhone alone can do 600 billion operations per second. A modern supercomputer? 30 quadrillion operations per second.
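To make those figures concrete, here is a back-of-the-envelope comparison using the numbers quoted above (the figures are the rough ones cited in this article, not precise benchmarks):

```python
# Rough speedup factors implied by the figures above.
# All values are the article's approximate operations-per-second numbers.
ibm_7090 = 200_000             # 1960s mainframe
iphone = 600_000_000_000       # modern smartphone
supercomputer = 30e15          # modern supercomputer (30 quadrillion)

print(f"iPhone vs IBM 7090: {iphone / ibm_7090:,.0f}x faster")        # 3,000,000x
print(f"Supercomputer vs iPhone: {supercomputer / iphone:,.0f}x faster")  # 50,000x
```

In other words, by these numbers the phone in your pocket is about three million times faster than the machine that once watched for ballistic missiles.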
Even under a limited definition of AI, a program that responds automatically to an input, it plays increasingly vital roles. Every time we click a link or ask for a recommendation, we generate data that ends up in a bigger system. The AI revolution is happening now because we finally have both the data and the computing power to make sense of it. Each one of us is writing the future every single day.
Now onto the interesting part: what if we could develop machines with consciousness built into them? We know that fundamental human qualities such as empathy, altruism, and compassion can be mirrored and acquired, because billions of data-gathering events occur simultaneously as we speak, walk, eat, and think. There has always been an argument that AI cannot adapt to human compassion. This begs the question: what if human compassion itself were acquired knowledge? One psychological study, conducted by Harry Harlow in the United States, reveals the importance of maternal contact. He separated infant monkeys from their mothers and raised them with two inanimate surrogate mothers: one a simple construction of metal wire, the other covered in soft cloth. Harlow observed that the infants spent far more time with the soft surrogate. This preference for warmth also shaped how the baby monkeys behaved when faced with new and scary situations.
His groundbreaking study revealed the importance of touch. It raises two questions: if babies choose comfort over nourishment, would they be equally comforted by a robot that provides warmth? And if so, could there be robot caretakers?
“Humans suffer from social isolation but react positively to physical contact. This has to do with the hormone oxytocin released in our bodies, which reduces our stress levels.”
If we build on this idea of touch and the sensations of warmth and love, AI could possibly trigger those same sensations, activating the same parts of the brain that light up when we are with a loved one. If we had simulation bands that let one person feel another's heartbeat, we could have AI that emulates the feeling of having a loved one next to us. Perhaps holding a robot's hand would trigger similar neural stimulation.
Google is already ahead of the game. I recently read an article about how Google is teaching AI to create more AI: AutoML. AutoML is about building neural networks without human intervention. This could be the tipping point for machine consciousness. Imagine combining the hormonal complexity simulated by those bands, an automated machine-learning system, and exposure to the real world. If a human can build a human mind, why couldn't an AI build another AI mind, assuming it meets the criteria to become conscious like us?
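The core idea behind AutoML can be sketched as a search over network designs. Below is a toy illustration only, not Google's actual system: the `evaluate` function is a hypothetical stand-in for the expensive step of training a candidate network and scoring it on validation data.

```python
import random

def evaluate(architecture):
    """Hypothetical fitness score. In a real AutoML system this would
    train the candidate network and return its validation accuracy."""
    depth_bonus = len(architecture) * 0.05          # deeper nets score higher...
    size_penalty = sum(architecture) / 10_000       # ...but huge nets are penalized
    return depth_bonus - size_penalty + random.uniform(0, 0.1)

def random_architecture():
    """Sample a random stack of layer widths (1 to 5 layers)."""
    depth = random.randint(1, 5)
    return [random.choice([32, 64, 128, 256]) for _ in range(depth)]

def automl_search(trials=50):
    """Random search: keep the best architecture found across all trials."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = random_architecture()
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

best_arch, best_score = automl_search()
print("best architecture (layer widths):", best_arch)
```

Real systems replace random search with smarter strategies (reinforcement learning or evolutionary methods), but the loop is the same: propose a design, measure it, keep the best, with no human in the middle.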
However, it could also go wrong. In my previous article on epigenetics, I touched on the idea of turning certain genes on and off. It is thus possible that, over time, our genes adapt to this new wave of technology and we come to dismiss reality altogether.
“I fear one day that technology will surpass our human interaction. The world will have a generation of idiots.”
People have always feared the thought of AI losing control, producing what I call an “intelligence explosion.” We hate the idea of something more powerful treating us with disregard. Yes, machines could exceed our intelligence by virtue of speed alone (electronic circuits operate a hundred times faster than biochemical ones). But intelligence is neither the source of everything we value nor a safeguard for it. We are obsessed with the idea that intelligence is the grandest way to live. Why not make it something beyond intellect?