Being a writer is a joy. Sitting up through the night, thinking about all the things you can invent: fantasy lands, alternate histories of the Black Death, or a love story involving time-travelling partners who fall out of reality when they touch. How about turning a robot into a human? Will we bring a Blade Runner-style replicant in from the cold? People dream of robot kitchen staff and of going on dates with virtual women, men, or aliens. The prospect is an intriguing one.
We have been anthropomorphising AI for decades, if not millennia (see Adrienne Mayor's wonderful work), decrying miracles of technology as uninteresting even as they threaten to take over the world. Few intellectuals have been telling the cautionary tale of AI superintelligence, and those who have, Nick Bostrom, Sam Harris, and Elon Musk, to name three, have mostly been ignored. This, I believe, is because we cannot help but make the AI in our heads human, or at least animal. We cannot comprehend a brain placed in anything that does not fit our idea of a conscious body. Our model is built from within, not given to us from without. That is why all our sci-fi and fantasy monsters fit known archetypal forms: dragons, the alien, HAL, the Kraken. Every monster has to be built from a human understanding of the world, assembled from parts of our interior unconscious model. For that reason, unless we think hard, and maybe draw pictures, we will always imagine a futuristic AI which is human, or cephalopodic.
Even so, it seems ever more likely that whatever supersedes humans on planet Earth will not be human. I do not mean to suggest that it will be a super-evolved human, though many influential drivers of the AI industry are attempting just that. We are still largely unchanged, having spent a measly span of time on planet Earth, and there are problems with our ideas for speeding up evolution.
Dr Alice Roberts explained that there are many things we cannot change in our biology, and we should not expect that to ever be possible. If certain parts of the body were altered in any way, the whole body would fail. The parts make up the whole.
We need to understand that our biology is not going to change wholesale so we can fly, swim underwater for hours, or change the capacity of our hearts. We may build attachable tools for those purposes, but it does not follow that a tool necessarily becomes an extended phenotype, and it certainly would not alter our biological routines in any extreme way. We may, however, alter our conscious experience of our bodies. Work is underway to create implants that fit into the brain's somatosensory cortex, the area which maps out the body in the brain. Soon, amputees may be able to feel a prosthetic leg or arm as a real part of their body (part of the internal model), although musculature, skin, metabolism, and blood flow would all ignore the fact.
Humans may augment themselves to a certain extent, but we cannot expect to become a new species in the way biologists understand the process, as one of embryology and reproduction, in the next century, or three. A more realistic possibility in the next century is that AI will become sentient, if not conscious. To illustrate the difference between the two states, we can turn to octopuses.
Scientists know that octopuses, if they are conscious, would have a vastly different consciousness to humans. Nine brains operate semi-independently in the octopus's body: studies have shown that the brain in each of the eight arms can act without sending messages along the axons connected to the central brain in the head. In our bodies, all our conscious and sentient acts are at least mediated by the central brain. The motor neuron could be considered a semi-independent middleman, in the sense that it can operate without sending a message to the brain; however, the brain does instruct the motor neurons as to which messages they can send, as per the interior model of the world.
Studies have shown that these messages do not always occur in octopus nervous systems: each brain in the octopus's body can act on sensory input without consulting the others. The neuroscientist Anil Seth has argued that if we want an animal analogous to an alien from beyond the atmosphere, we should look no further than the octopus. Octopuses live where we cannot, have vastly different nervous systems to ours, and, importantly, are sentient, if not conscious. They are very clever: they will wait until researchers are watching to dump unwanted food down the wastewater chutes of lab tanks, guide researchers around their habitat while pointing out interesting objects with intentional prods of their arms, change the colour of their skin at will to express emotion (often using the skill to deceive other octopuses), and have been known to wave to divers and wait for them to follow.
With octopuses as an analogy, it would be impossible to argue that AI could not be sentient or conscious in a way that we do not understand, even if we build the superintelligence ourselves, because it would have a vastly different body, nervous system, language and social network to our own. Antonio Damasio claims the body gave humanity consciousness, gifting physical emotions to the brain that needed to be turned into feelings to act upon (through a process of biochemical homeostasis). By extension, AI consciousness will be utterly alien to our own, buried in a vastly contrasting bodily shell. Humanity and AI will be completely separate species, whether we are the inferior one or not.
Memory compounds the problems with creating human-AI models. We know that AI and humans will have different memories. Human memory is astounding for two major reasons. The first is that it is bountiful. It may not account for every day of a person's life (though people with such exhaustive recall do exist), but it has remarkable depth. Aboriginal Australian oral narratives, dating back 9,000 years, have been corroborated by archaeological findings. We can carry whole civilisations on the back of an expert person's memory. It requires training and dedication, but bountiful historical memories have been common in both the past and the present. Even today there are people who show just how wonderful human memory is, in the form of 'memory champions', who compete to remember things against other memorialists. What soon becomes clear is that if we put our brains to the task, humans can remember far more than is often imagined.
On the other hand, our memories are fallible, so much so that researchers have shown that they can implant false memories into deeply personal histories, and they stick. Elizabeth Loftus is the most famous researcher of false memory implantation. She and her colleagues have been able to convince people that they committed a crime in their teens, saved a cat stuck up a tree, or were lost in a shopping mall as a child. None of the scenarios had happened, and yet people genuinely believed that they had. In more recent neuroimaging studies, Loftus and her colleagues have begun to showcase the robustness of the implanted memories in the brain. A possible cause of the fallibility of memory is that the brain's memory of the world is selective.
We tend to remember the broad strokes of an event but not all the details. This makes sense, and it hits on what may give us an advantage over AI systems, at least in the beginning. The reason the brain does not pick up details may be that there are too many details to be bothered with. They are not important and get in the way of the true meaning of the event. It does not matter if you constantly misremember quotes from your favourite films, so long as the experience is vivid, true and continues to make you laugh or cry. If your mother was wearing a red jumper rather than the green one you remember, it probably doesn't matter. Human memory is drawn to what is emotionally important. We shape memories around significant incidents in the retrospective filing cabinet of consciousness. Our reality is shaped in advance by our unconscious model, but we do get to edit the experience to some extent. Some detail is inevitably lost in the process.
Once again, we have an example of how AI cannot be human. AI will probably be something far more dangerous and less flexible. A computer's memory of what is on its hard drive is either perfect or completely void: the computer does not ask you what iTunes is, only to remember later. The files and documents are either installed or they are not. The same will be true of information. A machine will not prune what is important by feeling through memories consciously in retrospect. It will almost certainly prune them using an algorithm designed to answer some kind of logical question that fits its programming. Even a conscious machine will have restrictive programming, though I imagine it will doctor its wiring when convenient. Memories will be deemed valuable if they provide beneficial intelligence, whether or not they are interesting. A computer will look at every detail and file it away for future reference. That makes sense, and it will be useful when AI takes over the world, but it comes at a cost.
Every single detail becomes an object: immovable, unchangeable, inflexible. When a robot needs to understand the emotion of an event, it will not be able to do it as a human does, by giving in to the overarching narrative. A computer will have to go through each detailed moment, cross-reference it against the past and future, and form a conclusion. There could be nothing further from the human way. We are storytellers, dramatists, teary-eyed memorialists, and excitable futurists. Our memory is built to trim the irrelevant edges, which means we miss some things that might be important, in favour of reaching the nugget of the moment that makes us feel something. Life is a balance of costs and profits, and memory is a perfect example of the human answer to the conundrum. AI can make an eventual profit from every detail, but by doing so, it loses any chance it had of becoming human.
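The contrast can be sketched in a few lines of toy code. This is purely illustrative, not any real AI system: the event details, the scoring fields, and both pruning functions are invented for the example. The point is only that a machine filters by a fixed logical criterion, while human memory keeps whatever carried emotional weight.

```python
# Hypothetical sketch: machine-style pruning vs. the human "gist" strategy.
# All data and scores below are invented for illustration.
events = [
    {"detail": "mother's jumper was green", "relevance": 0.1, "emotion": 0.9},
    {"detail": "exact quote from the film", "relevance": 0.2, "emotion": 0.8},
    {"detail": "timestamp of the event",    "relevance": 0.9, "emotion": 0.0},
    {"detail": "GPS coordinates",           "relevance": 0.8, "emotion": 0.0},
]

def machine_prune(memories, threshold=0.5):
    """Keep whatever scores highest on a fixed logical criterion."""
    return [m for m in memories if m["relevance"] >= threshold]

def human_gist(memories, threshold=0.5):
    """Keep whatever mattered emotionally; the rest fades."""
    return [m for m in memories if m["emotion"] >= threshold]

print([m["detail"] for m in machine_prune(events)])  # precise but bloodless
print([m["detail"] for m in human_gist(events)])     # vivid but lossy
```

The two filters look symmetrical, yet they preserve entirely different events; that asymmetry is the cost described above.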
The quest to create AI superintelligence is ongoing, and it is not clear how it will end. Nick Bostrom warns of monstrous, insidious systems that will take over the planet and leave us in the equivalent position of a worm. Max Tegmark conducts research into making AI beneficial to humans. Gary Smith's The AI Delusion outlines exactly how essays like this one are potentially mistaken, arguing that AI is not the threat we think it is. Sam Harris warns that no matter how long it takes, we will create an AI superintelligence that takes over the world; it is only a matter of time. All these arguments are strong in their way, and the debate underlines the fact that we are unsure of what the future looks like.
Whatever the AI trajectory, it is of paramount importance that we prepare for the future we will undoubtedly create. The only thing I am certain of is that we cannot create an AI system that becomes human, in just the same way that we cannot expect humanity to suddenly transcend its biology in less than a century and become a new super-species, despite the possible future successes of the longevity movement. The planet, the solar system and inhabited space will eventually belong to AI, humans, or perhaps both. AI will, however, be its own animal.