The job-stealing AI myth #2: Evolution is a set of IKEA diagrams
This isn’t a snub to the potential for encoding consciousness into AI, or to the many fascinating hints that it might one day be possible through neural nets or quantum computing. But all the same, we don’t understand consciousness or sentience in a way that means we can replicate it. Not yet, and possibly not ever. We know organic life evolved problem-solving, self-aware intelligence, but we don’t know how… and we need to know how something works before we can create an AI to do it.
We know how driving works, so AI can drive. We know how language works, so AI can speak (kind of, Alexa). We know how to fuse accelerometer, air-pressure and temperature data to keep an advanced fighter jet airborne, so AI can assist pilots in unstable machines that would be all but impossible to fly without computer assistance. But, and it’s a big but, we don’t know how consciousness works. We have a lot of great ideas about how bits of it might work, but coding it into a computer system well enough to create a consciousness that might want to compete with Larry from Accounts for a promotion? Nope.
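To see what “knowing how it works” buys you, here’s a toy sketch of that kind of stabilisation loop in Python. Every name and number in it is invented for illustration (real avionics are vastly more sophisticated), but the shape is right: sensor data in, a correction out, repeat.

```python
# A toy flight-stabilisation loop: sensor data in, control data out.
# All gains, numbers and dynamics are illustrative, not real avionics.

def stabilise(pitch_deg: float, pitch_rate: float) -> float:
    """Return an elevator correction from two 'sensor' readings (a PD controller)."""
    KP, KD = 0.8, 0.3                      # made-up control gains
    return -(KP * pitch_deg + KD * pitch_rate)

# Crude simulation: an unstable airframe whose pitch error grows unless corrected.
pitch, rate = 5.0, 0.0                     # start 5 degrees nose-up
for step in range(10):
    correction = stabilise(pitch, rate)
    rate += 0.1 * pitch + correction       # instability term + our correction
    pitch += rate
    print(f"step {step}: pitch={pitch:+.2f} deg, correction={correction:+.2f}")
```

Notice there’s no line anywhere for “keep flying because I enjoy it”. The loop corrects errors because that’s what the equations say, full stop.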
Recent experiments with box jellyfish suggest they’re capable of navigation. Yes. Jellyfish. No ears. No brain. Just a nerve ring and a couple of dozen simple eyes, and no sign of sentience whatsoever, except they can swim through holes regularly enough that it isn’t statistically random. We don’t know how; we have educated guesses. We could encode an AI capable of that kind of independence. Is it conscious? Who knows, it’s about as conscious as a jellyfish, I guess. Should jellyfish be worried about their jobs? No.
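For a sense of how little machinery that kind of behaviour needs, here’s a minimal sketch of a purely reactive “swimmer”. The rules are entirely made up (nobody knows the jellyfish’s actual mechanism); the point is that a couple of dumb stimulus-response rules get through a hole with nothing resembling a want on board.

```python
# A purely reactive "swimmer": fixed stimulus-response rules, no goals,
# no memory, no model of the world. Rules invented for illustration only.

WALL, GAP = "#", "."

def sense(row: str, x: int) -> str:
    """What is directly ahead at position x?"""
    return row[x]

def react(row: str, x: int) -> int:
    """Rule: if blocked, drift one step toward the nearest gap; else hold course."""
    if sense(row, x) == GAP:
        return x                           # clear ahead: swim on
    left = row.rfind(GAP, 0, x)            # nearest gap to the left
    right = row.find(GAP, x)               # nearest gap to the right
    if left == -1:
        return x + 1
    if right == -1:
        return x - 1
    return x - 1 if x - left <= right - x else x + 1

wall = "#######.#######"                   # a wall with one hole
x = 2
while sense(wall, x) != GAP:
    x = react(wall, x)
print(f"slipped through the hole at column {x}")
```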
The idea that we know how to encode an AI that might decide, one day, that it wants something other than what it was designed for implies we understand what wanting something is. Wanting is a combination of socially constructed values, normative behaviours, psychology, and physiological responses to hormones and chemical reactions in your brain. That’s about as much detail as we have. Strip out the society part, the psychology part and the physiology part, and you’ve got zip.
Wanting a particular something is vastly more complex than general wanting, and differs from person to person. Quite literally billions of different desires, with as many combinations of diverse influences defining what we actually want, and that’s just scratching the surface of a single moment in the lived human experience. Again, from a coding perspective, you’ve got zip to work with. At best, you might code an AI that thinks like Larry from Accounts on Thursday at 2pm. Not late-night Larry. Or Sunday-morning Larry. Or Larry after the lottery win, when he came into work and set his desk on fire as a grand resignation gesture.
This is why an AI self-driving car will never decide that it hates being your chauffeur and commit suicide by driving off a cliff with you in the back. All it does is drive. All it knows is driving. It will learn to drive better and better each time. It doesn’t want anything; it responds to data inputs with data outputs. It’s not dumb, it’s brilliant, it learns, but only about driving. Murder or existential angst? Nope. Unless you upgrade it with the self-loathing murder software option.
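That “data inputs with data outputs” point is the whole story. Caricatured in Python (the field names are invented, and a real autonomy stack is enormously bigger, but the shape holds), the car’s entire inner life looks like this:

```python
# The whole "inner life" of a driving policy, caricatured: a pure function
# from sensor readings to control commands. All field names are invented.

from dataclasses import dataclass

@dataclass
class Sensors:
    speed_kph: float
    gap_to_car_ahead_m: float
    lane_offset_m: float      # how far we've drifted from lane centre

@dataclass
class Controls:
    throttle: float           # 0..1
    brake: float              # 0..1
    steer: float              # negative = left, positive = right

def drive(s: Sensors) -> Controls:
    """Data in, data out. There is no variable anywhere for 'what I want'."""
    brake = 1.0 if s.gap_to_car_ahead_m < 10 else 0.0
    throttle = 0.0 if brake else min(1.0, (100 - s.speed_kph) / 100)
    steer = -0.5 * s.lane_offset_m          # nudge back toward lane centre
    return Controls(throttle, brake, steer)

print(drive(Sensors(speed_kph=80, gap_to_car_ahead_m=8, lane_offset_m=0.5)))
# Controls(throttle=0.0, brake=1.0, steer=-0.25)
```

However many million parameters you pour into drive(), it’s still a function from Sensors to Controls. There’s no slot in the type signature for resentment.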