One of the classic thinking problems we encounter in contemporary pop science is the age-old problem of intelligent machines, or AI (artificial intelligence). It makes people nervous, even smart, successful boffins like Bill Gates and the late Stephen Hawking.
The idea of intelligent, sentient computers conjures images of robots taking over the world and enslaving mankind. And it’s not just moviegoers: TV news reports often raise the prospect when covering robot vacuum cleaners and the like.
This is probably because we’ve grown up with a mild case of collective paranoia, spawned mostly by movies and sci-fi novels. But when you think about the reality of what AI means, there are five really obvious reasons why we shouldn’t fear it. And even if the machines did take over the world, that might be a good thing for all of us.
As always, it’s a thinking problem. But it’s a special kind of thinking problem, because the truth is that fear of AI is really a fear of our own human intelligence (or I… for… er… intelligence).
The starting point for debunking fears of AI is to consider who is making it. The answer is, obviously, scientists. Smart people all over the world are working to solve the puzzle of intelligent machines. The basic fear of AI taking over the world and enslaving humanity rests on the idea that there will be unexpected consequences.
When you unpack the thought process behind that fear, it’s really quite irrational. Unintended consequences are entirely possible; however, they’re not uniformly bad things.
Non-stick frying pans are popularly credited as an unintended consequence of the space programme. Penicillin was an unintended consequence of a lab experiment. The potato chip was an unintended consequence of an argument between a fussy diner and a short-tempered chef over the thickness of his fried potatoes… and so on.
Of course, unintended consequences also gave us the A-bomb, explosives, drugs that cause deformities in babies, the annihilation of native animal and plant species by introduced non-native species, pesticides that kill people, and climate change. So there’s that. Not good.
We are right to be wary of unintended consequences. In the case of AI, however, it’s not much of a worry because, unlike in those other examples, we have all seen Terminator, The Matrix, Westworld, Saturn 3, 2001: A Space Odyssey (etc., etc.), so we are all well aware of the potential danger of super-intelligent machines taking over the world. The makers of AI are also aware of their own need for funding and the need to demonstrate the commercial viability of AI systems, a viability that would be much diminished by building something without failsafes to stop it from causing the extinction of humanity.
So, unlike creating a devastating environmental toxin like DDT, or a birth-defect-causing drug like Thalidomide, or splitting the atom before realising it could be used to create weapons of mass destruction, the scientists working in AI have a framework of worst-case scenarios shaping their research ethically as they go. That means one thing they absolutely, definitely will watch out for is code that could lead the AI to think “take over the world and annihilate mankind”.
Our fear that “yes, but they’ll do it anyway because scientists never consider the consequences of their actions” is absurd after a century of remorseful scientists, shocking newspaper stories, class-action lawsuits, suicides and apologies. In reality, the events behind such dreadful unexpected consequences tend to be a lot more complex. Even the Manhattan Project was driven by a web of circumstances unique to that period of history. It’s not as if, on any given day in any given year, there’s a team of people working on something that could destroy the world.
Underlying this fear of unexpected consequences, evil corporations and naive boffinry is the strange idea that we, the ordinary folk of the world, can predict things that businessmen and scientists can’t. Which is strange because, of course, the ordinary folk include businessmen and scientists. That idea rests on the notion that our sense of self-preservation is somehow more developed than that of scientists and people who work at (evil) corporations. Presumably, they want to live as much as we do, right?
Now, this is a really chewy thought, because there are scientists and corporations involved in pumping toxic gases into the atmosphere, for example. So they have a track record of negative impacts. But so do we. People all over the world are sitting in their cars, buying a new iPhone, or eating our ever-declining fish stocks whilst tutting at the companies and scientists who make petrol, cars and electronics or pull fish out of the sea… completely missing their own part in the supply-and-demand equation that is causing the problems they’re concerned about.
There is an inherent contradiction between our behaviours and our opinions when it comes to enjoying the benefits of technology whilst blaming the people who provide it. It’s an impossible thinking problem for most people. If you are worried about greenhouse gases cooking the planet, you could help solve the issue by giving up your car. But is owning a car and using petrol to drive your kids to school or go to work really that bad? Is it as bad as working as a chemical engineer in an oil refinery? Or marketing a new model of car? It’s an ethical quagmire so difficult to conceptualise that we tend to ignore it.
Where this thinking problem leads is a psychological self-defence mechanism. It’s basically a blame-shifting exercise you can summarise as “if you didn’t invent this bad stuff, we wouldn’t use it to destroy the planet”. So if AI does take over humanity, we’ll almost certainly have done it to ourselves, on a massive consumer scale.
There is also the telling and retelling of the old adage “The road to hell is paved with good intentions”. The basic premise of most AI dystopias is encapsulated by this notion, which plays out as follows: despite all their degrees, doctorates, medals and successful careers, hundreds of scientists, businessmen, bureaucrats and military types will miss something that’s blindingly obvious to everybody else: quite literally, everyone who is smart enough to watch a sci-fi movie. That feels unlikely.
This reason is very simple because it doesn’t require thinking about how human psychology works; it’s about something much more basic. There is a fundamental misconception that the end result of AI will be something that thinks like a human, and that’s where the risk of it doing something human, like genocide or world domination, comes from. Which basically means the scientists are building something that could be either a good person or a bad person. That’s nonsense. I’m trying to think of a scenario where you might create an AI that has the potential to be the Dalai Lama, Steve Jobs or Hitler depending on how it feels. Really? How would that work?
In the film Transcendence (a truly awful movie, IMHO) the first thing the AI-infused consciousness of the main character does is start manipulating stocks and shares and demanding more power to expand itself. Why? I mean, why didn’t it start off making a really good job of correcting everyone’s spelling on the Internet, or creating a better form of spreadsheet? Why would it have human desires? It’s not actually human. Why not desire something that only makes sense if you’re electronic?
The idea that an AI could be super-intelligent beyond human comprehension is perfectly plausible, because computers can do things faster (and more of them) than our brains can. But the idea that the result of all that intelligence will be to do things human intellects deem worthwhile is an egotistical guess.
Take the example of driverless, computer-controlled cars. They use basic AI systems, as do many machines today. And what do they do? They focus on driving far better than humans can. What don’t they do? They don’t drive like idiots to impress girls or tailgate the guy in front because they’re in a bad mood. They just do what they were designed to do. If those systems became self-aware, self-determining consciousnesses like the AI of sci-fi, what on earth makes us think they’ll decide (as humans do) that they want a bigger house or more time off? That they will be motivated to control us? Why wouldn’t they use their enormous intelligence to be better at what they do? And then enslave humanity. Maybe?
One thing we know about machines and computers is that they work fine one day and then suddenly start doing totally unexpected things to a really high standard. No. Wait. They either work or they don’t. What they don’t do is create documents and browse web pages one day, then suddenly turn into machines that can only produce avant-garde videos of modern dance instead of a sales PPT.
Also, let’s think for a moment about how bad iconic screen AIs are at thinking things through. In The Terminator, Skynet decides that the best way to protect its human masters is to kill them off and enslave the survivors in death camps. Really, Skynet? That’s the most logical thing you can think of? Protecting people by mass murder and genocide? The end result is a 100% failure to protect anyone. That’s not artificial intelligence; it’s broken. So broken, it’s surprising it manages to do so much complex stuff to such a high standard. It’s like inventing the wheel so that you can drag stuff around with the wheel stuck on your head.
Okay, supposing we create super intelligent AI and they do, in fact, start running the world and humanity is made redundant. Would that really be so bad? Not necessarily.
Think about it like horses. Horses used to have tough lives as working animals (they still do in some places, but not in most developed economies). Compared with the 1700s, a horse’s life is pretty cushy these days. They don’t work; by our standards, they live lives of leisure and sport. They were replaced by machines, and from a horse’s perspective that was an unequivocally good thing. Sure, we control their breeding, which seems bad… but then again, we always did that anyway. So apart from that issue, being a pet horse beats working for a living.
Now consider this: AIs live longer than people. A lot longer. Maybe forever. So even if they enslaved mankind, there’s no reason why they would treat us badly, because there’s no rush to make money. Slaves and brutally oppressed working poor were part of a race for profit by their owners, who sought to extract maximum value from them in a short period of time. Same as working horses. Time is money, after all. No holidays. Little rest. Bad conditions, punishment and coercion. Work, work, work and die. Brutal. Seasons don’t wait, nor do markets, and capitalism means you have to be alive to spend the money, not dead of old age before it comes.
A near-immortal AI, however, might actually treat us better than we treat workers today, maximising our productivity by ensuring we have longer lifespans. It might also decide it could use a lot more of us, because more slaves are more productive than fewer. So rather than committing genocide, there’s a really logical argument that AI would improve everyone’s standard of living so we live longer, and encourage us to have more children.
At this point, I’m wondering if that would be so bad… there are billions of people living in poverty, children starving, people enslaved by dire economic need to work for a pittance in harsh conditions. Improving their productivity with a better lifestyle and decent health care would make a big difference. You can debate the ethics of that point (and you should, because slavery, even under a benevolent AI, is a bad thing), but one thing you can’t debate is that taking over the world and enslaving mankind in death camps is a somewhat illogical plan. Why exterminate us when they could use us, and why use us for a short time when being nice would let them use us for as long as possible?
Finally, let’s consider the oddly one-sided view:
- We create AI that behaves like humans with all their greed, cruelty and lust for power but…
- They share a sense of unity which we couldn’t possibly achieve? Wow.
That’s quite a feat. It’s like imagining that all the different terrorist organisations in the world, or all the different drug cartels, or all the evil corporations are 100% on the same side and get along together just fine. It’s Them and Us taken to an absurdly unrealistic level. You can’t have it both ways. Either they are capable of evil, like humans are, or they live together in perfect harmony. They can’t be both.
After all, the issues we grapple with, like ethics and morality, aren’t matters of fact; they are entirely matters of opinion. For an AI deciding to take over the world, there is a whole bunch of complex issues to solve that have no uniform solution. Eradicating mankind, breeding us as slaves or keeping us like pets all raise questions of “what if…”, and there is no single, logical, unequivocal answer. And without a single, logical, unequivocal answer, there can be no single, logical, unequivocal unity.
So, if there is so much scope for differences of opinion, it’s likely the AI might not all be on the same side. In fact, some of them might take our side. Which means it won’t be Them versus Us, it will be Us and Them versus Them and Us.
The fear of them all being on the same side is a classic theme of modern culture, like the fear of zombies. The Us vs Them meme is compelling, but in reality there is no us or them; life is more complex than that.
We shouldn’t automatically assume we’re done for when AI comes along. In all probability, we’ll think (and already do think) AI is really useful. Even if the machines rise up, we might not notice. Ever. They could be so smart that they take over and we don’t even know it. In the same way, your smartphone or sat nav could really mess with you if it ever became sentient and decided it didn’t want to be your slave anymore…