The Self-driving Ethical Dilemma
Arguably you could blame the misreporting of ethical discourse for causing a panic over AI and machines. Clearly, there is a need for ethics in the working practices of pretty much every industry, service and product. Nobody says “let’s be unethical about this” and gets away with it (unless they work in a capitalist free market economy, advertising, automotive manufacturing, airlines, farming, logging, mining, construction, pyramid schemes, marketing, the internet, finance, banking, the music industry, the media or the law. Or politics. Oh, wait… every industry has its fair share of ethical problems. Phew. For a moment there, I thought it was just the killer robots of the future we had to worry about.)
Ethical enquiries in the area of AI, like MIT’s excellent social experiment The Moral Machine, are gathering cross-cultural, multigenerational opinions on what machine ethics should be. Which is awesome. The Moral Machine puts a self-driving car into various scenarios where the machine must choose between crashing and killing its occupants, and crashing and killing pedestrians. It is a classic moral dilemma, based on Philippa Foot’s 1967 version of the traditional trolley problem.
It is fascinating and highly valuable work in the field of moral philosophy and ethics. However, an unexpected consequence of that sort of project is that people hear about it and conclude it is actually something self-driving AI designers should be concerned about. If it is, it would only be because they’ve designed a really, really bad self-driving car. With murder on its mind.
The practical truth is that we don’t need a whole new ethical code for AI. We just don’t. What we need are credible products with credible safety features, not a robot that digs Immanuel Kant.
Why Not?
In the hypothetical self-driving scenario, only the brakes have failed; the rest of the vehicle and its AI brain are working perfectly. However, its choices are surprisingly limited, right? Crash into A or B, kill occupants or pedestrians. No other options. It has to choose who dies. Wait, what? Real-life crashes aren’t like that.
If you’ve ever had a car accident where you realised, however briefly, that it was coming, what did you do? Slam the brakes on and skid into something? Turn the wheel and do a doughnut? Flip the car by accidentally hitting the kerb? Did you decide who or what to hit? Or was there only attempted collision avoidance, or blind panic? Or both? Were you in control? No? Of course not; it’s an accident.
I once lost control of a vehicle on a wet, muddy road in Patagonia. As it skidded towards a fence with the steering not responding, I still had my foot pumping the brake pedal and my hand pulling on the handbrake, and I kept steering anyway. It was worth a shot, as opposed to curling into a ball and meekly accepting my fate. Like most people, I tried to minimise the impact. Brakes on. Steer away. Whatever. It may have been pointless. Who knows. It was an accident.
This creates an interesting option for the machine ethics debate, one which lifts it out of ethics and towards product design considerations: DEPLOY SYSTEMS TO MINIMISE THE CRASH. Yes. Right up to the last millisecond, keep trying to slow the vehicle down and minimise the impact, and deploy the safety systems: airbags for passengers, external soft body panels for pedestrians. A whole raft of options and innovations is available, none of which is as drastic as choosing which humans must die in the deadly zero-sum game known as commuting.
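To make the contrast concrete, here is a minimal sketch, in Python, of what such a mitigation-first loop could look like. Everything in it is hypothetical: the Prediction fields, the brake/steer/arm_passive_safety callbacks and the deployment threshold are invented for illustration and do not correspond to any real self-driving stack. The point is structural: nowhere does the loop ask who should be hit; it only ever sheds speed, steers towards open space and arms the passive safety systems.

```python
# Illustrative sketch only: all interfaces here are invented for the example.
from dataclasses import dataclass
import time


@dataclass
class Prediction:
    collision_imminent: bool
    time_to_impact_s: float
    best_escape_heading_deg: float  # heading with the most free space, not a choice of victim


def mitigation_loop(sense, brake, steer, arm_passive_safety, dt=0.01):
    """Keep reducing crash energy until the hazard is gone.

    Note what is absent: no branch anywhere selects a victim. The loop only
    ever sheds speed, steers towards open space and arms passive protection.
    """
    while True:
        p = sense()
        if not p.collision_imminent:
            return                            # hazard cleared; hand back to normal driving
        brake(1.0)                            # maximum controllable braking
        steer(p.best_escape_heading_deg)      # steer towards the most open space available
        if p.time_to_impact_s < 0.15:         # hypothetical deployment threshold (seconds)
            arm_passive_safety()              # airbags, pre-tensioners, external soft panels
        time.sleep(dt)                        # next control tick


if __name__ == "__main__":
    # Toy demo: one imminent-collision reading, then the hazard clears.
    readings = iter([Prediction(True, 0.1, -5.0), Prediction(False, 0.0, 0.0)])
    mitigation_loop(
        sense=lambda: next(readings),
        brake=lambda force: print(f"brake at {force:.0%}"),
        steer=lambda heading: print(f"steer towards {heading} degrees"),
        arm_passive_safety=lambda: print("passive safety systems armed"),
    )
```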
Of course, in this scenario a crash could still be fatal. However, the designer of the self-driving AI could at least sleep at night knowing they designed a system that did everything it could to avoid the accident or minimise it, as opposed to pondering whether their ethical framework was right to kill Ms. Johnson in the back seat to avoid hitting Little Mary-Sue on the crossing.
There is an argument to be made that if both Ms. Johnson and Little Mary-Sue wind up in hospital with serious, life-changing injuries, that’s a better outcome than one dead and one alive. I am reminded of John Rawls’ A Theory of Justice at this point. Which world would you rather live in: one where the self-driving crash leaves one person dead and one unharmed, not knowing which you would be, or one where both are badly messed up but alive and in the hospital?
Credit: Andrew Keith Walker / BecomingHuman