Part #2 of “Say Ethical Framework One More Time, I Dare You”: The real ethics of AI systems (not the killer robot kind)
There was a time when the concepts of ethics and AI converged only in science fiction, like Isaac Asimov’s famous I, Robot. Ahhh yes, the three laws of robotics, which he dreamt up and then wrote a collection of stories devoted to illustrating how useless they were. As follows…
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Nobody should underestimate the profound reach of those laws. They have forged a mindset that machines, especially those driven by autonomous, intelligent systems, are a) in need of moral guidance and b) somehow engaged in dangerous activities where humans might come to harm, robots might disobey human commands, or the machine itself is at risk of damage. What the hell did Asimov think these robots would be doing? Farming genetically engineered velociraptors?
Whatever it was, it’s clearly not “Alexa, set alarm for 6.30am” or “Hey Siri, add dentist appointment to my calendar.”
Still, people talk about ethical frameworks for AI, to ensure it doesn’t enslave mankind or whatever. In reality, there is a need for ethical frameworks in the world of AI, but it’s not like that.
So next time someone raises the e-word as an objection to dampen your enthusiasm for smarter machines, surprise them with a proper ethical discourse, not a chat about science fiction tropes from the last century.