But will the human master allow AI to be programmed to make strict and unwavering ethical decisions?
The car should steer off the cliff and save the children for the greater good of society. With its computational prowess, AI can make a split-second moral calculation (moral maths) and arrive at a better decision than a human could in this situation. Can such an algorithm exist? Of course. An AI algorithm trained with good ethical outcomes as its ultimate reward function is unwavering and consistent. It will never change its “mind”: once the behaviour is taught into the system, it simply executes. An autonomous car driven by AI has no conscience, nor will it regret making the selfless decision to save the children rather than the driver. A machine learning algorithm can learn from a state-action-reward function to serve the greater good, provided that is the preconfigured goal of the AI system. Good moral outcomes can then reflect the original intentions of the designer and the user. Let’s explore such a morally driven system further.
“Moral maths is defined as using mathematics to determine whether a decision is morally correct or immoral.”
Source: Teng Chuan Hiang
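To make the definition concrete, here is a minimal sketch of what such a split-second calculation might look like for the cliff scenario; the action names, probabilities, and harm weights are all hypothetical values I invented for illustration, not calibrated estimates.

```python
# A toy "moral maths" calculation for the cliff scenario described earlier.
# Every number here is a made-up illustrative value.

actions = {
    # hypothetical probability of fatality for the driver / the 5 children
    "stay_on_road":    {"p_driver_dies": 0.05, "p_children_die": 0.90},
    "steer_off_cliff": {"p_driver_dies": 0.80, "p_children_die": 0.00},
}

def expected_deaths(outcome: dict) -> float:
    """Expected fatalities: 1 driver plus 5 children at risk."""
    return 1 * outcome["p_driver_dies"] + 5 * outcome["p_children_die"]

# Pick the action that minimises expected fatalities.
best = min(actions, key=lambda a: expected_deaths(actions[a]))
print(best)  # steer_off_cliff: 0.80 expected deaths vs 4.55
```

On this purely utilitarian arithmetic, steering off the cliff wins; the question the rest of this piece asks is whether, and how, such arithmetic should be declared and constrained.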
Let’s say the manufacturer gives the owner moral options to configure the driverless car, such as the example shown below:
“Moral Options: Before using your driverless car, please configure your moral decisions:
Option A — Save others at all cost
Option B — Save driver and passengers at all cost”
The moral configuration of an AI system should be openly declared in clear terms. Potential actions by the AI system that have moral implications should be stipulated in the user manual. It is imperative that the manufacturer explains to the owner who purchased the car how the system’s moral agent makes decisions when presented with different scenarios. Users of AI systems have the right to know whether the algorithms are morally aligned with their own values. Much like food labelling, though considerably more complex, consumers should be made fully aware of the AI system’s compliance with safety standards, and of anything beyond that if more options are provided. At a minimum, the system must comply with the laws of the land, while giving users the option to hold it to a higher moral standard if they so choose. This means legislation must be clear on what is mandatory to declare and what is optional when implementing an AI system. Incidentally, the options above might not even be legal, but that would be another topic of discussion altogether.
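As a sketch of what such an openly declared configuration might look like in software, consider the following; `MoralOption`, `MoralDeclaration`, and every field name are hypothetical constructs of mine, not any manufacturer’s actual interface.

```python
from dataclasses import dataclass
from enum import Enum

class MoralOption(Enum):
    """The two owner-selectable policies from the options above."""
    SAVE_OTHERS_AT_ALL_COST = "A"
    SAVE_OCCUPANTS_AT_ALL_COST = "B"

@dataclass(frozen=True)
class MoralDeclaration:
    """An openly declared moral configuration, per the argument above."""
    selected_option: MoralOption
    complies_with_local_law: bool   # the mandatory legal baseline
    disclosed_in_user_manual: bool  # the declaration requirement

config = MoralDeclaration(
    selected_option=MoralOption.SAVE_OTHERS_AT_ALL_COST,
    complies_with_local_law=True,
    disclosed_in_user_manual=True,
)
print(config.selected_option)  # MoralOption.SAVE_OTHERS_AT_ALL_COST
```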
AI Governance – a double-edged sword
Many countries are implementing an accountability-based approach to regulate the use and implementation of AI. Singapore announced the second edition of its Model AI Governance Framework earlier this year to help organisations think more responsibly when implementing AI. Having read the framework, I sense the policymakers’ tendency to give more room for AI innovation to flourish in Singapore. From a business standpoint, this is good for the economy, but as a consumer, I don’t agree with this risky approach. If AI is not knowledgeably and responsibly regulated, it can create more distrust. People can easily opt out of sharing data, since that is within their control, thereby removing themselves from the risks AI poses. This will eventually stifle the blossoming of AI within the economy.
Following the publication of the ISAGO paper presented in Davos by the Singapore government, I published our Trust in AI Framework to help practitioners and policymakers codify ethical standards and build better trust between the three constituents of society. This framework was our first attempt to build the linking “infrastructure” between the major stakeholders. I’m sure we all want to get our feet wet early to stay in the game with this wonderful technology, but we cannot do so without imposing good ethical constraints to earn consumers’ trust. The Cambridge Analytica saga, Russian interference in American politics through Facebook, and Google’s data privacy violations have clearly put people’s trust to the test. Both compromising trust and slowing down the adoption of AI will hamper growth, but if we must take a side between risking the AI adoption rate and forging public trust, we should choose the latter. Trust, once lost, is extremely hard to earn back.
Ethical Decisions in AI
We make decisions every day, and some of those decisions have moral implications. It is those decisions that matter most in life, however small they may be. Making a morally wrong decision will corrupt us eventually, while everything that follows a good moral outcome can only benefit society and self. If the intention is morally good, an undesired material outcome can still be changed, with ramifications that may be reversible. Trust is never lost in such outcomes. Therefore, we must design AI with ethics deeply embedded in the system.
Obviously we must have a reference point of what is right and wrong before we can judge an outcome. Edifying an AI system may not be as hard as we think. But if we want to have our cake and eat it too, we will complicate the moral outcomes and run the risk of convoluting ethics. We must avoid such a path at all costs, or AI will only make our lives worse and more complicated. Machine learning concepts like reinforcement learning should be used to help ensure AI behaves ethically. At its core, this concept rests on the “state-action-reward” loop through which an agent achieves its goal. The reward in reinforcement learning can even be delayed, which relates closely to the character of moral education, as the sketch below illustrates.
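Here is a minimal sketch of that state-action-reward loop with a delayed reward; the chain environment, the goal state standing in for the “morally good outcome”, and all the numbers are toy values chosen purely for illustration.

```python
import random

# Toy tabular Q-learning sketch of the "state-action-reward" idea above:
# states 0..4 form a chain, and reaching state 4 (standing in for the
# "morally good outcome") pays a delayed reward of +10; every other step
# pays 0.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left / step right
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Pick the best-valued action, breaking ties randomly."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 10.0 if s_next == GOAL else 0.0          # delayed reward
        # standard Q-learning update over the state-action-reward tuple
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy at states 0..3 steps right, toward the goal.
print([greedy(s) for s in range(GOAL)])   # [1, 1, 1, 1]
```

Even though only the final transition is rewarded, the update propagates value backwards through the chain, which is what makes the delayed-reward analogy to moral education apt.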
Another concept that AI practitioners must grasp and experiment with is inverse reinforcement learning. I made an attempt to understand this learning methodology, and it seems akin to teaching a child morality using the carrot or the stick. The system learns from humans to formulate policies that achieve rewards along the way and, ultimately, its goal. If we teach the machine unbreakable moral values by giving the system strong rewards for moving from one state to another, always leading to the best moral outcome, we can build an AI system that can do no wrong, provided the algorithm is strictly prohibited from learning to commit immoral actions in pursuit of a moral outcome or goal. In this regard, the manufacturer should properly declare the moral training the AI system has received, and users should be fully warned about its moral configuration before use.
Inverse reinforcement learning (IRL) is the field of learning an agent’s objectives, values, or rewards by observing its behavior.
Johannes Heidecke said “We might observe the behavior of a human in some specific task and learn which states of the environment the human is trying to achieve and what the concrete goals might be.” (source)
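As a toy illustration of learning a reward from observed behaviour (a deliberate simplification of the idea, not Heidecke’s method), the sketch below assumes a demonstrator who chooses between two actions according to a hidden linear “moral” reward, and recovers the direction of that reward by maximum likelihood. The features, weights, and data are all fabricated for the example.

```python
import numpy as np

# Minimal inverse-reinforcement-learning-flavoured sketch: observe which
# of two actions a human picks in each situation, assume the human acts
# (softmax-)rationally under an unknown linear reward w.x, and fit w by
# gradient ascent on the log-likelihood of the observed choices.

rng = np.random.default_rng(0)

# 200 demonstrations; each action is described by 2 made-up features,
# e.g. feature 0 = "lives saved", feature 1 = "property damage".
options = rng.normal(size=(200, 2, 2))
true_w = np.array([2.0, -1.0])             # hidden "moral" reward weights
choices = (options @ true_w).argmax(axis=1)  # demonstrator picks the best

w = np.zeros(2)
for _ in range(2000):
    scores = options @ w                    # shape (200, 2)
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)       # softmax choice probabilities
    onehot = np.eye(2)[choices]
    # gradient of log-likelihood: chosen features minus expected features
    grad = ((onehot - p)[:, :, None] * options).sum(axis=(0, 1))
    w += 0.01 * grad / len(options)

print(w / np.linalg.norm(w))  # close to the direction of true_w
```

The point of the toy is the one made in the quote: by watching behaviour alone, the learner infers which outcomes the demonstrator values, which is exactly the property that makes IRL interesting for moral training.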
If 100 people say it is right to save the driver and sacrifice the five children, and one says it is wrong, the 100 are not right, morally speaking. Human beings know when something is morally wrong or right, and whether the projected outcome is likely to be good or evil. Unless I am absolutely clear on the moral outcome of an action, I will not act on it, to avoid any potential harm it may cause. Therefore, we should train the AI system to arrive specifically at morally correct decisions that adhere to the prevailing laws. It must have the altruistic moral outcome as its primary goal, to minimise the potential harm its actions might cause. We must invest more time and expertise to perfect reinforcement learning (RL) and inverse reinforcement learning (IRL) in the context of ethical applications, and we should make it mandatory for every AI system to use these methodologies to ensure 100% ethical outcomes for all AI systems.
The Ethical Black Box
An excellent concept is the ethical black box proposed by Alan Winfield and Marina Jirotka in their paper “The Case for an Ethical Black Box”. The concept mirrors the flight data recorder: it captures all the pertinent status data as the AI system makes decisions and takes actions. The objective is to provide forensic evidence when an investigation is required after an accident or mishap that might have broken the law. The aviation industry has proven that such a data recorder is vital when disaster strikes. The same concept is also useful for researchers, who can improve their algorithms by analysing how the system performs in real applications. Experts say that when deep neural networks such as CNNs are used, the relevant information may not be as easy to capture and interpret, but here I beg to differ philosophically from a scientific standpoint: with the advancement of science and technology, it is most probable that we will be able to capture the information behind an AI system’s decisions using such a black box. The problem for now is that the value of this concept will not be well understood until bad things happen. I hope that, as an anti-scientism proponent, I am not committing the cardinal sin of postulating that only science can explain and determine normative and epistemological values.
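A minimal sketch of such a recorder might look like the following; the record fields and the hash-chaining (to make tampering evident) are my own illustrative choices, not Winfield and Jirotka’s specification.

```python
import hashlib
import json
import time

# Sketch of an "ethical black box": an append-only log of every morally
# significant decision, with each entry chained to its predecessor's hash
# so that records cannot be silently altered or deleted after the fact.

class EthicalBlackBox:
    def __init__(self, path="ebb.log"):
        self.path = path
        self.prev_hash = "0" * 64  # genesis hash for the chain

    def record(self, sensor_state: dict, decision: str, rationale: dict):
        entry = {
            "timestamp": time.time(),
            "sensor_state": sensor_state,  # what the system perceived
            "decision": decision,          # what it chose to do
            "rationale": rationale,        # e.g. scores of rejected options
            "prev_hash": self.prev_hash,   # links entry to its predecessor
        }
        payload = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(payload + "\n")

ebb = EthicalBlackBox()
ebb.record(
    sensor_state={"pedestrians_detected": 5, "speed_kmh": 72},
    decision="emergency_brake",
    rationale={"emergency_brake": 0.91, "swerve_left": 0.42},
)
```

Because each entry embeds the hash of the one before it, altering a record breaks the chain, which is precisely the forensic property an investigator would need.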
Now imagine that the Mercedes statement above, together with the AI declaration, is made clear to the car owner at the point of purchase. Anyone who buys a car with the AI moral configuration above is obviously looking to preserve himself and his passengers as the top priority. Mercedes’ position in that statement quite understandably appeals to the owner. But I am not sure it would appeal to me, since the configuration is a chance to pre-commit to a heroic act in the car despite my possible cowardice when the moment actually arrives. Moreover, the AI system does not even know whether the driver’s death is imminent. Should the AI system prioritise saving the car’s driver, or saving more people, when faced with such a dilemma? I might agree that the right thing is to save more people, but it is a difficult moral predicament nonetheless.

Any car manufacturer, and any AI designer and implementer, must fully apprehend this issue: not just the legal implications, but also the moral implications. The consumers’ appetite for higher morals might seem right on paper, but in actuality that may not be the case. These kinds of moral implications will complicate AI design and implementation. Being selfish to save oneself is legal unless it becomes illegal; being greedy is not illegal unless greed leads to the breaking of some law, as in fraud or cheating. Therefore, I would recommend that any legal system mandate the declaration of the moral considerations performed by the AI system, to ensure that the user or owner is fully aware of its potential to do right or wrong. Proper documentation should be submitted to the relevant authorities for review, and testing should be performed to verify the authenticity of the moral specifications.
Now imagine AI designed not just to fulfil legal requirements but to meet higher standards of moral obligation. The outcome yields more good if the car veers off the road, risking the lives of the driver and passengers, when more people can be saved. This of course assumes the AI has such capabilities, including weighing whether there are more people in the car than pedestrians endangered by the potential accident. A more intricate problem is whether such AI capabilities should be mandatory before a car is certified roadworthy and passes safety testing. Machine learning and deep learning are often described as black boxes; for this reason, it is even more important that any decision with moral implications be handled by appropriate learning algorithms, deferring to an overriding algorithm that makes only morally right decisions, as the sketch below suggests. I strongly believe that in every moral decision presented to us, we can choose the option that serves the greater good. If we cannot, then neither should the machine. As mentioned earlier, using reinforcement learning or inverse reinforcement learning we can work toward the ultimate goal of a perfectly moral AI system for a perfect world. AI can augment human capabilities to make better moral choices.
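A minimal sketch of such an overriding layer, assuming hypothetical policy and moral-model functions of my own invention:

```python
# Sketch of the "overriding algorithm" described above: the learned driving
# policy acts freely, but any action the separate moral model judges
# impermissible is vetoed and replaced by a conservative fallback.
# The policy, moral model, and action names are hypothetical stand-ins.

def moral_override(state, policy, is_morally_permissible, fallback="brake"):
    """Return the policy's action unless the moral model vetoes it."""
    proposed = policy(state)
    if is_morally_permissible(state, proposed):
        return proposed
    # Veto: fall back to the most conservative action.
    return fallback

# Hypothetical usage:
policy = lambda state: "maintain_course"
permissible = lambda state, action: not (
    action == "maintain_course" and state.get("pedestrians_ahead", 0) > 0
)
print(moral_override({"pedestrians_ahead": 5}, policy, permissible))  # brake
```

The design point is separation of concerns: the driving policy can remain an opaque learned model, while the final say rests with a simpler, auditable moral layer.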
“If AI is to thrive in societies, trust must be established from the beginning. They must be taught ethical lessons along the way (learning).”
AI is going to expose the morality of people and make it public. Human consciousness has always been private and inaccessible until it is revealed in action. Immoral actions can be concealed by lies and dishonesty, but not when moral maths is used in AI together with an ethical black box.
I’m a conduit of His wisdom.