Artificial Intelligence
The essence of AI is algorithms. Analogous to our IQ, an AI system can improve as data, i.e. experience, is fed through its algorithms. Deep learning models are essentially mathematical algorithms implemented in code that compute weights and adjust them according to the data fed through them.
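To make that concrete, here is a minimal sketch in Python of what "adjusting weights based on data" looks like. It assumes a single-weight linear model and a squared-error loss; every name and number in it is illustrative rather than taken from any real system.

```python
# A minimal sketch of what "adjusting weights based on data" means,
# assuming a single-weight linear model and squared-error loss.
# All names (data, w, lr) are illustrative, not from the article.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs: the "experience"
w = 0.0        # the model's single weight, initially uninformed
lr = 0.01      # learning rate: how strongly each example adjusts the weight

for epoch in range(1000):
    for x, y in data:
        pred = w * x              # the model's current guess
        error = pred - y          # how wrong the guess is
        w -= lr * error * x       # gradient step: nudge the weight toward the data

print(f"learned weight: {w:.2f}")  # converges near 2.0 for this data
```

The point is simply that the system never steps outside this loop: its behaviour is entirely determined by the algorithm and the data it is given.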
Looking closer at the essence of AI, there is no doubt it can never have free will. It cannot step outside of the system and decide against its algorithms or the data it has been given. Unlike AI, a human can generate data intentionally or unknowingly. No matter how much data it has learned from, and however "intelligent" its algorithms are, the system simply cannot think and decide to act against its programme, nor produce genuinely original and meaningful data. While a human can simply step outside the programme and decide against the logic, a machine can't. Such moments of epiphany rarely follow any logic, with some outcomes as brilliant as Nobel-prize-winning work and others as disastrous as an atomic explosion. Adding to this argument, Sir Roger Penrose, in his book "The Emperor's New Mind", argues that the known laws of physics are inadequate to explain the phenomenon of consciousness. He also argues that today's computers cannot be intelligent because they are algorithmically deterministic systems. This argument might in effect invalidate the term "artificial intelligence", since on this view intelligence can only be attributed to a human being.
Can AI make ethical decisions?
The answer is "No". To make an ethical decision we need to know what is right and wrong, and then have the free will to decide even against whatever the "right" thing to do is. These considerations have a deeper meaning that requires understanding. Take, for example, the sentence "There is no such thing as a just war". This is an extremely complex moral statement for any human consciousness to decipher, let alone a neural network that relies on algorithms to find a solution. This kind of complexity is reminiscent of what computer science calls NP-complete problems (NP standing for nondeterministic polynomial time), where exact answers are intractable in practice. Suppose we created an AI programme to decide whether a war we intend to fight is a "just war", to help politicians make better decisions. We would first need to define what a "just war" is, and then feed the system with data to train its algorithms. Do you think we could train that AI programme to make an ethically perfect decision? Interestingly, we can use heuristics to address such requirements, but the output is probabilistic rather than absolute, whereas ethics demands absolutes. This actually opens up a whole new perspective on ethics, which we shall leave for another occasion and a more thorough discourse.
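To illustrate why a heuristic answer is probabilistic rather than absolute, here is a toy sketch of how such a scoring model might respond. The features, weights, and threshold are invented purely for illustration and carry no real ethical meaning.

```python
# A toy sketch of why a trained model's answer is probabilistic, not absolute.
# The features and weights below are invented purely for illustration;
# they carry no real ethical content.

import math

def sigmoid(z: float) -> float:
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical, hand-picked weights a trained model might have learned.
weights = {"civilian_risk": -2.0, "self_defence": 1.5, "last_resort": 1.2}
bias = -0.5

def just_war_score(features: dict) -> float:
    """Return the model's estimated probability that a war is 'just'."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return sigmoid(z)

# The best the system can ever say is a probability, never a moral certainty.
print(just_war_score({"civilian_risk": 0.3, "self_defence": 0.9, "last_resort": 0.8}))
```

However the weights are tuned, the output remains a number between 0 and 1, which is exactly the gap between a heuristic estimate and an ethical absolute.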
What happens when a machine makes a mistake?
Of course, AI can make mistakes just like a human, but the question of attribution and responsibility is less straightforward. Even so, I'm not sure it is any different from the case of the Boeing 737 Max air disaster saga. As the BBC News diagram of the incident shows, the fault lies heavily with the MCAS system and how it works.
MCAS is an "intelligence" installed in the plane to prevent stalling during flight, and it prevented the pilots from doing the right thing when the system failed. It was designed to satisfy safety requirements arising from the new engine position, a redesign that was itself driven by cost reduction. As complex and disastrous as the saga is, the cause of the disaster was investigated and it clearly pointed to wrong decisions (in hindsight) made by humans along the way. In this case the software (the algorithm) received wrong data and acted against the correct decision made by the pilots. The mistake was quite clearly not an autonomous function acting illogically against the logic of the software. It was a programming mistake not to provide an "easy" option for the pilots to override the function and take back control. Although the machine made the mistake, the attribution of responsibility has to lie with the humans making the ultimate decisions. Any AI system with wide-ranging effects on public safety needs strict scrutiny before it is released for public use. Unfortunately, in this case it didn't get it.
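To see the shape of this failure mode, here is a deliberately simplified toy in Python. It is not the actual MCAS logic, and every name and threshold is invented; it only shows how software fed a wrong sensor reading can keep acting against the pilot when no easy override is provided.

```python
# A deliberately simplified toy, NOT the actual MCAS logic, to illustrate
# how software fed wrong sensor data can keep acting against the pilot
# when no easy override is provided. All names and thresholds are invented.

AOA_LIMIT = 15.0  # hypothetical angle-of-attack threshold (degrees)

def automated_trim(sensor_aoa: float, pilot_trim: float, override: bool) -> float:
    """Return the trim command actually applied to the aircraft."""
    if override:
        return pilot_trim                 # pilot can take back control
    if sensor_aoa > AOA_LIMIT:            # sensor claims the plane is about to stall
        return -2.5                       # push the nose down automatically
    return pilot_trim

# A faulty sensor keeps reporting an impossibly high angle of attack,
# so the automation keeps overriding the pilot's nose-up input.
for step in range(3):
    applied = automated_trim(sensor_aoa=74.5, pilot_trim=+1.0, override=False)
    print(f"step {step}: pilot asked +1.0, system applied {applied}")
```

The design question is not whether the software followed its logic (it did), but whether a human was given an easy way to overrule it, which is exactly where responsibility returns to the people who built and approved the system.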
Final Words
Free will and AI are as different as chalk and cheese. The first is totally unbound by anything, including God Himself, and can act against logic, while the other is purely driven by algorithms with no alternative. If you feed a system erroneous data, it will produce erroneous results. However, through my involvement with APARA as chairman of the Ethical and Responsible Use of AI strategic committee, we are working towards designing a Trust framework that might one day become an ethical AI that checks the decisions other AI systems make. This ethical discourse will be presented at our AIBotics 2020 Global Conference in August 2020; come join us and continue our dialogue in person. Till then…
Credit: BecomingHuman By: Chuan Hiang Teng