Credit: Google News
I’m getting rather tired of the way the mainstream media continually frame their coverage of machine learning or artificial intelligence in terms of the Terminator movies of more than three decades ago. The so-called “Terminator hypothesis” or “Skynet hypothesis” has been a constant in contemporary thinking about the development of artificial intelligence, popularized even by people of great prestige and influence such as Stephen Hawking or Elon Musk, who warn of a threat to humanity from killer robots.
The reality is precisely the opposite. Businesses are investing in advanced automation, analytics, process control and any number of other improvements, and not in machines capable of developing consciousness. Even when people express a preference for artificial intelligence over a politician, it’s in the context of controlled processes, not autonomous decision making or processes that could lead to some sort of war. These things only happen in the movies.
Machine learning is now capable of a growing range of tasks; whether those tasks are put to good or bad use will depend on humans. What is becoming clearer is the need for a multidisciplinary approach, one that combines technology with more humanistic disciplines.
MIT’s multidisciplinary AI center, unveiled five months ago, is now joined by a similar initiative at Stanford, on the other coast of the United States, reaffirming an interest in holistic, plural approaches to a discipline that is transforming our world. It is increasingly evident that data science teams must include generalists and humanists able to provide a broader perspective. With that kind of approach validating most developments, errors caused by an overly technical or mathematical focus become far less likely. And even if mistakes are made, and they will be, the consequences will not be apocalyptic. Again, that is just Hollywood material: cool, interesting and profitable as entertainment, but little more.
In the same way that people mistakenly think digital transformation is a question of technology, when it is also very much about an organization’s culture and people, many machine learning and artificial intelligence projects will fail because they did not take other issues into account. Understanding this will be fundamental for the many broad-spectrum professionals who will join these multidisciplinary teams: they will not only need clear ideas about and knowledge of artificial intelligence, but will also have to propose reasonable, credible, and serious scenarios that do not start from bizarre hypotheses such as “the machine will acquire consciousness and want to kill us all.”
Machine learning is a field that started a long time ago, when computers were very limited. It is now far more powerful thanks to the almost unlimited resources available in the cloud. Even so, we are talking about advanced automation, not machines developing consciousness. A computer running a machine learning algorithm, no matter how large, will never be a brain, nor is it comparable to any type of brain. All the scaremongering about the “Terminator hypothesis” comes from pure, simple ignorance, something you quickly realize the very first time you try to train an algorithm.
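Anyone who doubts this can see for themselves what “training” actually amounts to. Stripped to its essentials, it is iterative parameter adjustment: nudging a few numbers until a model fits some data. Here is a minimal sketch in plain Python (the data and learning rate are illustrative, not from any real system), fitting a straight line by gradient descent:

```python
# "Training" reduced to its essence: adjust two numbers (weight w, bias b)
# so that the line w*x + b fits a handful of points.
# There is no understanding here, only arithmetic repeated many times.

data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # points on the line y = 2x + 1

w, b = 0.0, 0.0          # parameters start out knowing nothing
learning_rate = 0.05

for _ in range(2000):    # repeat: predict, measure error, adjust
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y      # how far off the prediction is
        grad_w += 2 * error * x      # gradient of squared error w.r.t. w
        grad_b += 2 * error          # gradient of squared error w.r.t. b
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(round(w, 2), round(b, 2))  # converges to roughly 2 and 1
```

The machine “learns” the slope and intercept, and nothing else: change the task even slightly and it is helpless until retrained. That narrowness, scaled up, is what every current machine learning system is.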
A machine and a brain work in totally different ways, and with the current state of technology, machines are not able to emulate a brain. Algorithms deal with very specific situations and contexts, and you cannot simply accumulate lots of them until you have a fully functional brain. Consciousness is not something you can assemble that way: machines will play games, drive, find anomalies and do many other things (in fact, they are already doing so), but in the current state of technology they will not develop consciousness. They will not become brains, and the comparison with a brain is simply laughable.
This is an area that will attract ever more investment and attention in the coming years, and it is, without doubt, one of the technologies with the greatest capacity to improve our lives, rather than turn against us and wipe us out. Please stop fearing AI and machine learning. As with any technology, machines will do whatever they are trained to do, and in some cases they will do it better than humans. Killer robots? Those are easy to build, which is exactly why ethical principles matter here. But even if someone develops killer robots, such robots will not grow a consciousness, develop a brain, or understand who they are or what they do. That belongs in freakin’ science fiction.
The sooner we start talking about AI on the basis of credible hypotheses by multidisciplinary teams and relegate the “Terminator hypothesis” to the field of science fiction, the better for all.