Credit: AI Trends
By Jayshree Pandya, Founder and CEO of Risk Group LLC
This is an age of artificial intelligence (AI)-driven automation and autonomous machines. The increasing ubiquity and rapidly expanding capability of self-improving, self-replicating, autonomous intelligent machines have spurred a massive automation-driven transformation of human ecosystems in cyberspace, geospace and space (CGS). Across nations, there is already a growing trend toward entrusting complex decision processes to these rapidly evolving AI systems. From granting parole to diagnosing diseases, college admissions to job interviews, managing trades to granting credit, autonomous vehicles to autonomous weapons, rapidly evolving AI systems are increasingly being adopted by individuals and entities across each nation: its government, industries, organizations and academia (NGIOA).
Individually and collectively, the promise and perils of these evolving AI systems raise serious concerns about the accuracy, fairness, transparency, trust, ethics, privacy and security of the future of humanity, prompting calls for regulation of artificial intelligence design, development and deployment.
While fear of a disruptive technology, and of the transformation and change it brings, has always prompted calls for governments to regulate new technologies responsibly, regulating a technology like artificial intelligence is an entirely different kind of challenge. While AI can be transparent, transformative, democratized and easily distributed, it also touches every sector of the global economy and can even put the security of the entire future of humanity at risk. There is no doubt that artificial intelligence can be misused or that it can behave in unpredictable and harmful ways toward humanity, so much so that entire human civilization could be at risk.
While there has been some much-needed focus on the role of ethics, privacy and morals in this debate, security, which is equally significant, is often completely ignored. That brings us to an important question: are ethics and privacy guidelines enough to regulate AI? We need not only to make AI transparent, accountable and fair, but also to focus on its security risks.
As seen across nations, security risks are largely ignored in the AI regulation debate. It needs to be understood that any AI system, be it a robot, a program running on a single computer, a program running on networked computers, or any other set of components that hosts an AI, carries security risks with it.
So, what are these security risks and vulnerabilities? They start with the initial design and development. If the initial design allows or encourages the AI to alter its objectives based on its exposure and learning, those alterations will likely occur in accordance with the dictates of that design. An AI may one day become self-improving, begin changing its own code, and at some point alter its hardware and even self-replicate. When we evaluate these possible scenarios, at some point humans will likely lose control of the code and of any instructions embedded in it. That brings us to an important question: how will we regulate AI when humans have likely lost control of its development and deployment cycle?
As we evaluate the security risks that disruptive and dangerous technologies have posed over the years, each such technology required substantial infrastructure investment. That made the regulatory process fairly simple: follow the large investments to know who is building what. The information age and technologies like artificial intelligence, however, have fundamentally shaken this foundation of regulatory control, mainly because determining the who, where and what of artificial intelligence security risks is nearly impossible: anyone, from anywhere, with a reasonably current personal computer (or even a smartphone or other smart device) and an internet connection can now contribute to artificial intelligence projects and initiatives. Moreover, the security vulnerabilities of cyberspace carry over to any AI system, as both its software and hardware are vulnerable to breaches.
In addition, the sheer number of individuals and entities across nations that may participate in the design, development and deployment of any AI system’s components will make it difficult to assign responsibility and accountability for the entire system if anything goes wrong.
Now, with many artificial intelligence development projects going open source and with the rise in the number of open-source machine learning libraries, anyone from anywhere can modify those libraries or their code, and there is simply no way to know, in a timely manner, who made the changes and what their security impact would be. So, when individuals and entities can participate in an AI collaboration project from anywhere in the world, how can security risks be identified and proactively managed from a regulatory perspective?
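One partial, practical mitigation for this provenance problem is dependency pinning with integrity checks: before loading a downloaded library artifact, verify that its cryptographic hash matches a value recorded from a trusted release. The sketch below illustrates the idea in Python; the pinned hash value and function names are hypothetical, and a real deployment would take the expected hash from a signed manifest or lock file rather than a hard-coded constant.

```python
import hashlib

# Hypothetical pinned hash for a trusted release of a library artifact.
# In practice this would come from a signed manifest or a lock file,
# not a constant in source code.
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_hex: str) -> bool:
    """Return True only if the artifact's hash matches the pinned value."""
    return sha256_of_file(path) == expected_hex
```

Package managers already offer this pattern natively (for example, pip's hash-checking mode for requirements files); the point here is only that integrity verification detects *that* a dependency changed, not *who* changed it or why, which is precisely the regulatory gap the paragraph above describes.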
Jayshree Pandya is Founder of Risk Group, host of the Risk Roundup Podcast, author of the book The Global Age, and a strategic security advisor.
Read the source article in Forbes.