A field that is generating a lot of commotion and noise right now is Artificial Intelligence. But what really fascinates me is a subset of that field known as Artificial General Intelligence (AGI), often called the holy grail of Artificial Intelligence.
Many of today's machine learning and deep learning algorithms would be classified as Artificial Narrow Intelligence (ANI). I believe many of these algorithms are rapidly proliferating at the back end of most of the technologies we currently use, from ride-sharing apps to social media to other applications, and that this will continue at an exponential pace until many specific tasks can be done better by algorithms than by humans. Staying on the more positive side of the argument, I believe we can use these game-changing exponential technologies to realign society in a more positive direction and to augment ourselves.
However, the holy grail of Artificial Intelligence, Artificial General Intelligence (AGI), is still the subject of speculation. What fascinates me is how many players are already working specifically in this field.
These players include OpenAI, Stanford HAI, the SingularityNet blockchain project, and Google DeepMind, as well as other research universities and corporations throughout the world (mainly in the US and China) and individual technologists such as John Carmack. Many of them are taking different approaches to modeling an AGI, from hierarchical models to evolutionary models inspired by neuroscience.
Although we don't know whether achieving Artificial General Intelligence is even possible, when it might be possible, or what it would look like if it were, I believe there is a higher likelihood than we may perceive that Artificial General Intelligence becomes not just possible but probable, and much sooner than we may imagine.
The reason is that we are seeing exponential curves in other emerging deep technologies and research fields as well, such as Blockchain, Nanotechnology, Genomics, Biotech, Neuroimaging, Neuroscience, Neuromorphic Computing, and Quantum Computing, along with supporting paradigms such as IoT, Big Data, and Cloud Computing.
Advances in any one of these paradigms lead to advances in all of them, creating an upward cycle of accelerating returns in which Artificial General Intelligence becomes very likely. This is my personal speculation, but I believe Artificial General Intelligence will be realized within the next 5–25 years. I base that on the exponentially increasing advances in the deep technologies I mentioned above, on what leading experts in the AGI field such as Ben Goertzel, John Carmack, Ray Kurzweil, and Greg Brockman are saying, on what others such as Elon Musk, Sam Harris, Peter Diamandis, Michio Kaku, Max Tegmark, Nick Bostrom, Masayoshi Son, and Yuval Noah Harari are saying, and on the exponential progress in Science, Technology, and Engineering.
The ethical, philosophical, societal, and economic questions around Artificial General Intelligence are becoming more glaring now as we see the impact Artificial Narrow Intelligence (ANI) and machine learning/deep learning algorithms are having on the world at an exponential rate. However, I believe that as we enter challenging times socially, societally, geopolitically, and economically, Artificial General Intelligence (AGI) and STEM research will help much more than hurt, and will inspire and excite us as humans to dream more about the future.
At times I, and probably many others, worry for good reason about what this and other exponential technologies, and the sheer pace of exponential change, will mean for the human species in a relatively short amount of time, and whether our current systems can function in their current form at all in the wake of this sort of exponential disruption: business, commerce, regulation, and human purpose in an age of hyperautomation or hyperdisruption of most of our current jobs. But I also find myself excited about the positive possibilities of developing these systems: that maybe we can bring about a world of abundance, that we can create more meaningful and interesting jobs, that we can reorient our economic systems, governments, and businesses toward human needs, and that we can bring about more wealth, more equality, and the futuristic, sci-fi possibilities these systems could open up.
For example, we face problems of resource scarcity, poor governance, and climate change that we need to deal with sooner rather than later, and we will need hyperscalar solutions to the hyperscalar problems our civilization now faces. I also believe we can't predict all the positive and negative externalities of AGI. There are definitely a lot of risks involved, but there are always risks with game-changing technologies, and we can't predict what the world will look like in the future, just as people in the past couldn't predict what the world looks like today. But I think more people need to be thinking about these issues, curious about these issues, and talking about these issues, because the way our society works today is going to be disrupted at an exponentially increasing rate regardless. And people should approach this not just from a pessimistic, dystopian worldview (what happens if everything goes wrong), and not just from a positive, utopian worldview (what happens if everything goes right), but, as with most things in life, from a more balanced perspective where we consider the pros and cons in a measured and rational way.
The future is now, the singularity is near and science fiction is becoming science fact.
Credit: BecomingHuman. By Madhav Kunal.