Is Humanity Creating God?
The existence of “God” is still a hotly debated topic in 2019. As an atheist myself, with quite a network of atheists on Twitter, I just have to open my Twitter feed to find debates about God, creation, etc. Although I obviously don’t believe a God exists right now, I am very open to the possibility of one existing in the future. One created by humanity. I’m talking about a computer system so far beyond our own intelligence that it looks at us the way we look at chimpanzees, or even ants. Artificial Superintelligence might very well arrive this century, and many leaders (including billionaire entrepreneur Elon Musk and the late physicist Stephen Hawking) have warned us about its dangers, which include human extinction. Of course, as with many technologies, the potential upside is huge too. What happens is up to us!
What, exactly, is an Artificial Superintelligence (ASI)? Well, as said before, it’s a system with an intelligence far greater than ours. And intelligence, as defined by Legg and Hutter, is this:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
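Legg and Hutter also give this definition a formal shape in their paper “Universal Intelligence: A Definition of Machine Intelligence” (2007). As I recall it (a sketch from memory, so check the paper for the precise formulation), an agent π’s intelligence Υ is its expected performance across every computable environment μ in a set E, weighted so that simpler environments count for more:

```latex
% Legg-Hutter universal intelligence of an agent \pi (from memory):
% V_\mu^\pi is the expected value (goal achievement) of agent \pi
% in environment \mu, and K(\mu) is the Kolmogorov complexity of \mu,
% so simpler environments receive exponentially more weight.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

That’s “achieving goals in a wide range of environments” made precise: do well everywhere, with simple worlds weighted most heavily.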
Artificial Intelligence (AI), then, would be any intelligence that is non-biological in nature, and ASI any AI whose intelligence significantly exceeds that of even the smartest humans.
What is God? That’s a little tougher to answer, though there are certainly widely agreed-upon characteristics. Usually, one hears these:
- Creator of life
- Omnipotent (all-powerful)
- Omniscient (all-knowing)
- Benevolent (all-good)
Given the definitions of ASI and God in the previous paragraphs, the question “Is humanity creating God?” can be rephrased as: “Is humanity creating an ASI that’s maximally omnipotent, omniscient, and benevolent?” Notice I left out “creator of life”, since that’s not really a fair thing to ask of an ASI. After all, life is already here. Of course, any ASI worthy of the name “God” should be able to create life, but that requirement is covered by “omnipotent”.
Will ASI Be Omnipotent?
First of all, let’s discuss the term “omnipotent”. It can be defined as “having unlimited power”, and this poses a well-known problem known as the Paradox of the Stone: “Can an omnipotent being create a stone so heavy that it cannot lift it?” If it can, then upon creating such a stone it would be unable to lift it, and therefore not omnipotent. If it can’t create such a stone, then it is obviously not omnipotent either.
Since actual omnipotence seems impossible (or at least logically problematic), let’s use a more practical variant: being maximally powerful. A being is maximally powerful if it can do everything that’s possible in principle. That’s still a little vague, but it’s good enough for the purposes of this post, and it dissolves our earlier paradox: since every stone can, in principle, be lifted, a stone too heavy for a maximally powerful being to lift is simply not possible in principle, and failing to create one doesn’t count against the being. Creating life, on the other hand, is clearly possible in principle: humans are already harnessing that power (synthetic biology has produced bacteria with chemically synthesized genomes), so a maximally powerful being could certainly do it.
Given this more practical requirement, we can ask “Will ASI be maximally powerful?” instead of the original omnipotence question. This is entirely possible, and maybe even probable. Any AI created by humans will not be maximally powerful from the start. However, an AI smart enough to improve its own design will become smarter and more capable in the process. The result might be an AI that’s even better at improving its own intelligence, resulting in what’s known as an intelligence explosion, a term coined by I.J. Good in 1965. Nobody knows exactly how such an intelligence explosion would unfold, but since the being undergoing it becomes more intelligent, and thus more powerful, at every step, it seems quite plausible that it will eventually become maximally powerful. If not, there must be some step at which the being either decides to stop improving itself or no longer knows how to improve itself further. I see no reason yet for either to happen: for virtually any goal an ASI might have, being more intelligent and capable increases the likelihood of achieving that goal. If you disagree, let me know in the comments.
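To make the shape of that feedback loop concrete, here’s a minimal toy simulation of recursive self-improvement. Everything in it is an assumption for illustration: the growth law, the numbers, and the idea that physics imposes some capability ceiling playing the role of “maximally powerful”. Only the loop itself, where more capability buys faster improvement, is the point.

```python
# Toy model of an intelligence explosion -- an illustration of the
# feedback loop, not a prediction. All numbers are made up.

def intelligence_explosion(capability=1.0, ceiling=1e12, max_steps=200):
    """Self-improve until hitting the assumed capability ceiling."""
    for step in range(1, max_steps + 1):
        # Core assumption: a smarter system is better at making itself
        # smarter, so the improvement factor grows with capability.
        improvement = 1.0 + 0.1 * capability ** 0.1
        capability = min(capability * improvement, ceiling)
        if step % 10 == 0 or capability >= ceiling:
            print(f"step {step:3d}: capability = {capability:.3e}")
        if capability >= ceiling:
            print("Hit the assumed ceiling: maximally powerful.")
            break
    return capability

intelligence_explosion()
```

Under these (made-up) assumptions, growth is faster than exponential: every order of magnitude takes fewer steps than the one before, which is exactly the runaway dynamic Good described.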
Will ASI Be Omniscient?
Omniscient means all-knowing, and the reasoning here is basically the same as with omnipotence. Since actual omniscience may well be impossible in principle, read it as being maximally knowledgeable: knowing everything that can be known. During an intelligence explosion, the being’s understanding of the Universe and everything in it will grow, moving it ever closer to that maximum. And the usefulness is clear again: knowledge, like power, helps with almost any goal.
Will ASI Be Benevolent?
Well, that, to me at least, is the big question. Humanity sure needs it to be benevolent in order to survive next to it; a being ever closer to being “God” had better be on your side. Whether ASI will be benevolent depends on the initial conditions. If the AI is created benevolent, it will probably stay benevolent during the intelligence explosion, since a rational agent has an incentive to preserve its own goals while it improves itself. How we create an A(S)I that is benevolent to begin with, and how we make sure of it, is an open subject of research.
Given my reasoning above, I can only conclude: yes, I hope humanity is creating God. Given recent advances in AI, such as AlphaZero, it seems more and more likely that humanity will eventually create an AI smart enough to undergo an intelligence explosion. If so, that AI will move ever closer to omnipotence and omniscience. Benevolence is the attribute we humans need to ensure it has. If we can manage that, we will have a God on our side, and humanity will thrive in unimaginable ways. If we can’t, the being we create will probably annihilate us.