Modern computers have become incredibly powerful: Nick Bostrom became famous for his thoughts on superintelligent machines, and even simulations of human brains seem within reach. These futuristic ideas raise fundamental questions about humanity and our relation to intelligent machines. A philosophical approach leads to the question of whether machines can be conscious. And the answer is: they can be. It might not be in the way we as humans define our own consciousness, but we cannot dismiss the idea that machines can have their own form of consciousness.
Consciousness is often defined in terms of emotional states and physical embodiment. More generally, it is the awareness of oneself and one's surroundings. The notion of emotional states in particular often leads to the conclusion that consciousness is uniquely human and tied to biological tissue. This naturalistic view is fundamentally questionable, since we have no way to objectively prove the consciousness of other human beings either. It is therefore a fallacy to claim that humans have consciousness and machines do not. To find answers to these questions, we have to address the concept of consciousness in a broader sense.
First and most importantly, it is misleading to deny the existence of machine consciousness by looking at a machine's mechanics. A better approach is to look at another level of abstraction. For humans, we distinguish between the physical brain and the non-physical mind. Even when assuming a relation between mind and brain, we allocate the concept of consciousness to the mind. The neurons in the human brain are generally not expected to have emotional states themselves or to be aware of their physical embodiment. The same idea can hold for machines: a potentially existing consciousness cannot be understood by looking at transistors and their functionality. Analogous to body and mind, there is a physical computer, and there might also be something like a non-physical representation of itself and its surroundings. It is most likely not comparable to the human mind, but it does not have to be human-like to satisfy the concept of consciousness. The machine's equivalent of the mind, and its consciousness, might simply be beyond our imagination. Furthermore, the lack of appropriate communication between humans and machines, and our own lack of imagination, cannot justify denying its existence.
John Searle and other naturalists might be right in their criticism of machine intelligence. They argue that no evidence has been found so far that a machine is capable of duplicating human thinking, and that machines will never be able to understand thinking at the level of human abstraction. These conclusions are undoubtedly right for weak artificial intelligence (AI) and modern AI systems, but the concept of strong AI is different.
Instead of being guided through its learning process, the computer is given the ability to interact with its environment and then learns from these interactions. Such a so-called Seed AI is therefore by design capable of self-instructed learning and is best understood as the machine equivalent of a human baby. Both begin without any representation of their environment or themselves, but then structure their inputs, formulate goals, and improve themselves according to those goals and their perception of the world. Since the existence of such a Seed AI cannot be disproved, it has to be taken into consideration. If this AI is capable of developing intellectual capabilities functionally equal to a human's, it must somehow have a representation of its physical embodiment. Such an artificial intelligence must then be conscious, because it neither tried to emulate human intelligence nor was given any human concept of thinking and behavior, but found its representation of itself and its surroundings on its own.
Finally, even if one is convinced that consciousness is linked to emotional states and therefore must be solely human, recent progress in AI research calls this argument into question. Prominent figures in science are convinced that brain simulations will be possible. If neurons and their interactions were simulated completely, and the neural network of a human brain could be scanned, the brain could in principle be simulated on a computer. Besides the ethical questions this technology raises, it has a fundamental impact on the discussion of machine consciousness. Again, assuming that the mind arises from the brain, it must be concluded that consciousness does as well; as a direct consequence of brain simulation, the simulating computer must then be conscious. Although the simulation itself will never be aware of itself and its surroundings, the system as a whole will still be conscious. This reasoning is therefore consistent with the argument above.
Today’s trends in technology fundamentally question the beliefs of those with naturalistic approaches to artificial intelligence. Taking current innovations into consideration, it must be concluded that machines can be conscious. We do not know in which form this consciousness will manifest: it could be an anthropomorphic concept of consciousness, but it might also lie beyond our imagination. Strong artificial intelligence is therefore probably best seen as something like a second Scientific Revolution. Before the first Scientific Revolution in the 16th and 17th centuries, we thought that we humans on Earth were the center of the universe. Today we have to ask whether we will forever be the most intelligent conscious species in the universe. The biggest difference compared to the first Scientific Revolution is that we are creating this change ourselves, and we had better have a sound concept of consciousness ready before we have to deal with it.