New technologies advance at a staggering rate; in the blink of an eye, the newest state of the art becomes yesterday’s news. This holds especially true for artificial intelligence (A.I.), which has seen rapid development and huge successes within the past decade alone. From smartphones to self-driving cars, technology keeps turning what we once believed impossible into tangible reality with real-world applications, and this advancement only continues. But will it progress to the point where machines can read our minds?
Many of these advancements come from work on deep learning. Deep learning is a subset of machine learning that structures its models as artificial neural networks, loosely inspired by the human brain. In short, it uses layers of nodes that, like neurons in the human brain, pass information from one to the next, and the network updates its understanding by looking back on its mistakes.
With deep learning, it is common for thousands upon thousands of nodes to be linked together across many layers. As a result, it can be difficult to understand why the algorithm made the decision it did.
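That layered, error-correcting process can be sketched in a few lines of code. The network below is a minimal toy example, not a model from any study mentioned here: one hidden layer of nodes passes information forward, and gradient descent "looks back on its mistakes" to adjust the weights. All sizes and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one input vector and a desired output (entirely illustrative).
x = rng.normal(size=(4, 1))   # input "signal"
y = np.array([[1.0]])         # target output

# Two layers of weights: input -> hidden nodes, hidden nodes -> output.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(1, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(100):
    # Forward pass: each layer of nodes passes information to the next.
    h = sigmoid(W1 @ x)
    out = sigmoid(W2 @ h)
    losses.append(((out - y) ** 2).item())

    # "Looking back on its mistakes": propagate the error backwards
    # and nudge every weight to reduce it (gradient descent).
    d_out = 2 * (out - y) * out * (1 - out)
    d_h = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * d_out @ h.T
    W1 -= lr * d_h @ x.T

print(f"loss before: {losses[0]:.4f}, loss after: {losses[-1]:.4f}")
```

Real deep networks differ mainly in scale, with millions of nodes and many more layers, which is exactly why their decisions become hard to trace.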
Deep learning has driven significant progress in training artificial neural networks to perceive as humans do. In fact, A.I. has reached the point where it is possible to interpret brain signals and produce speech. In one recent case, a paralyzed man who could not speak, nicknamed Pancho in the study to protect his privacy, was able to communicate as a machine decoded his brain signals and converted them into text. Electrical sensors placed on the surface of his brain picked up the signals of the words Pancho intended to speak, and an algorithm then mapped those signals to words.
Despite this huge success, deep learning is not yet advanced enough to perform perfectly. Even for Pancho and the team developing the deep neural net for this project, the algorithm achieves a speaking rate of about 15 words per minute, with approximately 47% accuracy when Pancho was simply thinking of individual words. That is on top of the constraint that the vocabulary is limited to 50 words, which isn’t exactly ideal for a full conversation yet.
Nonetheless, although the results are not yet completely accurate or especially fast, this work in progress gives many people hope of communicating with their loved ones without having to write out their thoughts.
Another of the most complex deep learning problems being studied today is computer vision. Broadly speaking, it is the idea of training computers to see things the way people do, ranging from simple images online to robots that can see and interact with the physical world.
A world of attention is focused on computer vision today. We have all heard the talk about self-driving cars and how far they have progressed, but it doesn’t have to be that high-tech: even our smartphones use computer vision, such as when we unlock them with our faces. The very concept of passing images, pictures, or even videos to a computer and having it understand, process, and act on that information is a sleek, beautiful idea, which is why it has garnered so much attention.
Some recent work in computer vision has made astounding leaps as well. Akin to decoding brain signals into text, brain signals can also be decoded into images. One study showed that it was possible to measure brain activity using functional magnetic resonance imaging (fMRI) and then decode that information: the measured brain activity was run through a pre-trained deep neural network to reconstruct the images the brain was thinking about.
As the study’s reconstructed images show, current technology is still quite far from perfectly reproducing images from the brain. However, the fact that it has been shown to be possible, with relatively recognizable features, is nothing short of impressive.
Another study uses electroencephalography (EEG) tests, which measure electrical activity in the brain, and deciphers information from them. Participants were shown facial images and asked to concentrate on certain features. Their EEG recordings were then used as inputs to a neural network, which was able to estimate the face each participant was thinking of with 83% accuracy.
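The basic setup in studies like these is classification: a recorded signal goes in, and a label (a word, a face) comes out. The sketch below is not the study’s actual model; it is a generic, minimal version of that signal-to-label pipeline, using synthetic vectors as a stand-in for EEG features and a simple softmax classifier trained by gradient descent. All class counts, dimensions, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for EEG feature vectors: 3 "faces" (classes), each
# producing noisy signals around its own characteristic pattern.
n_classes, n_features, n_per_class = 3, 16, 40
centers = rng.normal(scale=2.0, size=(n_classes, n_features))
X = np.vstack([centers[c] + rng.normal(size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# A single-layer softmax classifier: signal in, class probabilities out.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def predict_proba(X):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

onehot = np.eye(n_classes)[y]
for _ in range(200):  # gradient descent on the cross-entropy loss
    p = predict_proba(X)
    W -= 0.5 * X.T @ (p - onehot) / len(X)
    b -= 0.5 * (p - onehot).mean(axis=0)

accuracy = (predict_proba(X).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.0%}")
```

The studies described here replace the synthetic vectors with real fMRI or EEG recordings and the single layer with a deep network, but the signal-in, label-out shape of the problem is the same.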
Given all the recent successes of deep learning networks in deciphering human brain activity, it is no longer a question of whether machines can read minds at all, but of how long until they can do so near perfectly. The implications of this imminent technology are massive and could change how we practice law, medicine, business, education, and more. With this introduction to new deep learning technologies coming to an end, now is the time to start thinking about real use cases, so that we can arrive at ethical solutions that do not compromise or abuse anyone’s safety or privacy.