Very, very briefly.
Technology and the brain are closely related these days. Modern computer applications take the features of the human brain into account (in marketing, for example), and human brains take the features of technology into account (need directions? No worries, there's Google Maps).
Basically, a neuron is just a node with many inputs and one output. A neural network consists of many interconnected neurons. In essence, it is a "simple" device that receives data at the input and produces a response at the output. First, the neural network learns to correlate incoming and outgoing signals with each other; this is called training. Then the neural network begins to work: it receives input data and generates output signals based on the accumulated knowledge.
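The "node with many inputs and one output" idea can be sketched in a few lines of Python. This is only an illustration: the weights, bias, and step activation are invented for the example, not part of any particular library.

```python
# A minimal sketch of a single artificial neuron: a node that combines
# many inputs into one output. Weights and bias here are illustrative.

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # fires (1) or stays silent (0)

# Three inputs, one output: the neuron "fires" only if the weighted
# evidence crosses its threshold (encoded here by the bias).
print(neuron([1.0, 0.5, -0.5], [0.4, 0.6, 0.2], bias=-0.3))  # prints 1
```

Training, described next, is the process of finding weights like these automatically instead of writing them by hand.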
Most likely, the original evolutionary task of a neural network in nature was to separate signal from noise. "Noise" is random and hard to fit into a pattern. A "signal" is a surge (electrical, mechanical, molecular), something that is by no means random. By now, neural systems in technology (alongside biological ones) have learned not only to isolate a signal from noise, but also to create new levels of abstraction when identifying different states of the surrounding world. That is, not just to take into account the factors designated by programmers, but to identify those factors themselves.
Currently, there are two main areas of neural network research.
- Creating computer models that faithfully reproduce how the neurons of the real brain function. This makes it possible both to explain the mechanisms of real brain operation and to improve the diagnosis and treatment of diseases and injuries of the central nervous system. In everyday life, for example, it lets us learn more about what a person prefers (by collecting and analyzing data) and move closer to more personalized, human-centered interfaces.
- Creating computer models that abstractly reproduce how the neurons of the real brain function. This makes it possible to use the advantages of the real brain, such as noise immunity and energy efficiency, in the analysis of large amounts of data. Deep learning, for example, is gaining popularity here.
Like the human brain, neural networks consist of a large number of interconnected elements that mimic neurons. Deep neural networks are built on algorithms through which computers learn from their own experience, forming multi-level, hierarchical representations of the world in the process.
The architecture of the British DeepMind's programs, according to one of its co-founders, is based on the functioning principles of the brains of different animals. After working in the game industry, he went on to a doctorate at UCL and studied how autobiographical memory works and how hippocampal damage causes amnesia. The head of Facebook AI Research also sees the future of machine learning in the further study of how living neural systems function and in transferring those principles to artificial networks. He draws this analogy: we are not trying to build mechanical bats, but we do study the physics of airflow around a wing when building airplanes; the same principle should be used to improve neural networks.
Deep learning developers always take the features of the human brain into account: the construction of its neural networks, its learning and memory processes, and so on, trying to apply the principles of their operation and to model the structure of billions of interconnected neurons. As a result, deep learning is a step-by-step process similar to human learning. To achieve this, the neural network must be fed a huge amount of data so that the system learns to classify it clearly and accurately.
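That step-by-step learning from labeled data can be sketched with the classic perceptron update rule, one of the simplest training procedures. The data, learning rate, and epoch count below are illustrative assumptions, not taken from any real system.

```python
# A toy version of the step-by-step learning described above: show the
# network labeled examples, and nudge the weights after every mistake.
# Learning rate, epochs, and the AND-style dataset are illustrative.

def train(samples, labels, lr=0.1, epochs=20):
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            err = y - pred                                   # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # adjust weights
            b += lr * err                                     # adjust bias
    return w, b

# Learn a simple AND-like rule from four labeled examples.
w, b = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
```

Real deep learning replaces this single-neuron rule with gradient descent over millions of weights, but the loop is the same in spirit: predict, compare with the label, correct.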
In fact, the network receives a series of impulses as input and produces outputs, just like the human brain. At each moment, each neuron has a certain value (analogous to the electric potential of a biological neuron); if this value exceeds a threshold, the neuron sends a single impulse, and its value then drops below the average for 2–30 ms (an analog of the recovery process in biological neurons, the so-called refractory period). When pushed out of equilibrium, the neuron's potential smoothly tends back toward the average value.
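The behavior described above, integrate input, fire once past a threshold, then recover, is roughly what a leaky integrate-and-fire model captures. The sketch below is a simplified illustration; all constants (threshold, leak factor, refractory length) are invented for the example.

```python
# A rough sketch of the spiking behavior described above: the neuron
# integrates input, fires a single impulse when its value crosses a
# threshold, then sits in a refractory period while its potential
# relaxes back toward a resting average. All constants are illustrative.

RESTING, THRESHOLD, RESET = 0.0, 1.0, -0.5
LEAK = 0.9          # how quickly the potential drifts back to resting
REFRACTORY = 3      # time steps of enforced silence after a spike

def simulate(inputs):
    v, cooldown, spikes = RESTING, 0, []
    for t, i in enumerate(inputs):
        if cooldown > 0:                    # refractory: ignore input
            cooldown -= 1
        else:
            v += i
            if v > THRESHOLD:               # emit a single impulse
                spikes.append(t)
                v = RESET                   # drop below the average
                cooldown = REFRACTORY
        v = RESTING + LEAK * (v - RESTING)  # smooth drift toward resting
    return spikes

print(simulate([0.6, 0.6, 0.6, 0.0, 0.0, 0.6, 0.6, 0.6]))  # prints [1, 7]
```

Note how the second burst of input at steps 5-7 needs time to accumulate again: right after a spike the neuron is silent and its potential starts below average, just as the text describes.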
In general, deep learning is very similar to human learning and proceeds through phased abstraction. Each layer has its own "weights", and those weights reflect what is known about the components of the images. The higher the layer, the more specific the components it captures. Like the human brain, deep learning passes the source signal through processing layers, moving from a partial, shallow understanding to a general, deep abstraction where it can perceive the object.
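The layer-by-layer passage of a signal can be sketched as a chain of weighted transformations. The random weights below are stand-ins for what training would normally learn; the layer sizes are arbitrary and only show the signal being compressed into ever more abstract features.

```python
# A sketch of the phased abstraction described above: a signal passes
# through successive layers, each producing a smaller, more abstract
# representation of the one below. Random weights are stand-ins for
# what training would normally learn; sizes are illustrative.

import random

def relu(x):
    return [max(0.0, v) for v in x]

def layer(x, weights):
    """One processing layer: weighted sums of the previous layer's output."""
    return relu([sum(xi * w for xi, w in zip(x, row)) for row in weights])

random.seed(0)
sizes = [8, 6, 4, 2]  # raw signal -> shallow features -> deeper -> abstract
weights = [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
           for m, n in zip(sizes, sizes[1:])]

signal = [random.random() for _ in range(sizes[0])]
for i, w in enumerate(weights):
    signal = layer(signal, w)
    print(f"layer {i + 1}: {len(signal)} features")
```

Each pass shrinks the representation, which mirrors the shallow-to-deep progression the text describes.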
An important part of creating and training neural networks is also the understanding and application of cognitive science, the field that studies the mind and its processes, combining elements of philosophy, psychology, linguistics, anthropology, and neurobiology. Many scientists believe that creating artificial intelligence is simply another way of applying cognitive science: demonstrating how human thinking can be modeled in machines. A striking example from cognitive science is Kahneman's decision-making model, which describes how a person makes a choice at any given moment, consciously or not (now often used in marketing AI).
At the moment, the biggest challenges in applying deep learning lie in understanding language and conducting dialogs: systems must learn to operate on abstract meanings described semantically (creativity and understanding the meaning of speech). And yet, despite the rapid development of the field, the human brain is still considered the most advanced "device" among neural networks: some 100 trillion synaptic connections, organized into an extraordinarily complex architecture.
Still, scientists believe that within the next half-century (forecasts vary greatly, from 10 to 100 years), the world will be able to take a step toward artificial neural networks that exceed human capabilities.
Stay tuned 🙂