Neural Networks — Relation to Human Brain and Cognition

August 15, 2019

Very, very briefly.

Technology and the brain are very closely related these days. Modern computer applications take into account the features of the human brain (in marketing, for example), and human brains take into account the features of technology (if you need directions… no worries, there’s Google Maps).


Basically, a neuron is just a node with many inputs and one output, and a neural network consists of many interconnected neurons. In fact, it is a “simple” device that receives data at its inputs and provides a response at its output. First, the neural network learns to correlate incoming and outgoing signals with each other; this is called learning. Then the neural network begins to work: it receives input data and generates output signals based on the accumulated knowledge.
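To make the picture of “a node with many inputs and one output” concrete, here is a minimal sketch of a single artificial neuron in Python. The particular inputs, weights, and sigmoid activation are illustrative assumptions, not anything prescribed in the article.

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    """A single neuron: many inputs, one output.

    It combines the incoming signals as a weighted sum and passes
    the result through an activation function.
    """
    return sigmoid(np.dot(inputs, weights) + bias)

# Three incoming signals, one outgoing response (all values are made up).
x = np.array([0.5, -1.2, 3.0])   # incoming signals
w = np.array([0.8, 0.1, -0.4])   # learned connection strengths
b = 0.2                          # bias, shifting the firing threshold

print(neuron(x, w, b))           # a single output value in (0, 1)
```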

Most likely, the original evolutionary task of a neural network in nature was to separate signal from noise. “Noise” is random and difficult to fit into a pattern; a “signal” is a surge (electrical, mechanical, molecular) that is by no means random. By now, neural systems in technology, along with biological ones, have learned not only to isolate a signal from noise but also to create new levels of abstraction when identifying different states of the surrounding world. That is, they do not just take into account the factors designated by programmers; they identify those factors themselves.

Currently, there are two areas of study of neural networks.

  1. Creation of computer models that faithfully reproduce the functioning of neurons in the real brain. This makes it possible both to explain the mechanisms of real brain operation and to improve the diagnosis and treatment of diseases and injuries of the central nervous system. In everyday life, for example, it lets us learn more about what a person prefers (by collecting and analyzing data) and get closer to people by creating more personalized interfaces.
  2. Creation of computer models that abstractly reproduce the functioning of neurons in the real brain. This makes it possible to use the real brain’s advantages, such as noise immunity and energy efficiency, in the analysis of large amounts of data. Deep learning, for example, is gaining popularity here.

Like the human brain, neural networks consist of a large number of related elements that mimic neurons. Deep neural networks are built on algorithms through which computers learn from their own experience, forming multi-level, hierarchical representations of the world during the learning process.
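As a rough sketch of those multi-level, hierarchical representations, the toy network below simply stacks several layers of interconnected neurons. The layer sizes, random weights, and tanh activation are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of interconnected neurons: every input feeds every neuron."""
    return np.tanh(inputs @ weights + biases)

# A toy "deep" network: 4 raw inputs -> 8 -> 6 -> 2 outputs.
# Each successive layer works with a more abstract summary of the one before.
shapes = [(4, 8), (8, 6), (6, 2)]
params = [(rng.normal(size=s), np.zeros(s[1])) for s in shapes]

x = rng.normal(size=4)      # raw input signal
for w, b in params:
    x = layer(x, w, b)      # deeper layers, higher-level features

print(x)                    # final, most abstract representation
```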

The architecture of the British DeepMind programs, according to one of the co-founders, is based on the functioning principles of the brains of different animals. Having worked in the game industry, he went on to get a doctorate at MIT, studying how autobiographical memory works and how hippocampal damage causes amnesia. The head of Facebook AI Research also sees the future of machine learning in the further study of the functioning principles of living neural systems and their transfer to artificial networks. He draws an analogy: we are not trying to build mechanical bats, but we do study the physical laws of airflow around the wing when building airplanes; the same principle should be used to improve neural networks.

Deep learning developers always take the human brain’s features into account: the construction of its neural networks, its learning and memory processes, and so on, trying to apply their working principles and to model the structure of billions of interconnected neurons. As a result, deep learning is a step-by-step process similar to a human’s learning process. To achieve this, the neural network must be given a huge amount of data so that the system learns to classify data clearly and accurately.
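As a hedged illustration of this data-driven, step-by-step training, here is a minimal gradient-descent loop that teaches a single neuron to classify a toy labelled dataset. The data, learning rate, and number of epochs are all assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labelled data: classify points by whether their coordinates sum to > 0.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0
lr = 0.5

for epoch in range(100):
    p = sigmoid(X @ w + b)            # current predictions
    error = p - y                     # how far off each prediction is
    w -= lr * X.T @ error / len(X)    # nudge the weights to reduce the error
    b -= lr * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```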

In fact, the network receives a series of impulses as inputs and produces outputs, just like the human brain. At each moment, each neuron has a certain value (analogous to the electric potential of biological neurons); if this value exceeds a threshold, the neuron sends a single impulse, and its value then drops below the average level for 2–30 ms (an analog of the recovery process in biological neurons, the so-called refractory period). When knocked out of equilibrium, the neuron’s potential smoothly begins to return toward the average value.
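The threshold-and-refractory-period behaviour described above can be sketched as a simple integrate-and-fire style model. The concrete numbers below (threshold, decay, refractory length, input strength) are illustrative assumptions rather than measured biological values.

```python
import numpy as np

rng = np.random.default_rng(2)

threshold = 1.0        # firing threshold (arbitrary units)
rest = 0.0             # equilibrium ("average") potential
decay = 0.9            # how quickly the potential drifts back toward rest
refractory_steps = 5   # steps the neuron stays suppressed after a spike

potential = rest
refractory = 0
spikes = []

for t in range(100):
    pulse = rng.random() * 0.3                 # incoming impulse at this step
    if refractory > 0:
        refractory -= 1                        # neuron is recovering; input is ignored
        potential = rest - 0.5                 # held below the average level
    else:
        potential = rest + decay * (potential - rest) + pulse
        if potential > threshold:
            spikes.append(t)                   # send a single impulse
            potential = rest - 0.5             # value drops below the average
            refractory = refractory_steps

print("spike times:", spikes)
```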

In general, deep learning is very similar to the process of human learning and proceeds as a phased process of abstraction. Each layer has its own “weights”, and these weights reflect what is known about the components of the images. The higher the layer, the more specific the components it encodes. As in the human brain, the source signal in deep learning passes through processing layers, moving from a partial, shallow understanding to a general, deep abstraction where the network can perceive the object.

An important part of creating and training neural networks is also the understanding and application of cognitive science, the field that studies the mind and its processes by combining elements of philosophy, psychology, linguistics, anthropology, and neurobiology. Many scientists believe that creating artificial intelligence is simply another way of applying cognitive science, demonstrating how human thinking can be modeled in machines. A striking example from cognitive science is Kahneman’s decision-making model, which describes how a person makes a choice at any given moment, consciously or not (it is now often used in marketing AI).

At the moment, the biggest challenges for deep learning lie in understanding language and conducting dialogue: systems must learn to operate on abstract meanings that are described semantically (creativity and understanding the meaning of speech). And yet, despite the rapid development of this area, the human brain is still considered the most advanced “device” among neural networks: 100 trillion synaptic connections organized into an extraordinarily complex architecture. Still, scientists believe that within the next half-century (forecasts vary greatly, from 10 to 100 years) we will be able to take the step toward artificial neural networks that exceed human capabilities.

Stay tuned 🙂

Credit: BecomingHuman By: Myroslava Zelenska
