What is the big buzz about machine learning, deep learning, AI, and stuff?

To understand this, let's take a small detour.

We know that many human inventions, discoveries, and innovations are inspired by nature. For flight, inventors analyzed how birds fly, driven by the desire to soar through the sky like them. By studying whales, scientists arrived at the idea of the submarine. The invention of sonar was inspired by bats and dolphins, which use echolocation to navigate.

Similarly, deep learning is inspired by the human brain. Just as the human brain learns through trial and error, gaining knowledge by studying, experiencing, making mistakes, or being taught, deep learning deals with algorithms that give machines intelligence without explicit programming.

Let’s get to know about the biological neuron before diving into the perceptron.

- The **dendrites** are responsible for receiving information from the other neurons a neuron is connected to. The dendrites connect with other neurons through a gap called the **synapse**, which assigns a weight to a particular input.
- The **soma** is the cell body of the neuron and is responsible for processing the information that is received.
- The **axon** is like a cable through which the neuron sends its output to the **axon terminals**. These axon terminals are connected to the dendrites of other neurons through synapses.

So, to put it all together: the neuron takes some binary inputs through the dendrites, but not all inputs are treated the same, since they are weighted. If the weighted combination of these inputs exceeds a certain threshold, an output signal is produced, i.e., the neuron “fires”; if the combination falls short of the threshold, the neuron produces no output, i.e., it “doesn’t fire.” When the neuron fires, this single output travels along the axon to other neurons.

Fact: Total Connections in the Human Brain

Researchers estimate there are more than 500 trillion connections between neurons in the human brain. Even the largest artificial neural networks today don’t come close to this number.

Perceptrons were developed in the 1950s and 1960s by the scientist Frank Rosenblatt. The motivation behind the perceptron was the biological neuron in the human brain.

So how do perceptrons work?

A perceptron takes several binary inputs, x1,x2,… and produces a single binary output, as shown in the figure. Then Rosenblatt proposed a simple rule to compute the output. He introduced *weights*, w1,w2,… real numbers expressing the importance of the respective inputs to the output. The neuron’s output, 0 or 1, is determined by whether the weighted sum is less than or greater than some threshold value. Just like the weights, the threshold is a real number which is a parameter of the neuron. To put it in more precise algebraic terms:
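Filling in the algebra the paragraph refers to, the rule can be written as the standard perceptron formulation:

```latex
\text{output} =
\begin{cases}
  0 & \text{if } \sum_j w_j x_j \leq \text{threshold} \\[4pt]
  1 & \text{if } \sum_j w_j x_j > \text{threshold}
\end{cases}
```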

So why do we actually need weights? Let’s take an example, suppose you’re deciding whether or not to apply to a university. What are the factors you would consider to make the decision?

- Is the course available?
- Tuition fee costs?
- Is on-campus accommodation available?
- Living costs? And so on.

Among these, the first factor is the most important: if the university doesn’t offer your course, you wouldn’t consider applying to it at all, whereas on-campus accommodation is less critical, because you could always find an alternative. You would give more weight to course availability than to food or accommodation. Any decision-making process involves weighing up the inputs required to make the decision. Hence, we can say that a perceptron is a device that makes decisions by weighing up inputs. By varying the weights and the threshold, we can get different models of decision-making.
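The university example can be sketched as a tiny perceptron. The inputs, weights, and threshold below are illustrative assumptions, not values from any real model; note how course availability carries enough weight to dominate the decision on its own:

```python
# A minimal sketch of the university-application example as a perceptron.
# Weights and threshold are hand-picked for illustration.

def perceptron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Binary inputs: [course available, affordable tuition,
#                 on-campus accommodation, affordable living costs]
weights = [6, 2, 1, 2]   # course availability dominates; accommodation matters least
threshold = 5

print(perceptron([1, 1, 0, 1], weights, threshold))  # 1 -> apply
print(perceptron([0, 1, 1, 1], weights, threshold))  # 0 -> don't apply (no course)
```

Here, flipping the “course available” input alone is enough to change the decision, while no combination of the minor factors can push the sum past the threshold without it.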

Obviously, the perceptron isn’t a complete model of human decision-making! But what the example illustrates is how a perceptron can weigh up different kinds of inputs in order to make decisions.


The perceptron is an iterative algorithm that strives to find the best set of values for the weight vector “w”. To classify a data sample, it multiplies the sample’s feature vector “x” by the weight vector it has learned so far; if the prediction is wrong, it adjusts the weights to make a better prediction next time. Over thousands of iterations it becomes more accurate.
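The iteration described above can be sketched in a few lines. The learning rate of 1 and the bias-as-extra-input trick are standard assumptions, not details given in the text:

```python
# A sketch of the perceptron learning rule: start from zero weights,
# predict each sample, and correct the weights after every mistake.

def train_perceptron(samples, labels, epochs=100):
    """samples: lists of features (first entry 1 acts as a bias input);
    labels: +1 or -1. Returns a weight vector w aiming for sign(w . x) = label."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x))
            if y * activation <= 0:          # wrong side (or on the boundary)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:                    # data separated: converged
            break
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

# Linearly separable toy data: the AND function, bias input prepended.
samples = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
labels = [-1, -1, -1, 1]
w = train_perceptron(samples, labels)
print([predict(w, x) for x in samples])  # [-1, -1, -1, 1]
```

On linearly separable data like this, the loop is guaranteed to stop; on non-separable data it would exhaust its epochs without ever reaching zero mistakes, which is exactly the failure mode the next paragraph describes.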

The downside of the perceptron is that when the data is not linearly separable, i.e., when no line or plane can divide the classes, the algorithm cannot find a solution and keeps updating its “w” indefinitely. However, scientists then discovered that even though a single perceptron cannot handle classes that aren’t linearly separable, two neurons put together can. This was demonstrated with the simple XOR operation.

The OR and AND logical functions were each learned by a separate neuron, and the two were then combined to compute the XOR function.
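The combination can be sketched with hand-picked weights and thresholds (illustrative values, not learned ones): a third neuron fires when the OR neuron fires but the AND neuron does not, which is exactly XOR.

```python
# Two single-layer perceptrons (OR, AND) combined by a third to get XOR.
# Weights and thresholds are hand-picked for illustration.

def step(weighted_sum, threshold):
    return 1 if weighted_sum > threshold else 0

def OR(a, b):
    return step(a + b, 0.5)     # fires if at least one input is 1

def AND(a, b):
    return step(a + b, 1.5)     # fires only if both inputs are 1

def XOR(a, b):
    # Fires when OR fires but AND does not: weight the AND output negatively.
    return step(OR(a, b) - AND(a, b), 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

No single line through the plane separates XOR’s outputs, but stacking neurons this way carves the space with two lines, which is why the two-neuron combination succeeds where one fails.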

The core neural network component is the neuron (also called a unit). Many neurons arranged in an interconnected structure make up a neural network, with each neuron linking to the inputs and outputs of other neurons. Thus, a neuron can input features from examples or the results of other neurons, depending on its location in the neural network.

The perceptron laid the foundation for the deep neural networks we have today. When the psychologist Rosenblatt conceived the idea of the perceptron, he thought of it as a simplified mathematical version of a brain neuron. He said, “the perceptron is the first machine which is capable of having an original idea.”

“He wanted to ask himself what are the minimum number of things that a brain has to have physically in order to perform the amazing things it does. He lifted the veil and enabled us to examine all these possibilities, and see that ideas like this were within human grasp.”

— Richard O’Brien, former director of Cornell’s Division of Biological Sciences