*During the COVID-19 pandemic, it is important to remember not to panic, but to stay cautious and safe. Please be sure to wash your hands thoroughly and properly, and practice social distancing at this time. Stay safe; we can fight this ❤️*

With the global pandemic growing in scale and spreading quickly, it is very important for us to stay cautious, safe, and healthy. We need an efficient way to shorten diagnosis times for doctors, nurses, and other medical professionals so that they can focus on treatment immediately, and one promising approach is to use neural networks and deep learning to solve this problem for us.

A **Neural Network**, or Neural "Net", is a deep learning model loosely modeled on a biological neural network. It is a system that learns a specific pattern from examples, without being explicitly programmed to achieve the task. To build and implement a neural network in code (for example, in Python), there are essentially four steps to follow:

1. *Set the architecture of the model*
2. *Compile the model*
3. *Fit the model*
4. *Predict with the model*
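As a preview, these four steps can be illustrated end to end with a tiny network written in plain NumPy. This is only a sketch of the idea, not the Keras code we will build later; the data (the AND function) and the layer size are made up for illustration:

```python
import numpy as np

# Toy data: 2 input features, binary target (the AND function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# 1. Set the architecture: one layer with 2 weights and a bias.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0

# 2. "Compile": choose an activation, a loss, and a learning rate.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
learning_rate = 0.5

# 3. Fit: repeat forward propagation and weight updates.
for _ in range(2000):
    pred = sigmoid(X @ w + b)   # forward propagation
    grad = pred - y             # gradient of the cross-entropy loss
    w -= learning_rate * (X.T @ grad) / len(y)
    b -= learning_rate * grad.mean()

# 4. Predict: threshold the network's output at 0.5.
predictions = (sigmoid(X @ w + b) > 0.5).astype(int)
print(predictions)  # expect [0 0 0 1] for the AND function
```

Keras wraps each of these steps in a single call, which is why we will use it instead of writing the loops by hand.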

Before we delve into the four-step process, let's take a look at what a neural network looks like: a neural network is composed of **neurons (or nodes)** and **layers**.

To the left is a diagram of a neural network. The circles represent the neurons, or nodes, of the system, where data is stored, and the lines connect one neuron to another.

There are different layers in a neural network: the first layer is the **input layer**, which takes in the inputs, or features, from our dataset. The layers between the first and the last are called **hidden layers**, and the last layer is the **output layer**.

A neural network first takes in all of the inputs, maps them to the neurons in the first layer, and then maps those nodes to the next layer using **weights**. Each line connecting one node to another carries a certain weight. To calculate the value of a node in the next layer, we multiply every previous node value by its corresponding weight and sum all of those products from the previous layer. So, for example, the value of a node in the second layer (**shown in the second picture to the left**) is essentially the sum of the products of the input values and their weights. The weights represent how much importance or influence a particular input feature has on the model.
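To make this weighted-sum computation concrete, here is a small NumPy sketch; the input values and weights below are arbitrary numbers chosen for illustration:

```python
import numpy as np

# Three input-layer values and the weights on the lines
# connecting them to a single node in the next layer.
inputs = np.array([0.5, -1.0, 2.0])
weights = np.array([0.8, 0.2, -0.5])

# Value of the next node: sum of (input * weight) over all lines.
node_value = np.sum(inputs * weights)   # same as inputs @ weights
print(node_value)  # 0.5*0.8 + (-1.0)*0.2 + 2.0*(-0.5) = -0.8
```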

This computation continues until we reach the output layer, which decides which target the data belongs to (predicting the target for the inputs). The output layer has one node per possible target; for example, if we needed to classify whether a digit drawing was a 0, 1, 2, ..., or 9, we would have 10 different options for the output, or 10 nodes in the output layer. The neural network then makes a decision by computing its way forward through the network.
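The decision at the output layer can be sketched like this; the ten output values below are invented for illustration:

```python
import numpy as np

# Hypothetical output-layer values for the ten digits 0..9.
output_layer = np.array([0.01, 0.02, 0.05, 0.01, 0.03,
                         0.70, 0.08, 0.04, 0.02, 0.04])

# The predicted digit is the node with the largest value.
predicted_digit = int(np.argmax(output_layer))
print(predicted_digit)  # 5
```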

This process is called **forward propagation**: the neural network makes predictions by taking in the input feature values at the input layer, performing the computations layer by layer, and producing target values at the output layer. Now, what if the network makes a prediction that is not accurate? What if the prediction is far from the actual value? This is where we need to update the weights so that the predictions move closer to the actual target values. To achieve this, the network works backwards from the output layer through the hidden layer(s) (all the way back to the input layer) and updates the weights using an **optimizer** so that the next predictions are more accurate.

An optimizer is essentially an algorithm that helps us minimize the error between the predicted and the actual target value. Every time we train our network, our goal is to minimize the loss so that the neural network can accurately predict the next set of data. If we plot the loss over all possible weights, we want to find the weights where the loss is at its minimum value. To find the minimum loss, we look for the minimum of the function, where the slope is 0. The optimizer updates the weights using a **learning rate**, which controls how much each weight changes on every update.
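The update rule an optimizer applies can be sketched with plain gradient descent on a single weight. The loss function and starting weight below are toy values for illustration:

```python
# Toy loss: L(w) = (w - 3)**2, minimized at w = 3 (where the slope is 0).
def loss_gradient(w):
    return 2 * (w - 3)   # derivative (slope) of (w - 3)**2

w = 0.0              # initial weight
learning_rate = 0.1

# Each step moves the weight against the slope, scaled by the learning rate.
for _ in range(100):
    w -= learning_rate * loss_gradient(w)

print(round(w, 4))  # converges to 3.0, the minimum of the loss
```

A larger learning rate takes bigger steps (and may overshoot the minimum); a smaller one converges more slowly but more safely.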

To build a neural network, we first set the **whole architecture**, or structure, of the **neural network**. We begin by importing the libraries we need to build this network. The main library we require is Keras, a neural network library in Python.