Tutorial On Keras CallBacks, ModelCheckpoint and EarlyStopping in Deep Learning

August 9, 2020
in Machine Learning

Keras callbacks can play a very significant role when training deep learning models. Training such models can take days, so we need a way to monitor and control the process. For example, if the model starts overfitting we can stop the training, or if the loss has reached a minimum and rises again on the next epoch, we can stop there. Complex deep learning jobs also sometimes crash mid-training; if you have already trained for three days, all of that work is wasted. To handle these situations, Keras provides several callback functions that help avoid such problems while training the model.

In this article, we will explore different Keras callback functions. We will build a deep neural network for a regression problem and use different callbacks while training the model. For this experiment, we will use the Boston Housing dataset, which is publicly available on Kaggle; since it also ships with Keras, we will import it directly from there.


What will we learn from this article?

  • Building a deep neural network
  • Keras callbacks
  • Visualizing loss and validation loss while training
  • ModelCheckpoint
  • EarlyStopping
  • Learning Rate Scheduler

The Dataset

We are using the Boston Housing dataset, which consists of 506 rows and 14 columns and was also part of the UCI Machine Learning Repository. The data contains information about different houses in Boston. The goal is to build a model that can predict house prices from the features given in the dataset.

Building Deep Neural Network 

First, we will import the required library and the dataset. Since we import the dataset directly from Keras, it is returned already split into training and testing sets, which we store in the corresponding variables. We then check the shape of the data and find 404 samples in the training set and 102 in the testing set. Use the code below to do all this.



import tensorflow as tf

# Load the Boston Housing data, already split into train and test sets
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.boston_housing.load_data()

print(X_train.shape)
print(X_test.shape)

Output: (404, 13) for the training set and (102, 13) for the testing set.

Model

Now we will define the network by adding layers. The model is sequential: a batch normalization layer first normalizes the inputs, followed by a dense layer with a single unit that produces the output (the price of the house). We then compile the model with stochastic gradient descent as the optimizer and mean squared error as the loss. Use the code below to define the network.

# Sequential model: normalize the 13 input features, then a single-unit output layer
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.BatchNormalization(input_shape=(13,)))
model.add(tf.keras.layers.Dense(1))

# Stochastic gradient descent optimizer with mean squared error loss
model.compile(optimizer='sgd', loss='mse')
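To verify the architecture described above, you can print the model summary (this call is not part of the original article):

# Show the layer-by-layer structure: BatchNormalization followed by Dense(1)
model.summary()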

After this, we fit the model on the training data, with the test set as validation data, and start training the network. The returned history object stores the values recorded during training, in this case the training and validation loss for each epoch. We have set the number of epochs to 30. Use the code below to train the network.

history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)

Output: Keras prints the training and validation loss for each of the 30 epochs.

Once training is done, we can see what the history object contains. Use the code below to check.

print(history.history.keys())

Output: dict_keys(['loss', 'val_loss'])

print(history.history)

Output: a dictionary mapping 'loss' and 'val_loss' to their per-epoch values.

Keras CallBacks

As we can see, the history object stores the loss and validation loss for each epoch. Now let's visualize them on a graph.

Visualizing loss and validation loss while training

import matplotlib.pyplot as plt

# Plot validation and training loss per epoch
plt.plot(history.history['val_loss'])
plt.plot(history.history['loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(['val_loss', 'loss'])
plt.show()

Output: a plot of the validation and training loss curves over the epochs.


ModelCheckpoint

This Keras callback saves the model during training. We only need to define a few parameters, such as where to store the model and which quantity to monitor.

Use the code below to save the model. We first define the path and then set val_loss as the quantity to monitor: whenever it decreases, the model is saved. We then train the network again.

from tensorflow.keras.callbacks import ModelCheckpoint

filepath = '/content/drive/My Drive/All ss'

# Save the model whenever the monitored validation loss reaches a new minimum
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', mode='min', save_best_only=True, verbose=1)
callbacks_list = [checkpoint]

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=15, batch_size=32, callbacks=callbacks_list)

Output: at every epoch where the validation loss reaches a new minimum, the model is automatically saved to the given path, as the verbose training log shows.
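Because the checkpoint writes the best model to disk, it can be reloaded later. A minimal sketch, assuming the same filepath as above:

# Reload the best model saved by the checkpoint (path assumed from above)
best_model = tf.keras.models.load_model('/content/drive/My Drive/All ss')

# Evaluate the restored model on the test set
print(best_model.evaluate(X_test, y_test))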


EarlyStopping 

This Keras callback stops model training partway through. It is very helpful when a model starts to overfit: training stops as soon as the monitored quantity stops improving. As with model checkpoints, we define which quantity to monitor; here we monitor the validation loss. min_delta sets the smallest change that counts as an improvement, and patience sets how many epochs without improvement to tolerate before stopping. Use the code below to use the early stopping function.

from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 3 consecutive epochs,
# then restore the weights from the best epoch seen
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, restore_best_weights=True)
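The callback only takes effect once it is passed to fit. This call is not shown in the original article; the epoch count below simply mirrors the earlier training run:

# Train with early stopping attached (epochs=30 as in the earlier run)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30, callbacks=[earlystop])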


As we can see from the training log, training stopped after 10 epochs. This is the benefit of using early stopping.

Learning Rate Scheduler 

This is a very simple callback that adjusts the learning rate over the course of training. The schedule is defined before training as a function that takes the epoch index (and optionally the current learning rate) and returns the learning rate to use for that epoch. Use the code below to use the learning rate scheduler.

from tensorflow.keras.callbacks import LearningRateScheduler

scheduler = LearningRateScheduler(schedule, verbose=0)
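Note that the schedule function must be defined before the scheduler is created, and the article does not include one. A minimal illustrative sketch, assuming we halve the learning rate every ten epochs:

# Illustrative schedule (not from the original article):
# halve the current learning rate every 10 epochs
def schedule(epoch, lr):
    if epoch > 0 and epoch % 10 == 0:
        return lr * 0.5
    return lr

# Attach the scheduler to training like any other callback
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30, callbacks=[scheduler])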

Conclusion 

To conclude: Keras callbacks are efficient tools used while training a model to monitor and control its performance. We have discussed EarlyStopping, LearningRateScheduler, and ModelCheckpoint. You can also explore TensorBoard in the article titled “TensorBoard Tutorial – Visualise the Model Performance During Training”.
