Dealing with Small Data Sets for Deep Learning
Data augmentation is a technique for creating modified copies of the images in a data set, artificially increasing the size of the training set. It is especially useful when the training data set is very small.
Many good articles have already been published on this concept; they cover when to use data augmentation and other important related ideas, so I will not repeat them here.
Imagine that you are afraid of Thanos and believe that he is real and will visit Earth one day. As a precautionary measure, you want to build a defense system that feeds on camera input. The system is meant to activate when Thanos arrives on Earth by recognizing his image in the camera feed. To do that, we need to train a reliable model. If we have only 10 pictures of Thanos, it's very difficult to build a reliable model that can detect his presence.
So, to get multiple pictures for the training set, we can turn to data augmentation. Better examples and scenarios of when to use augmentation are covered in the articles mentioned above. Let us consider the image below as the one we want to augment.
In this article, I’m going to solely concentrate on the coding part of Data Augmentation.
First, we will look at how this can be done with NumPy, and then we will discuss the image-preprocessing data augmentation class in Keras, which greatly simplifies the task.
Importing required modules.
Loading an image to work on.
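The original notebook loads a photo of Thanos at this point. Since no image file ships with this article, the sketch below synthesizes a stand-in RGB array with the same `(height, width, channels)` layout a loaded photo would have; the file name `thanos.jpg` shown in the comment is hypothetical.

```python
import numpy as np

# With a real file you would load the image into an array, e.g.:
#   from PIL import Image
#   img = np.asarray(Image.open("thanos.jpg"))  # "thanos.jpg" is hypothetical
# Here we synthesize random pixels so the snippet is self-contained.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(400, 600, 3), dtype=np.uint8)
print(img.shape)  # (400, 600, 3)
```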
Cropping: with cropping, we can capture the required parts of an image. Here we crop at random positions to capture random windows of the image. Cropping windows that are too small relative to the original can cause information loss.
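A minimal sketch of random cropping with plain NumPy slicing (the stand-in image and the crop size are illustrative, not from the original notebook):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the loaded photo.
image = rng.integers(0, 256, size=(400, 600, 3), dtype=np.uint8)

def random_crop(img, crop_h, crop_w):
    """Cut a crop_h x crop_w window out of img at a random position."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

crop = random_crop(image, 200, 300)
print(crop.shape)  # (200, 300, 3)
```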
Rotating images: rotation simulates the real-world effect of pictures being taken at different angles.
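With pure NumPy, `np.rot90` rotates in 90-degree steps; arbitrary angles need an extra library such as SciPy (noted in the comment as an option, not as the article's original code):

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(400, 600, 3), dtype=np.uint8)  # stand-in image

# k is the number of counter-clockwise 90-degree turns.
rot90 = np.rot90(image, k=1)
rot180 = np.rot90(image, k=2)
print(rot90.shape, rot180.shape)  # (600, 400, 3) (400, 600, 3)

# For arbitrary angles you could use SciPy instead, e.g.:
#   from scipy.ndimage import rotate
#   rot30 = rotate(image, angle=30, reshape=False)
```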
Image shifting, also called image translation: this is nothing but moving the pixels of a picture in some direction, with the pixels pushed off one edge wrapped around and added back on the opposite edge.
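`np.roll` implements exactly this wrap-around behaviour, shifting pixels along an axis and reinserting the overflow on the opposite side (the shift amount below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(400, 600, 3), dtype=np.uint8)  # stand-in image

shift = 50
shifted_right = np.roll(image, shift, axis=1)  # horizontal shift by 50 px
shifted_down = np.roll(image, shift, axis=0)   # vertical shift by 50 px

# Column 0 of the original ends up at column 50 of the shifted image.
print(np.array_equal(shifted_right[:, shift], image[:, 0]))  # True
```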
For better results, we can combine some of these techniques, as we will get augmented pictures of different styles.
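Combining the techniques is just function composition on the array; a sketch chaining a horizontal flip, a wrap-around shift, and a rotation (the particular order and amounts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(400, 600, 3), dtype=np.uint8)  # stand-in image

# Flip horizontally, shift right by 50 px with wrap-around, then rotate 90 degrees.
augmented = np.rot90(np.roll(np.fliplr(image), 50, axis=1))
print(augmented.shape)  # (600, 400, 3)
```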
We have seen that using NumPy means manually changing the values of the image array, which, as shown above, is both computationally expensive and code-heavy.
Now, we can try augmentation using the Keras Neural Network framework, which makes our job a lot easier.
TensorFlow provides a dedicated class for data augmentation, with many more options than just flipping, zooming, and cropping images.
With Keras, there is no need to manually adjust pixels; its built-in functions take care of that, so the augmentation code is far shorter while offering more options.
Let us look at the image-preprocessing ImageDataGenerator class of Keras:
Let’s look at the important arguments used for common data augmentation techniques:
- rotation_range: Int. Degree range for random rotations.
- width_shift_range: Float, 1-D array-like or int. If a float < 1, a fraction of total width; otherwise a shift in pixels.
- height_shift_range: Float, 1-D array-like or int. If a float < 1, a fraction of total height; otherwise a shift in pixels.
- brightness_range: Tuple or list of two floats. The range for picking a brightness shift value from.
- shear_range: Float. Shear Intensity (Shear angle in the counter-clockwise direction in degrees)
- zoom_range: Float or [lower, upper]. The range for random zoom. If a float, [lower, upper] = [1-zoom_range, 1+zoom_range]. A fraction of the total image to be zoomed.
- horizontal_flip: Boolean. Randomly flip inputs horizontally.
- vertical_flip: Boolean. Randomly flip inputs vertically.
- rescale: rescaling factor. Defaults to None. If None or 0, no rescaling is applied, otherwise we multiply the data by the value provided (after applying all other transformations).
- preprocessing_function: a function that will be applied to each input. The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3) and should output a Numpy tensor with the same shape.
- data_format: Image data format, either “channels_first” or “channels_last”.
- validation_split: Float. The fraction of images reserved for validation (strictly between 0 and 1).
- dtype: Dtype to use for the generated arrays.
For more details and arguments, please check out the TensorFlow documentation.
Now, we will augment our images with some of the most common techniques: flipping, rotation, width and height shifts, brightness variation, zooming, and re-scaling.
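A minimal sketch of these transforms with `ImageDataGenerator`, again on a synthesized stand-in image (the particular parameter values are illustrative, not the notebook's originals; `flow` expects a 4-D batch):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in for the Thanos photo: a batch of one random image,
# shape (num_images, height, width, channels).
rng = np.random.default_rng(5)
images = rng.integers(0, 256, size=(1, 120, 160, 3)).astype("float32")

datagen = ImageDataGenerator(
    rotation_range=40,       # rotate by up to 40 degrees either way
    width_shift_range=0.2,   # horizontal shift, as a fraction of width
    height_shift_range=0.2,  # vertical shift, as a fraction of height
    zoom_range=0.2,          # zoom in/out by up to 20%
    horizontal_flip=True,    # random horizontal flips
    rescale=1.0 / 255,       # applied after the other transformations
)

# Draw one batch of augmented variants of the single input image.
batch = next(datagen.flow(images, batch_size=1, seed=0))
print(batch.shape)  # (1, 120, 160, 3)
```

Each call to `next` yields a freshly augmented batch, so a single photo can produce as many distinct variants as you care to draw.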
Now let’s look at how to augment a complete data set. We will use the CIFAR-10 data set.
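A sketch of streaming augmented CIFAR-10 batches (the generator settings and batch size are illustrative; `load_data` downloads the data set on first use):

```python
import numpy as np
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Downloads CIFAR-10 (~170 MB) the first time it is called.
(x_train, y_train), _ = cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3)

datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    rescale=1.0 / 255,
)

# Pass labels alongside images so they stay aligned through augmentation.
flow = datagen.flow(x_train[:256].astype("float32"), y_train[:256],
                    batch_size=32, seed=0)
x_batch, y_batch = next(flow)
print(x_batch.shape, y_batch.shape)  # (32, 32, 32, 3) (32, 1)
```

A generator like `flow` can be handed straight to `model.fit`, so the model sees differently augmented images every epoch.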
As we can see from the above examples, it is much easier to use Keras for data augmentation than NumPy.
I hope this large set of augmented images helps you activate your defense system and save our planet.
The complete Jupyter notebook can be found on my GitHub.
This is my first article; please provide feedback on how I can improve my articles from here on.