Image Classification is one of the fundamental supervised tasks in the world of machine learning. TensorFlow's new 2.0 version provides a totally new development ecosystem with Eager Execution enabled by default. Personally, I assume most TF developers had a hard time with TF 2.0, as we were habituated to `tf.Session` and `tf.placeholder`, APIs we couldn't imagine TensorFlow without.
Today, we start with simple image classification without using TF Keras, so that we can take a look at the new API changes in TensorFlow 2.0.
You can take a look at the Colab notebook for this story.
We need to play around with the low-level TF APIs rather than input pipelines, so we import a well-designed dataset directly from TensorFlow Datasets. We will use the Horses Or Humans dataset; a number of other datasets are readily available with TF Datasets.
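Assuming `tensorflow` and `tensorflow-datasets` are installed, a minimal sketch of loading and preprocessing the dataset could look like this (the image size, batch size and split here are illustrative, not the story's exact values):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load Horses Or Humans as (image, label) pairs.
dataset = tfds.load('horses_or_humans', split='train', as_supervised=True)

def preprocess(image, label):
    # Resize and normalize the image; one-hot encode the 2-class label.
    image = tf.image.resize(image, (128, 128)) / 255.0
    return image, tf.one_hot(label, depth=2)

dataset = dataset.map(preprocess).shuffle(512).batch(32)
```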
Remember what we needed for a CNN in Keras: `Conv2D`, `MaxPooling2D`, `Flatten` and `Dense` layers, right? We need to create these layers ourselves using the `tf.nn` module.
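As a sketch (names like `conv2d`, `maxpool` and `dense` are my own wrapper functions, not TF APIs), the Keras layers map onto `tf.nn` ops roughly like this:

```python
def conv2d(inputs, filters, stride_size):
    # NHWC input; strides are given as [1, stride, stride, 1].
    out = tf.nn.conv2d(inputs, filters,
                       strides=[1, stride_size, stride_size, 1],
                       padding='SAME')
    return tf.nn.relu(out)

def maxpool(inputs, pool_size, stride_size):
    return tf.nn.max_pool2d(inputs,
                            ksize=[1, pool_size, pool_size, 1],
                            strides=[1, stride_size, stride_size, 1],
                            padding='SAME')

def dense(inputs, weights):
    # Flatten + Dense reduce to a matrix multiplication.
    return tf.nn.relu(tf.matmul(inputs, weights))
```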
Also, we would require some weights. The shapes of our kernels (filters) need to be calculated. Note that the `trainable=True` argument becomes necessary with `tf.Variable`. If it is not mentioned, we may receive an error regarding the differentiation of variables. In simpler words, a trainable variable is differentiable too.
Each weight is a `tf.Variable` with the `trainable=True` parameter, which is important. Also, in TF 2.0, we get the `tf.initializers` module, which makes it easier to initialize weights for neural networks. We need to encapsulate our weights in a `weights` array. This `weights` array will be used with `tf.optimizers.Adam` for optimization.
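A minimal sketch of that setup (the kernel shapes are illustrative and assume 128×128×3 inputs; your exact shapes will differ):

```python
initializer = tf.initializers.glorot_uniform()

def get_weight(shape, name):
    # trainable=True lets tf.GradientTape track this variable.
    return tf.Variable(initializer(shape), name=name,
                       trainable=True, dtype=tf.float32)

shapes = [
    [3, 3, 3, 16],       # conv kernel: 3x3, 3 in-channels, 16 out-channels
    [3, 3, 16, 32],      # conv kernel: 3x3, 16 in-channels, 32 out-channels
    [32 * 32 * 32, 64],  # dense layer, after flattening two 2x2 poolings
    [64, 2],             # output layer: 2 classes
]
weights = [get_weight(shape, 'weight{}'.format(i))
           for i, shape in enumerate(shapes)]

optimizer = tf.optimizers.Adam(learning_rate=0.001)
```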
Now, we assemble all the ops together to have a Keras-like model.
Q. Why are we declaring the model as a function? Later on, we will pass a batch of data to this function and get the outputs. We do not use `Session`, as Eager Execution is enabled by default. See this guide.
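Under the assumptions above (my hypothetical `conv2d`, `maxpool` and `dense` wrappers, plus the `weights` list), the model function might look like:

```python
def model(x):
    x = tf.cast(x, dtype=tf.float32)
    c1 = maxpool(conv2d(x, weights[0], stride_size=1),
                 pool_size=2, stride_size=2)
    c2 = maxpool(conv2d(c1, weights[1], stride_size=1),
                 pool_size=2, stride_size=2)
    # Flatten: a 128x128 input becomes 32x32x32 after two 2x2 poolings.
    flat = tf.reshape(c2, shape=(tf.shape(c2)[0], -1))
    d1 = dense(flat, weights[2])
    logits = tf.matmul(d1, weights[3])
    return tf.nn.softmax(logits)
```

Since Eager Execution runs ops immediately, calling `model(batch)` returns predictions right away, with no `Session.run`.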
The loss function is easy.
```python
def loss(pred, target):
    return tf.losses.categorical_crossentropy(target, pred)
```
Next comes the most confusing part for a beginner (for me too!). We will use `tf.GradientTape` for optimizing the model.
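Here is a minimal sketch of a training step, assuming the hypothetical `weights` list, `loss()` function and `optimizer` defined above:

```python
def train_step(model, inputs, outputs):
    # Record the forward pass so gradients can be computed.
    with tf.GradientTape() as tape:
        current_loss = loss(model(inputs), outputs)
    # Differentiate the loss w.r.t. every trainable weight.
    grads = tape.gradient(current_loss, weights)
    # Apply the gradients (this replaces the old optimizer.minimize call).
    optimizer.apply_gradients(zip(grads, weights))
    print(tf.reduce_mean(current_loss))

num_epochs = 10  # illustrative
for epoch in range(num_epochs):
    for images, labels in dataset:
        train_step(model, images, labels)
```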
What’s happening here?
- We declare `tf.GradientTape` and, within its scope, call the `model()` and `loss()` methods. Hence, all the ops in these methods will be differentiated during backpropagation.
- We obtain the gradients using the `tape.gradient` method.
- We optimize all the weights using the `optimizer.apply_gradients` method (earlier we used `optimizer.minimize`, which is still available).
Read more about it from here.
This story was a refresher for TF 1.x developers. I personally faced a number of problems while implementing the code you'll find in the notebook. Feel free to share your doubts and feedback. Happy Machine Learning!
Credit: BecomingHuman By: Shubham Panchal