A number of different optimizers are available in the
tf.keras.optimizers module. Each optimizer is a variant of, or an improvement on, the gradient descent algorithm.
By Imad Dabbura,
By Raimi Karim,
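As a quick illustration, here is a minimal sketch of picking an optimizer from tf.keras.optimizers and compiling a model with it; the toy model, layer sizes and hyperparameter values are arbitrary examples, not recommendations.

```python
import tensorflow as tf

# A toy binary classifier; the layer sizes are arbitrary examples.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Plain SGD, SGD with momentum, and Adam -- all variants of gradient descent.
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
sgd_momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)

# Swap any of the optimizers above into compile() to change the update rule.
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=["accuracy"])
```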
Cost functions ( synonymously called loss functions ) penalize the model for incorrect predictions. They measure how good the model is and determine how much improvement is needed. Different loss functions suit different use cases, which are detailed thoroughly in the stories below,
A great video from Stanford,
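As a rough sketch, here is how a few common Keras losses map onto tasks; the tiny label/prediction values below are made-up toy numbers.

```python
import tensorflow as tf

# Different losses for different use cases.
mse = tf.keras.losses.MeanSquaredError()               # regression
bce = tf.keras.losses.BinaryCrossentropy()             # binary classification
cce = tf.keras.losses.SparseCategoricalCrossentropy()  # multi-class, integer labels

# Toy example: the penalty grows as predictions drift away from the labels.
y_true = [0.0, 1.0, 1.0]
y_pred = [0.1, 0.8, 0.4]
print(bce(y_true, y_pred).numpy())
```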
In practice, the learning rate of an NN model should be decreased over time. As the value of the loss function decreases, or as it approaches a minimum, we take smaller steps. Note that the learning rate decides the step size of gradient descent.
Make sure you explore the Keras docs,
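For instance, Keras ships learning-rate schedules that shrink the step size as training progresses; a minimal sketch (with arbitrary decay values) might look like this.

```python
import tensorflow as tf

# Multiply the learning rate by 0.96 every 10,000 optimizer steps,
# so steps get smaller as we approach a minimum.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=10_000,
    decay_rate=0.96,
)

# The optimizer queries the schedule at every step.
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
```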
Early stopping is a technique wherein we halt training when a given metric stops improving. We thus stop the model before it overfits, avoiding excessive training that can worsen the results.
By Upendra Vijay,
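In Keras, early stopping is available as a callback; a minimal sketch, assuming a compiled model and x_train/y_train already exist, could look like this.

```python
import tensorflow as tf

# Stop training once the validation loss has not improved for 5 epochs,
# and roll the weights back to the best epoch seen.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
)

# `model`, `x_train` and `y_train` are assumed to be defined elsewhere.
model.fit(
    x_train, y_train,
    validation_split=0.2,
    epochs=100,               # an upper bound; training usually stops earlier
    callbacks=[early_stop],
)
```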
Batch size is the number of samples in a mini-batch. Each batch is sent through the NN and the errors are averaged across its samples, so the parameters of the NN are updated once per batch. Also, see Mini-Batch Gradient Descent.
By Kevin Shen,
This could probably clear a common confusion among beginners,
By SAGAR SHARMA,
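As a tiny sketch (again assuming a compiled model and training data already exist), the batch size is just an argument to fit(); with 1,000 samples and a batch size of 32, each epoch performs ceil(1000 / 32) = 32 parameter updates.

```python
# `model`, `x_train` and `y_train` are assumed to be defined elsewhere.
# Each mini-batch of 32 samples produces one averaged gradient and one
# parameter update; an epoch is one full pass over all the batches.
model.fit(x_train, y_train, batch_size=32, epochs=10)
```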
Metrics are functions whose values are evaluated to see how good a model is. The primary difference between a metric and a cost function is that the cost function is used to optimize the parameters of the model so as to reach a minimum, whereas a metric is calculated at each epoch or step only to keep track of the model’s performance.
By Shervin Minaee,
By Aditya Mishra,
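A quick sketch of the difference in Keras: the loss is what gradient descent optimizes, while the metrics are only computed and reported (here assuming `model` is an existing binary classifier).

```python
import tensorflow as tf

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",                    # optimized to update the weights
    metrics=["accuracy", tf.keras.metrics.AUC()],  # monitored each epoch, never optimized
)
```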
Sometimes we may require metrics which aren’t available in the
tf.keras.metrics module. In this case, a custom metric should be implemented, as described in this blog,
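As a hedged example, a custom metric can be written as a plain function of y_true and y_pred and passed to compile() like a built-in one; the F1-style helper below is our own illustration, not part of tf.keras.metrics.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def f1_score(y_true, y_pred):
    # Treat predictions above 0.5 as positive.
    y_true = K.cast(y_true, "float32")
    y_pred = K.round(K.cast(y_pred, "float32"))
    tp = K.sum(y_true * y_pred)
    precision = tp / (K.sum(y_pred) + K.epsilon())
    recall = tp / (K.sum(y_true) + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

# `model` is assumed to be an existing binary classifier.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[f1_score])
```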
Regularization consists of techniques which help prevent our model from overfitting and thereby help it generalize better.
By Richmond Alake,
Excellent videos from Andrew Ng,
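As a minimal sketch, two of the most common regularization techniques in Keras are L2 weight penalties and dropout; the layer sizes and rates below are arbitrary examples.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        64, activation="relu", input_shape=(20,),
        kernel_regularizer=tf.keras.regularizers.l2(0.01),  # penalize large weights
    ),
    tf.keras.layers.Dropout(0.5),  # randomly zero half the activations during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```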
A super big read, right? Did you find some other blog/video/book useful that could be super-cool for others as well? Well, you’re at the right place! Send an email to firstname.lastname@example.org to showcase your resource in this story ( the credits would of course go to you! ).
Goodbye and a happy Deep Learning journey!