
(With Images) Know everything about GANs (Generative Adversarial Network) in depth

November 11, 2020
in Data Science

Let’s understand the GAN (Generative Adversarial Network).


Generative Adversarial Networks were invented in 2014 by Ian Goodfellow (author of one of the best-known deep learning books) and his fellow researchers. The main idea behind a GAN is to have two networks compete against each other to generate new, unseen data (don’t worry, this will become clear shortly). GANs are often described as a counterfeiter versus a detective; let’s build an intuition for how exactly they work.

So we can think of the counterfeiter as the generator.

The generator is going to:

  • Receive random noise, typically sampled from a Gaussian (normal) distribution.
  • Attempt to transform that noise into output data, most often image data.
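As a minimal sketch of that data flow (dimensions and the single-layer network here are hypothetical, chosen to match MNIST-sized images; a real generator would be a trained deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100-dim noise in, 28x28 flattened image out (MNIST-like).
NOISE_DIM, IMG_DIM = 100, 28 * 28

# A one-layer "generator" with random, untrained weights, just to show the flow.
W = rng.normal(0.0, 0.02, size=(NOISE_DIM, IMG_DIM))
b = np.zeros(IMG_DIM)

def generator(z):
    """Map a batch of Gaussian noise vectors to fake 'images' in (-1, 1)."""
    return np.tanh(z @ W + b)

z = rng.normal(size=(16, NOISE_DIM))  # the random Gaussian noise input
fake_images = generator(z)
print(fake_images.shape)  # (16, 784)
```

The key point is only the shape of the mapping: noise vectors go in, image-shaped arrays come out.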

The discriminator (our detective) is going to:

  • Take a dataset consisting of real images from the real dataset and fake images from the generator.
  • Attempt to classify images as real or fake.

Keep in mind that regardless of your source of images, whether it’s MNIST with its 10 classes or anything else, the discriminator itself performs binary classification. It just tries to tell whether an image is real or fake.
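That binary-classification idea can be sketched with a toy one-layer logistic model (again with made-up, untrained weights purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
IMG_DIM = 28 * 28

# A one-layer logistic "discriminator". Whatever the dataset's class count,
# it outputs a single P(real) per image: binary classification.
W = rng.normal(0.0, 0.02, size=(IMG_DIM, 1))
b = np.zeros(1)

def discriminator(images):
    """Probability that each image in the batch is real."""
    logits = images @ W + b
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid

batch = rng.normal(size=(8, IMG_DIM))
p_real = discriminator(batch)
print(p_real.shape)  # (8, 1): one real-vs-fake probability per image
```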

So let’s actually see the process:

We first start with some noise, typically a Gaussian distribution of noise data, and feed it directly into the generator. The goal of the generator is to create images that fool the discriminator.

In the very first stage of training, the generator is just going to produce noise.

And then we also grab images from our real dataset.

And then in PHASE 1, we train the discriminator, labeling fake generated images as zero and real images as one. So basically: zero if you are fake, one if you are real.

We feed that into the discriminator, and the discriminator gets trained to detect real images versus fake images. Then, as time goes on, the generator during the second PHASE of training keeps improving its images, trying to fool the discriminator, until it is hopefully able to generate images that mimic the real dataset so well that the discriminator is no longer able to tell the difference between a fake image and a real image.

So from the above example, we see that there are really two training phases:

  • Phase 1 – Train the Discriminator
  • Phase 2 – Train the Generator

In phase one, we take the real images and label them as one, and combine them with fake images from the generator labeled as zero. The discriminator then trains to distinguish the real images from the fake ones. Keep in mind that in phase one of training, backpropagation only occurs on the discriminator, so we are only optimizing the discriminator’s weights during this phase.
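The phase-one label setup, together with the binary cross-entropy loss the discriminator typically minimizes, can be sketched as follows (the batch size is arbitrary):

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy, the usual discriminator loss."""
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

batch = 32
# Phase-one labels: ones for the real half, zeros for the generated half.
labels = np.concatenate([np.ones(batch), np.zeros(batch)])

# Stand-in predictions from an untrained discriminator: roughly 0.5 everywhere.
preds = np.full(2 * batch, 0.5)
print(round(bce(preds, labels), 4))  # 0.6931, i.e. ln(2): chance-level loss
```

As the discriminator learns, its predictions move toward the labels and this loss drops below the chance-level ln(2).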

Then in phase two, we have the generator produce more fake images, and we feed only these fake images to the discriminator, with all the labels set to real. This causes the generator to attempt to produce images that the discriminator believes to be real. What’s important to note here is that in phase two, because we are feeding in all fake images labeled as 1, we only perform backpropagation on the generator’s weights in this step. So we are not going to be able to do a typical fit call on all the training data as we would with a single model. Since we are dealing with two different models (a discriminator model and a generator model), we also have two different phases of training.
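A toy sketch of the phase-two label trick, using deliberately tiny linear stand-ins for both networks (all names and sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
BATCH, NOISE_DIM, IMG_DIM = 16, 8, 4

def generator(z, g_w):
    """Toy linear generator; g_w are the only weights trained in phase two."""
    return np.tanh(z @ g_w)

def discriminator(x, d_w):
    """Toy logistic discriminator; its weights d_w are frozen in phase two."""
    return 1.0 / (1.0 + np.exp(-(x @ d_w)))

g_w = rng.normal(0.0, 0.1, size=(NOISE_DIM, IMG_DIM))
d_w = rng.normal(0.0, 0.1, size=(IMG_DIM,))

# Phase two: an all-fake batch, but every label is 1 ("real").
z = rng.normal(size=(BATCH, NOISE_DIM))
labels = np.ones(BATCH)

p = discriminator(generator(z, g_w), d_w)
# Generator loss: it wants the frozen discriminator to call its fakes "real".
gen_loss = -np.mean(labels * np.log(p + 1e-12))
print(gen_loss > 0)  # True: only g_w would be updated to reduce this loss
```

In a real framework you would freeze the discriminator’s weights in this phase so that the gradient of this loss flows through the discriminator but only updates the generator.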

What is really interesting here, and something you should always keep in mind, is that the generator itself never actually sees the real images. It learns to generate convincing images only from the gradients flowing back through the discriminator during its phase of training. Also, keep in mind that the discriminator improves as the training phases continue, meaning the generated images will also need to get better and better in order to fool the discriminator.

This can lead to pretty impressive results. Researchers have published many models, such as StyleGAN and face-generating GANs, that produce extremely detailed fake human images; NVIDIA’s face GAN results are a well-known example. Impressive, right?

Now let’s talk about the difficulties with GANs.

Since GANs are most often used with image-based data, and since we have two networks competing against each other, they require GPUs for reasonable training times. Fortunately, we have Google Colab to use GPUs for free.

Often what happens is that the generator figures out just a few images, or even a single image, that can fool the discriminator, and eventually “collapses” to producing only that image. Imagine the face example from before: maybe the generator figured out how to produce one single face that fools the discriminator, and then it just learns to produce that same face over and over again.

So in theory it would be preferable to have a variety of images, such as multiple digits or multiple faces, but GANs can quickly collapse to producing a single digit or face (whatever the dataset happens to contain), regardless of the input noise.

This means you can feed in any random noise you want, but the generator will output the one image it has figured out can fool the discriminator.

This failure is known as mode collapse, and more complex GAN architectures with deeper layers are typically better at avoiding it.

There are a couple of different ways to overcome this problem; one is to use a DCGAN (Deep Convolutional GAN, which I will explain in another blog).

Researchers have also experimented with what’s known as “mini-batch discrimination”, which essentially penalizes generated batches that are all too similar. So if the generator starts to suffer mode collapse and produces batches of very similar-looking images, the discriminator will penalize that particular batch for having images that are all too alike.
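The intuition behind punishing overly similar batches can be sketched with a toy spread-based penalty (this is a simplified stand-in, not the actual mini-batch discrimination layer from the literature):

```python
import numpy as np

def similarity_penalty(batch):
    """Toy stand-in for mini-batch discrimination: the lower the spread
    of images across the batch, the larger the penalty."""
    spread = batch.std(axis=0).mean()  # average per-pixel std over the batch
    return 1.0 / (spread + 1e-8)

rng = np.random.default_rng(3)
diverse = rng.normal(size=(32, 784))                     # varied images
collapsed = np.tile(rng.normal(size=(1, 784)), (32, 1))  # one image repeated

# A collapsed batch is punished far more heavily than a diverse one.
print(similarity_penalty(collapsed) > similarity_penalty(diverse))  # True
```

The real technique computes learned cross-sample similarity features inside the discriminator, but the effect is the same: batches with no variety get flagged.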

It can also be difficult to determine performance and the appropriate number of training epochs, since all the generated images are, at the end of the day, truly fake. It’s hard to tell how well our model is performing at generating images, because the fact that the discriminator thinks something is real doesn’t mean that a human like us will think a face or a digit looks real enough.

And again, due to the design of a GAN, the generator and discriminator are constantly at odds with each other, which leads to performance oscillating between the two.

So while working with GANs, you have to experiment with hyperparameters such as the number of layers, the number of neurons, activation functions, learning rates, and so on, especially when it comes to complex images.

Conclusions

  • GANs are a very popular area of research! The results are often so fascinating and so cool that researchers even build them for fun, so you will see a ton of different reports on all sorts of GANs.
  • I would highly encourage you to do a quick search on Google Scholar for the latest research papers on GANs. Trust me, you will see a paper on this topic every month.
  • I highly recommend you play with GANs, have fun making different things, and show them off on social media.


Credit: Data Science Central By: Sameer Nigam
