AI is truly some incredible technology. This week I was messing around with building generative adversarial networks (GANs) with TensorFlow, and this morning I woke up to some pretty incredible results from models that had trained overnight.
In short, a GAN pits two artificial neural networks against each other: one is a “counterfeiter” and the other is a “detective” (known in AI parlance as the generator and the discriminator, respectively).
The detective network is given a dataset of images and told that its job is to learn the dataset, and then discern whether what is fed into it is a fake from the counterfeiter or a real image from the actual dataset. In this case, I used the MNIST dataset, which contains 60,000 training images of the handwritten digits 0–9. Below is an example of one of the images.
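To make the detective’s job concrete, here is a minimal sketch of the standard GAN discriminator loss. This is a toy illustration (not my actual TensorFlow code): `d_real` and `d_fake` stand in for the detective’s probability estimates that a real image and a counterfeit are genuine.

```python
import math

def discriminator_loss(d_real, d_fake):
    # Standard GAN discriminator loss: low when the detective is
    # confident the real image is real (d_real near 1) and the
    # counterfeit is fake (d_fake near 0).
    return -(math.log(d_real) + math.log(1.0 - d_fake))

# A sharp detective: spots the fake -> low loss.
sharp = discriminator_loss(0.95, 0.05)
# A fooled detective: believes the fake is real -> high loss.
fooled = discriminator_loss(0.95, 0.95)
```

The counterfeiter is trained to push `d_fake` toward 1, which is exactly what drives the detective’s loss up, hence the competition.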
The counterfeiter is fed complete noise at the start of training and the goal is for the counterfeiter to learn how to change the noise into a picture that will fool the detective network. Below is a picture of the original noise given to the counterfeiter at the start of training.
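The starting noise itself is nothing special. A toy sketch of generating it (image dimensions match MNIST’s 28×28 grayscale format; in a real GAN the generator is usually fed a smaller latent noise vector rather than a full noise image, but the idea is the same):

```python
import random

def make_noise_image(width=28, height=28, seed=0):
    # MNIST images are 28x28 grayscale; the counterfeiter starts
    # from pure random noise, here uniform values in [0, 1).
    rng = random.Random(seed)
    return [[rng.random() for _ in range(width)] for _ in range(height)]

noise = make_noise_image()
```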
After each training loop, the counterfeiter gets feedback from the detective network as to whether it’s “hotter” or “colder” in its ability to trick the detective.
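That “hotter/colder” feedback loop can be sketched with a toy stand-in (this is not the real gradient-based GAN update, just an illustration of the dynamic): the counterfeiter holds a single number, the detective scores candidates by how close they look to the real value, and the counterfeiter keeps whichever small nudge the detective rates as warmer.

```python
import random

def train_toy(steps=200, real_value=7.0, step_size=0.1, seed=0):
    # Toy feedback loop: the "detective" scores a sample by closeness
    # to the real data (higher = more real-looking), and the
    # "counterfeiter" moves in whichever direction scores warmer.
    rng = random.Random(seed)
    guess = rng.uniform(-10, 10)
    detective = lambda x: -abs(x - real_value)
    for _ in range(steps):
        up, down = guess + step_size, guess - step_size
        guess = up if detective(up) > detective(down) else down
    return guess

final = train_toy()
```

In an actual GAN the detective’s feedback arrives as gradients flowing back through both networks, but the spirit is the same: many tiny corrections, each one only saying “warmer” or “colder.”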
This morning, with both networks still training, I checked in to see how the counterfeiter was progressing.
After about 8.5 hours of training on a TPU in Google Cloud, the counterfeiter is starting to turn the noise into what clearly appear to be actual numbers.
That’s pretty incredible. Imagine learning how to write numbers as a child solely based on your teacher telling you after each attempt “That’s wrong, but you’re getting warmer!”
You’d probably have given up after 15 minutes. AI just keeps plowing through. 8.5 hours later, it’s learned how to write numbers correctly from practically nothing.
This is the future, right here.