Credit: BecomingHuman

If you are a beginner in deep learning, I suggest you check out these blogs:

- **Intro to Deep Learning with PyTorch**
- **DataCamp's free Deep Learning tutorial**

The portrait, offered for sale by Christie's in New York from Oct 23 to 25, was created with an AI algorithm called a GAN (Generative Adversarial Network) by the Paris-based collective Obvious, whose members include Hugo Caselles-Dupre, Pierre Fautrel, and Gauthier Vernier. The work was estimated to fetch $7,000 to $10,000, according to the auction house.

To everyone's surprise, it was sold at auction for 4,300% of its estimated price to an anonymous telephone bidder. It was the first auction of an AI-generated portrait.


The AI-generated "Portrait of Edmond Belamy" depicts a slightly blurry, chubby man in a dark frock coat and white collar. His off-centre position leaves enough white space to show the artist's signature: "min G max D Ex[log(D(x))] + Ez[log(1 - D(G(z)))]", a section of the algorithm's code that is the loss function of the original GAN model.
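That signature is the value function from the original GAN paper (Goodfellow et al., 2014); rendered cleanly, it reads:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the discriminator's estimate of the probability that x came from the real data, and G(z) maps a noise vector z to a generated sample: D is trained to maximize the value, while G is trained to minimize it.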

They also added: "We chose the name 'Belamy' as a reference to Ian Goodfellow, the creator of GANs, whose surname roughly translates to 'Bel ami' in French."

Ta-da! Story time.

One night in 2014, Ian Goodfellow went drinking to celebrate with a fellow doctoral student who had just graduated. At Les 3 Brasseurs (The Three Brewers), a favorite Montreal watering hole, some friends asked for his help with a thorny project they were working on: a computer that could create photos by itself.

Researchers were already using neural networks, algorithms loosely modeled on the web of neurons in the human brain, as “generative” models to create plausible new data of their own. But the results were often not very good: images of a computer-generated face tended to be blurry or have errors like missing ears. The plan Goodfellow’s friends were proposing was to use a complex statistical analysis of the elements that make up a photograph to help machines come up with images by themselves. This would have required a massive amount of number-crunching, and Goodfellow told them it simply wasn’t going to work.

But as he pondered the problem over his beer, he hit on an idea. What if you pitted two neural networks against each other? His friends were skeptical, so once he got home, where his girlfriend was already fast asleep, he decided to give it a try. Goodfellow coded into the early hours and then tested his software. It worked the first time.

What he invented that night is now called a GAN, or “generative adversarial network.” The technique has sparked huge excitement in the field of machine learning and turned its creator into an AI celebrity.
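The adversarial game can be demonstrated on a toy problem. The sketch below is a minimal NumPy illustration (not Goodfellow's code; the linear generator, logistic discriminator, and all parameter values are choices made here for simplicity): a generator learns to mimic samples from N(4, 1) purely by trying to fool a discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real data comes from N(4, 1). The generator maps noise z ~ N(0, 1) to w*z + c,
# and the discriminator is a logistic classifier D(x) = sigmoid(a*x + b).
a, b = 0.1, 0.0   # discriminator parameters
w, c = 1.0, 0.0   # generator parameters
lr = 0.03

for step in range(5000):
    x_real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    x_fake = w * z + c

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (the non-saturating variant).
    d_fake = sigmoid(a * x_fake + b)
    w += lr * np.mean((1 - d_fake) * a * z)
    c += lr * np.mean((1 - d_fake) * a)

# Since E[z] = 0, the generator's output mean is c, which drifts toward the
# real mean of 4 as the two players push against each other.
print(c)
```

Neither network ever sees a "correct" image: the generator improves only because the discriminator keeps finding the difference between real and fake.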

Mario Klingemann is considered a pioneer in the field of neural networks, computer learning, and AI art. Klingemann agrees that the portrait sold at Christie's doesn't necessarily represent anything new or groundbreaking. "In the case of Obvious, for me, that is kind of the most basic way of doing it. Throw a few thousand paintings in a folder, take default parameters, train it, have it generate lots of variations, and then pick the ones that for whatever reason you think are shareable with the world," he says. "In this case, it's more like an Instagram filter."

Explaining GANs, Klingemann is keen to stress that “these models do not have any intention to be creative”. Amongst the community of coders and artists who use GANs for creative purposes, they’re simply a tool or a medium. “When you work with oil paint, you work with your brushes and your media and in some sense, the pigments and the material still have their own behavior”, Klingemann explains.

The more measured among the critics also point out that Obvious, the trio responsible for the portrait, used pre-existing code written by 19-year-old Robbie Barratt, which was available for download on the hosting website **GitHub**.

On the other hand, here is an excerpt from Andrew Ng's **Generative Learning Algorithms** lecture notes:

Algorithms that try to learn p(y|x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model p(x|y) (and p(y)). These algorithms are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features. After modeling p(y) (called the class priors) and p(x|y), our algorithm can then use Bayes rule to derive the posterior distribution on y given x:

p(y|x) = p(x|y) p(y) / p(x)

Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y = 0)p(y = 0) (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities p(x|y) and p(y) that we've learned. Actually, if we're calculating p(y|x) in order to make a prediction, then we don't actually need to calculate the denominator, since

arg max_y p(y|x) = arg max_y p(x|y)p(y) / p(x) = arg max_y p(x|y)p(y).
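The dog-vs-elephant example above can be sketched in a few lines of NumPy. This is a toy illustration (the 1-D "weight" feature and Gaussian class-conditionals are assumptions made here, not part of Ng's notes): fit p(y) and p(x|y), then classify with Bayes rule.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy training data: a single feature (body weight in kg), with one
# class-conditional Gaussian per class.
x_dog = rng.normal(30.0, 5.0, size=200)      # y = 0 (dogs)
x_ele = rng.normal(3000.0, 400.0, size=100)  # y = 1 (elephants)
x = np.concatenate([x_dog, x_ele])
y = np.concatenate([np.zeros(200), np.ones(100)])

# Fit the class priors p(y) and the class-conditionals p(x|y).
priors = np.array([np.mean(y == 0), np.mean(y == 1)])
mus = np.array([x[y == 0].mean(), x[y == 1].mean()])
sigmas = np.array([x[y == 0].std(), x[y == 1].std()])

def posterior(x_new):
    """p(y|x) via Bayes rule: p(x|y) p(y) / sum over y' of p(x|y') p(y')."""
    lik = np.exp(-0.5 * ((x_new - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    joint = lik * priors          # numerator p(x|y) p(y) for each class
    return joint / joint.sum()    # denominator p(x) normalizes it

print(posterior(25.0).argmax())    # -> 0 (dog)
print(posterior(2800.0).argmax())  # -> 1 (elephant)
```

Note that `argmax` over the unnormalized `joint` would give the same predictions, which is exactly the point about not needing the denominator.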

### GAN Use Cases

For more use cases and applications, please check out this **link**, with more details and explanations, written by deep learning practitioner Jonathan Hui.

Every week, new papers on Generative Adversarial Networks (GAN) are coming out and it’s hard to keep track of them all, not to mention the incredibly creative ways in which researchers are naming these GANs!

So, here's a current and frequently updated **list**, where you'll find nearly a thousand of them.

As I am new to deep learning, and especially to GANs, this is all I have been able to understand so far. If anyone is interested in implementing GANs, please check out these repositories:

- Generative Adversarial Networks implemented in PyTorch and TensorFlow
- Deep Convolutional Generative Adversarial Networks (DCGAN)
- Image-to-image translation in Python
- CycleGANs
- TL-GANs

TL-GAN uses supervised learning to illuminate the latent space of a GAN for controlled generation and editing:

**TL-GAN: transparent latent-space GAN**

The street scene below is generated from a segmentation map. The map can be derived from real scenes or video game scenes, or created from your imagination.

Check out the AI Art Gallery from the recent **NeurIPS 2018 Workshop on Machine Learning for Creativity and Design**.

In December 2018, NVIDIA released a style-based generator architecture for Generative Adversarial Networks (StyleGAN).

Paper here →**link**

Video below:

Check out the tutorials and links here:

- **CVPR Tutorial on GANs**
- Deep Learning in Python - DataCamp
- Deep Learning - Spring 2019
- Fast.ai
- OpenAI
- Kadenze course
- deeplearning.ai

**People to follow**:

- Robbie Barratt
- Shakir Mohamed
- Hardmaru
- Yann LeCun
- Mario Klingemann
- Sebastian Raschka
- Rachel Thomas
- Soumith Chintala
- Jeremy Howard
- Ian Goodfellow
- Francois Chollet
- Andrew Ng
- Andrej Karpathy
- Pieter Abbeel

Thank you to everyone who gave me ideas and supported me in writing and understanding these concepts: Rohan Dhupar, Prateek Ralhan, Mohammad Shahebaz, and Akshay Bahadur. Thank you, guys.

Thanks for reading this post until the end. I'm really glad to find people who are as motivated as I am about artificial intelligence and deep learning.

**References**:

- Ian Goodfellow's original GAN paper
- Al Gharakhanian's article
- Technology Review article
- "Generative Learning Algorithms", Andrew Ng's Stanford notes
- Math behind GANs
- Papers with Code

Please feel free to connect and talk to me on LinkedIn.


By: Purnasai Gudikandula