The Rise of Generative Adversarial Networks

Kailash Ahirwar
Mar 28, 2019 · 10 min read
Credit: https://en.wikipedia.org/wiki/Edmond_de_Belamy

Five years ago, Generative Adversarial Networks (GANs) started a revolution in deep learning, and that revolution has produced some major technological breakthroughs. Generative Adversarial Networks were introduced by Ian Goodfellow and others in the paper titled “Generative Adversarial Networks” — https://arxiv.org/abs/1406.2661. Academia welcomed GANs with open arms, and industry greeted them with much fanfare too. The rise of GANs was inevitable.

First, the best thing about GANs is that their learning is unsupervised. GANs don’t need labeled data, which makes them powerful because the tedious work of data labeling is not required.

Second, the potential use cases of GANs have put them at the center of conversations. They can generate high-quality images, enhance photos, generate images from text, translate images from one domain to another, age a face in an image, and much more. The list is endless. We will cover some of the most popular GAN architectures in this article.

Third, the constant stream of research around GANs is so mesmerizing that it grabs the attention of nearly every industry. We will talk about the major technological breakthroughs in a later section of this article.

The Birth

A Generative Adversarial Network, or GAN for short, is a setup of two networks: a generator network and a discriminator network. These two networks can be any kind of neural network, from convolutional neural networks and recurrent neural networks to autoencoders. In this setup, the two networks are engaged in a competitive game, each trying to outdo the other while simultaneously pushing the other to improve at its own task. After thousands of iterations, if everything goes well, the generator network becomes very good at producing realistic-looking fake images, and the discriminator network becomes very good at telling whether the image shown to it is fake or real. In other words, the generator network transforms a random noise vector from a latent space (not all GANs sample from a latent space) into a sample that resembles the real data set. Training a GAN is a very intuitive process: we train both networks simultaneously, and they both get better with time.
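To make the two-network game concrete, here is a minimal training sketch. The framework (PyTorch), network sizes, optimizer settings, and data dimensions are all illustrative assumptions on my part; neither this article nor the original paper prescribes them.

```python
# Minimal GAN sketch (illustrative only): a generator maps latent noise to fake
# samples, a discriminator scores samples as real or fake, and the two are
# trained against each other in alternating steps.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed sizes, e.g. flattened 28x28 images

# Generator: transforms a random noise vector from the latent space into a sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that its input came from the real data.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()  # don't update the generator here
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into predicting "real".
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Calling train_step on batches of real data over many iterations plays out the competitive game described above: the discriminator step sharpens its ability to spot fakes, and the generator step nudges the generator toward samples the discriminator can no longer distinguish from real data.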

GANs have plenty of real-world use cases like image generation, artwork generation…
