In this post, I introduce a simple implementation of PGGAN in TensorFlow 2.0.


The key idea of PGGAN is to grow the generator and discriminator progressively. This approach speeds up training, makes learning much more stable, and produces high-quality images.

Progressive Growing of GANs

Existing methods learn image features at all resolutions at the same time, but this paper proposes a useful alternative for generating high-resolution images: start at a low resolution of 4x4 so that the large-scale structure is learned first, then gradually increase the resolution to 1024x1024 while learning finer-scale details.


In order to double the resolution of…
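In the paper, each resolution-doubling step fades the new, higher-resolution block in smoothly: its output is blended with the nearest-neighbour-upsampled output of the old path, with a weight alpha that grows from 0 to 1 during the transition. A minimal NumPy sketch of that blend (the function name and toy values are my own, not the paper's code):

```python
import numpy as np

def fade_in(upsampled_old, new_block_out, alpha):
    """Blend the upsampled output of the old low-resolution path with
    the new block's output. alpha grows linearly from 0 to 1 while the
    new layers are being faded in."""
    return (1.0 - alpha) * upsampled_old + alpha * new_block_out

# toy example: two 8x8 "images" a quarter of the way into the transition
old = np.zeros((8, 8))   # stand-in for the upsampled old output
new = np.ones((8, 8))    # stand-in for the new block's output
blended = fade_in(old, new, alpha=0.25)
```

At alpha = 0 the network behaves exactly like the old, smaller network; at alpha = 1 the new block has fully taken over.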

In this post I will show an example of how to upgrade legacy TensorFlow code to the new style. TensorFlow 2.0 recommends Keras, a well-known high-level API. Its sequential style makes your code simpler, and since the script executes eagerly, line by line, the working process is easy to follow.
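As an illustration of that sequential style, here is a hypothetical small discriminator written with `tf.keras.Sequential`; the layer sizes are my own placeholders, not the ones in the BEGAN repository:

```python
import tensorflow as tf

# A hypothetical discriminator in the Keras Sequential style.
# The filter counts and input size are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same",
                           activation="elu", input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(128, 3, strides=2, padding="same",
                           activation="elu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64),
])
model.summary()  # prints the layer-by-layer structure
```

Compared with building a graph of ops by hand in TensorFlow 1.x, the list of layers reads top to bottom in the same order the data flows.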

Let me walk through a guide based on my BEGAN repository.

1. Prepare a dataset

To prepare the dataset, in the previous version I made an ImageIterator class that provides preprocessing and an iterator. It is not particularly hard, since we can find a helpful guide on the TensorFlow homepage…
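As a rough sketch of the interface such a class exposes (the internals below are placeholders of my own, not the repository's actual code), it is constructed from the data location and batch settings and returns an iterator plus the image count:

```python
import numpy as np

class ImageIterator:
    """Sketch of an iterator class that yields training batches.
    The body is a stand-in: a real version would scan data_root for
    image files and preprocess them."""

    def __init__(self, data_root, batch_size, image_size, channels):
        # placeholder for loading/decoding images found under data_root
        self.images = np.zeros((10, image_size, image_size, channels))
        self.batch_size = batch_size

    def get_iterator(self):
        def gen():
            while True:  # loop over the dataset indefinitely
                for i in range(0, len(self.images), self.batch_size):
                    yield self.images[i:i + self.batch_size]
        return gen(), len(self.images)

iterator, image_count = ImageIterator("data_root", 2, 64, 3).get_iterator()
batch_x = next(iterator)  # one training batch
```

The `iterator, image_count = ...get_iterator()` call mirrors how the class is used in the training loop.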

In this post, I’ll introduce techniques for reconstructing a 3D face from a photo. Creating a virtual character from a picture is familiar these days; you may have already seen it in applications such as Memoji and AR Emoji, as shown in the figure below.

An example of creating an emoji from a photo (AR Emoji).

Of course, these applications are interesting, but this time I want to talk about creating realistic 3D faces. In the future, you may see these techniques when you create your own avatar in a realistic video game series like The Elder Scrolls or FIFA.

I’ve covered GAN and DCGAN in past posts. In 2017, Google published a great paper titled “BEGAN: Boundary Equilibrium Generative Adversarial Network”. “BEGAN”: what a nice name, isn’t it? The results are great, too: the generated face images look like images from the training dataset.

BEGAN result images, 128x128.

The paper makes the following contributions:

  • A GAN with a simple yet robust architecture and a standard training procedure with fast and stable convergence.
  • An equilibrium concept that balances the power of the discriminator against the generator.
  • A new way to control the trade-off between image diversity and visual quality.
  • An approximate measure of convergence. …
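The equilibrium and convergence ideas from that list can be sketched numerically. The update rules below follow the BEGAN paper (discriminator loss balanced by a term k_t, with gamma controlling the diversity/quality trade-off); the loss values and hyperparameters in the example are illustrative:

```python
import numpy as np

gamma = 0.5        # diversity ratio from the paper
lambda_k = 0.001   # learning rate for the equilibrium term

def began_step(loss_real, loss_fake, k_t):
    """One BEGAN equilibrium update.
    loss_real / loss_fake are the autoencoder reconstruction losses
    on real and generated images."""
    d_loss = loss_real - k_t * loss_fake       # discriminator objective
    g_loss = loss_fake                         # generator objective
    balance = gamma * loss_real - loss_fake    # how far from equilibrium
    k_t = float(np.clip(k_t + lambda_k * balance, 0.0, 1.0))
    m_global = loss_real + abs(balance)        # approximate convergence measure
    return d_loss, g_loss, k_t, m_global

d_loss, g_loss, k_t, m_global = began_step(0.8, 0.3, 0.0)
```

A falling `m_global` over training indicates the model is converging, which is the "approximate measure of convergence" the paper advertises.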


A dataset is needed to train the model. In the last article, we covered the model for generating faces. I used the CelebA dataset (link), which has about 200k portrait photos, to train the model.

In a GAN model, we generally use the dataset as ground truth. We fed “batch_x” to train the discriminator in the code above. How do we get “batch_x” from the images?

iterator, image_count = ImageIterator(data_root, batch_size, model.image_size, model.image_channels).get_iterator()

I made the “ImageIterator” class to get “batch_x” as an iterable.

The functions preprocess_image and load_and_preprocess_image read an image file and perform processing such as cropping and normalizing. If needed, we can add other processing methods: flipping…
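A minimal sketch of such a preprocessing step inside a tf.data pipeline is shown below. The target size and normalization range are assumptions, and in-memory random images stand in for files read from data_root:

```python
import tensorflow as tf

def preprocess_image(image, size=64):
    """Resize and normalize one image (crop size and [0, 1] range
    are assumed here, the repository's values may differ)."""
    image = tf.image.resize(image, [size, size])
    # augmentation such as tf.image.random_flip_left_right could go here
    return tf.cast(image, tf.float32) / 255.0  # scale to [0, 1]

# Stand-in for images decoded from files under data_root.
images = tf.random.uniform([8, 128, 128, 3], maxval=255)
dataset = (tf.data.Dataset.from_tensor_slices(images)
           .map(preprocess_image)
           .shuffle(8)
           .batch(4))
batch_x = next(iter(dataset))  # one preprocessed training batch
```

In the real pipeline, load_and_preprocess_image would additionally call `tf.io.read_file` and an image decoder before this resizing step.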

Since Ian Goodfellow’s paper, GANs have been applied to many fields, but their instability has always caused problems. A GAN has to solve a minimax (saddle point) problem, so this instability is intrinsic.
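That minimax problem is the value function from Goodfellow’s original paper, which the generator G tries to minimize while the discriminator D tries to maximize:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Because one player’s gain is the other’s loss, training seeks a saddle point rather than a minimum, which is why optimization can oscillate or diverge.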

A funny image of a saddle point.

Many researchers have attempted to solve this dilemma of GANs through various approaches. Among them, DCGAN has shown remarkable results: it proposes a stable GAN network structure. If you design the model according to the guidelines of the paper, you can see that it trains stably.

Architecture guidelines for stable Deep Convolutional GANs

  • Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
  • Use…
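Following the first of those guidelines, a generator upsamples with fractional-strided (transposed) convolutions instead of pooling or plain upsampling. A sketch of such a generator in Keras (the filter counts, output size, and use of batch normalization with ReLU are illustrative choices in the spirit of the paper, not its exact architecture):

```python
import tensorflow as tf

# DCGAN-style generator: each Conv2DTranspose with stride 2 doubles
# the spatial resolution (4 -> 8 -> 16 -> 32); tanh output.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(4 * 4 * 256, use_bias=False, input_shape=(100,)),
    tf.keras.layers.Reshape((4, 4, 256)),
    tf.keras.layers.Conv2DTranspose(128, 4, strides=2, padding="same",
                                    use_bias=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same",
                                    use_bias=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                    activation="tanh"),
])
```

Because the upsampling is done by learned strided convolutions, the generator itself learns how to fill in spatial detail at each scale.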

Machine learning is generally classified into three types: supervised learning, unsupervised learning, and reinforcement learning.

Categories of machine learning.

Understanding objects is the ultimate goal of supervised and unsupervised learning. We can classify an image using a well-trained discriminator model based on the data, and we can create a sample image using a well-trained generator model.

What I cannot create, I do not understand.
-Richard Feynman

In other words, if I truly understand something, I should be able to create it. But he also said, “What does it mean, to understand? … I don’t know.” Understanding objects is such a difficult task.

Ian Goodfellow introduced GANs (Generative Adversarial Networks) as…

moonhwan Jeong
