A Style-Based Generator Architecture for Generative Adversarial Networks
Metadata
- Authors: Tero Karras, Samuli Laine, Timo Aila
- Organization: NVIDIA
- Conference: CVPR 2019
- Paper: https://arxiv.org/pdf/1812.04948.pdf
- Code: https://github.com/NVlabs/stylegan
TL;DR
- Propose a style-based generator architecture for GANs, built around adaptive instance normalization (AdaIN).
- The new architecture is orthogonal to the choice of GAN loss function, regularization, and hyperparameters.
- The style-based generator makes it possible to control image synthesis via scale-specific modifications to the styles (see the sketch after this list).
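As a rough, self-contained illustration of what "scale-specific" control means, the toy sketch below mixes styles from two latent codes at a crossover layer. The `mapping`, `layer_style`, and synthesis loop here are simplified stand-ins for illustration only, not the actual StyleGAN architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping(z):
    # Stand-in for the mapping network f: z -> w (a learned MLP in the paper).
    return np.tanh(z)

def layer_style(w, layer):
    # Stand-in for the learned affine transform that turns w into a
    # per-layer (scale, bias) style.
    return 1.0 + 0.1 * w[layer], 0.1 * w[layer + 1]

def synthesize(w_coarse, w_fine, crossover=4, num_layers=8):
    x = np.zeros((4, 4))
    for layer in range(num_layers):
        # Layers before the crossover take their style from one latent,
        # later layers from the other, so each latent controls a
        # different range of scales in the output.
        w = w_coarse if layer < crossover else w_fine
        scale, bias = layer_style(w, layer)
        x = scale * np.tanh(x + rng.normal(size=x.shape)) + bias
    return x

img = synthesize(mapping(rng.normal(size=16)), mapping(rng.normal(size=16)))
```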
To see why each style's effect is localized in the network, consider how the AdaIN operation first normalizes each channel to zero mean and unit variance, and only then applies scales and biases based on the style. The new per-channel statistics, as dictated by the style, modify the relative importance of features for the subsequent convolution operation, but they do not depend on the original statistics because of the normalization. Thus each style controls only one convolution before being overridden by the next AdaIN operation.
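A minimal NumPy sketch of the AdaIN operation described above; the style scale and bias would come from a learned affine transform of w, which is not shown here.

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-8):
    """Adaptive instance normalization: x has shape (N, C, H, W);
    style_scale and style_bias have shape (N, C)."""
    mean = x.mean(axis=(2, 3), keepdims=True)      # per-sample, per-channel mean
    std = x.std(axis=(2, 3), keepdims=True) + eps  # per-sample, per-channel std
    x_norm = (x - mean) / std                      # zero mean, unit variance
    # The scale and bias come only from the style; the normalization has
    # already discarded the feature map's original statistics.
    return style_scale[:, :, None, None] * x_norm + style_bias[:, :, None, None]

# Tiny usage example with random data.
x = np.random.randn(2, 3, 8, 8)
out = adain(x, np.random.randn(2, 3), np.random.randn(2, 3))
```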