Supplementary code for "When, Why, and Which Pretrained GANs Are Useful?" (ICLR'22)
When, Why, and Which Pretrained GANs Are Useful?
This repository contains supplementary code for the ICLR'22 paper When, Why, and Which Pretrained GANs Are Useful? by Timofey Grigoryev*, Andrey Voynov*, and Artem Babenko.
TL;DR:
The paper aims to dissect the process of GAN finetuning. The take-aways:
- Initializing GAN training from a pretrained checkpoint primarily affects the model's coverage rather than the fidelity of individual samples;
- Measuring the recall between the source and target datasets is a good recipe for choosing an appropriate GAN checkpoint for finetuning;
- For most target tasks, an Imagenet-pretrained GAN, despite its poor visual quality, is an excellent starting point for finetuning.
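To make the second take-away concrete, below is a minimal pure-Python sketch of a k-NN-based recall estimate between two sample sets. This is an illustrative toy, not the paper's exact protocol: the 2-D points, the choice of `k`, and the absence of a deep feature extractor are all assumptions for the sake of a self-contained example.

```python
import math

def euclidean(a, b):
    # Plain Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_radius(points, i, k):
    # Distance from points[i] to its k-th nearest neighbour among the
    # other points; defines the "ball" that point contributes to the manifold.
    dists = sorted(euclidean(points[i], p)
                   for j, p in enumerate(points) if j != i)
    return dists[k - 1]

def recall(real_feats, fake_feats, k=2):
    # A real sample counts as covered if it lies inside the k-NN ball of
    # at least one fake sample; recall is the covered fraction of real samples.
    radii = [knn_radius(fake_feats, i, k) for i in range(len(fake_feats))]
    covered = sum(
        1 for r in real_feats
        if any(euclidean(r, f) <= rad for f, rad in zip(fake_feats, radii))
    )
    return covered / len(real_feats)
```

In this picture, a source checkpoint whose samples cover the target data (high recall) is a promising initialization, regardless of how polished its individual samples look.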
Code
Here we release StyleGAN-ADA Imagenet checkpoints at several resolutions, which commonly serve as a superior model initialization. These checkpoints are compatible with the official StyleGAN-ADA repository.
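For instance, assuming the official stylegan2-ada-pytorch training script, finetuning can be started from one of these checkpoints via its `--resume` flag. The checkpoint filename and dataset path below are placeholders, not files shipped with this repository:

```shell
# Finetune on a target dataset, initializing from an
# Imagenet-pretrained checkpoint (paths are placeholders).
python train.py --outdir=runs \
    --data=path/to/target_dataset.zip \
    --resume=imagenet_res256.pkl \
    --gpus=1
```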
We also release the GAN-transfer playground code.
Citation
@misc{www_gan_transfer_iclr22,
  title={When, Why, and Which Pretrained GANs Are Useful?},
  author={Timofey Grigoryev and Andrey Voynov and Artem Babenko},
  year={2022},
  eprint={2202.08937},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}