IrwGAN
Official PyTorch implementation of IrwGAN for unaligned image-to-image translation
IrwGAN (ICCV2021)
Unaligned Image-to-Image Translation by Learning to Reweight
[Update] 12/15/2021: All datasets are released; trained models and generated images of IrwGAN are released.
[Update] 11/16/2021: Code is pushed; the selfie2anime-danbooru dataset is released.
Datasets
selfie2anime-danbooru | selfie-horse2zebra-dog | horse-cat2dog-anime | beetle-tiger2lion-sealion
Trained Models and Generated Images
- selfie2anime-danbooru IrwGAN | [Baseline] | [CycleGAN] | [MUNIT] | [GcGAN] | [NICE-GAN]
- selfie-horse2zebra-dog IrwGAN | [Baseline] | [CycleGAN] | [MUNIT] | [GcGAN] | [NICE-GAN]
- horse-cat2dog-anime IrwGAN | [Baseline] | [CycleGAN] | [MUNIT] | [GcGAN] | [NICE-GAN]
- beetle-tiger2lion-sealion IrwGAN | [Baseline] | [CycleGAN] | [MUNIT] | [GcGAN] | [NICE-GAN]
Basic Usage
- Training:
python main.py --dataroot=datasets/selfie2anime-danbooru
- Resume:
python main.py --dataroot=datasets/selfie2anime-danbooru --phase=resume
- Test:
python main.py --dataroot=datasets/selfie2anime-danbooru --phase=test
- Beta Mode
Use `--beta_mode=A` if domain A is unaligned, `--beta_mode=B` if domain B is unaligned, and `--beta_mode=AB` if both domains are unaligned.
- Effective Sample Size
`lambda_nos_A` and `lambda_nos_B` control how many samples are selected: the higher the weight, the more samples are selected. We use 1.0 across all experiments. A rough illustrative sketch of how these weights act is given after this list.
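The `--beta_mode` and `lambda_nos_*` options both concern the learned per-sample weights that downweight unaligned images. The snippet below is a minimal sketch, not the repository's actual code: it illustrates how per-sample weights can reweight a loss and how an effective-sample-size style term, scaled by a `lambda_nos`-like coefficient, can encourage more samples to stay selected. All function and variable names here are hypothetical.

```python
# Illustrative sketch only -- not the repository's implementation.
import torch

def reweighted_loss(per_sample_loss, beta):
    """Weight each sample's loss by its learned importance weight beta (shape: (N,))."""
    return (beta * per_sample_loss).mean()

def effective_sample_size(beta):
    """Kish effective sample size: large when weights are spread evenly,
    small when a few samples dominate."""
    return beta.sum() ** 2 / (beta ** 2).sum()

def selection_penalty(beta, lambda_nos=1.0):
    """A lambda_nos-style term: a higher coefficient pushes the effective
    sample size up, i.e. more samples keep non-negligible weight."""
    n = beta.numel()
    return -lambda_nos * effective_sample_size(beta) / n

# Toy usage: weights over a batch of 8 unaligned-domain samples, averaging to 1.
beta = torch.softmax(torch.randn(8), dim=0) * 8
per_sample_loss = torch.rand(8)
total = reweighted_loss(per_sample_loss, beta) + selection_penalty(beta, lambda_nos=1.0)
```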
Example Results
Citation
If you use this code for your research, please cite our paper:
@inproceedings{xie2021unaligned,
title={Unaligned Image-to-Image Translation by Learning to Reweight},
author={Xie, Shaoan and Gong, Mingming and Xu, Yanwu and Zhang, Kun},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={14174--14184},
year={2021}
}