pix2pix.pytorch
A PyTorch implementation of "Image-to-Image Translation with Conditional Adversarial Networks"
Image-to-Image Translation with Conditional Adversarial Networks
Install
- Install PyTorch and torchvision
Datasets
- Download the images from the author's implementation
- Suppose you downloaded the "facades" dataset to /path/to/facades
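In the released pix2pix datasets, each training file stores the two image domains side by side in a single image, and the loader splits it down the middle. The sketch below illustrates that split with a synthetic numpy array; it is a minimal illustration, not this repository's actual data loader:

```python
import numpy as np

def split_pair(pair):
    """Split a side-by-side paired image into its left and right halves."""
    h, w, c = pair.shape
    assert w % 2 == 0, "paired image width must be even"
    return pair[:, : w // 2], pair[:, w // 2 :]

# Synthetic stand-in for a 256x512 paired training image
pair = np.zeros((256, 512, 3), dtype=np.uint8)
pair[:, 256:] = 255  # make the two halves distinguishable
a, b = split_pair(pair)
print(a.shape, b.shape)  # (256, 256, 3) (256, 256, 3)
```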
Train with facades dataset (mode: B2A)
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --mode B2A --exp ./facades --display 5 --evalIter 500
- The resulting models are saved in the ./facades directory, named like net[D|G]_epoch_xx.pth
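Checkpoints follow the net[D|G]_epoch_xx.pth naming above, so after a long run you can pick the newest generator file by parsing the epoch number. A small sketch (the directory listing and the `latest_checkpoint` helper are hypothetical, for illustration only):

```python
import re

# Hypothetical directory listing following the net[D|G]_epoch_xx.pth scheme
files = ["netD_epoch_3.pth", "netG_epoch_3.pth", "netG_epoch_12.pth"]

def latest_checkpoint(names, prefix="netG"):
    """Return the highest-epoch checkpoint for the given network prefix."""
    pattern = re.compile(rf"{prefix}_epoch_(\d+)\.pth")
    matches = [(int(m.group(1)), n)
               for n in names if (m := pattern.fullmatch(n))]
    return max(matches)[1]

print(latest_checkpoint(files))  # netG_epoch_12.pth
```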
Train with edges2shoes dataset (mode: A2B)
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/edges2shoes/train --valDataroot /path/to/edges2shoes/val --mode A2B --exp ./edges2shoes --batchSize 4 --display 5
Results
- Randomly selected input samples
- Corresponding real target samples
- Corresponding generated samples
Note
- We modified pytorch.vision's folder and transform.py so as to follow the format of the training images in the datasets
- Most of the hyperparameters are the same as in the paper.
- You can easily reproduce the paper's results with the other datasets
- Try B2A or A2B translation as needed
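The --mode flag only decides which half of a paired image is the input and which is the target (e.g. for facades, B2A generates photos from label maps). A hedged sketch of that choice; `make_example` is a hypothetical helper, not code from this repository:

```python
import numpy as np

def make_example(pair, mode="B2A"):
    """Return (input, target) for a side-by-side paired image.

    mode="A2B" translates the left half (A) into the right half (B);
    mode="B2A" is the reverse.
    """
    h, w, _ = pair.shape
    a, b = pair[:, : w // 2], pair[:, w // 2 :]
    return (a, b) if mode == "A2B" else (b, a)

# Synthetic pair: A half black, B half white, so the direction is visible
pair = np.zeros((256, 512, 3), dtype=np.uint8)
pair[:, 256:] = 255
inp, tgt = make_example(pair, mode="B2A")
print(inp[0, 0, 0], tgt[0, 0, 0])  # 255 0
```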
Reference
- pix2pix.torch
- pix2pix-pytorch (another PyTorch implementation of pix2pix)
- dcgan.pytorch
- PyTorch documentation
- ganhacks from soumith