
A PyTorch implementation of "Image-to-Image Translation with Conditional Adversarial Networks" (pix2pix).

Install

Datasets

Train with facades dataset (mode: B2A)

  • CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --mode B2A --exp ./facades --display 5 --evalIter 500
  • The resulting models are saved in the ./facades directory, named like net[D|G]_epoch_xx.pth (a loading sketch follows this list)
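
Checkpoints saved this way can later be reused for inference. Below is a minimal loading sketch; whether the .pth file holds the whole pickled module or only a state_dict depends on how the repo calls torch.save, and the generator class G and its import path are assumptions, not the repo's actual API:

    import torch

    # Load a generator checkpoint produced during training (path is an example).
    ckpt = torch.load('./facades/netG_epoch_10.pth', map_location='cpu')
    if isinstance(ckpt, torch.nn.Module):
        netG = ckpt                      # the whole module was pickled
    else:
        from models import G             # hypothetical module/class name
        netG = G()                       # constructor arguments are assumptions
        netG.load_state_dict(ckpt)       # a plain state_dict was saved
    netG.eval()

    # Translate one 3x256x256 input image scaled to [-1, 1], as in the paper.
    x = torch.randn(1, 3, 256, 256)      # stand-in for a real preprocessed image
    with torch.no_grad():
        fake = netG(x)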

Train with edges2shoes dataset (mode: A2B)

  • CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/edges2shoes/train --valDataroot /path/to/edges2shoes/val --mode A2B --exp ./edges2shoes --batchSize 4 --display 5
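
Both commands pass the same set of flags to main_pix2pixgan.py. The argparse sketch below only reconstructs the flag names visible above; the defaults and help strings are assumptions, not the repo's actual values:

    import argparse

    # Hypothetical reconstruction of the CLI used above; flag names are taken
    # from the commands, every default value here is an assumption.
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', default='pix2pix')
    parser.add_argument('--dataroot', required=True, help='path to training images')
    parser.add_argument('--valDataroot', required=True, help='path to validation images')
    parser.add_argument('--mode', default='B2A', choices=['A2B', 'B2A'])
    parser.add_argument('--exp', default='./facades', help='directory for checkpoints')
    parser.add_argument('--batchSize', type=int, default=1)
    parser.add_argument('--display', type=int, default=5, help='logging interval (iterations)')
    parser.add_argument('--evalIter', type=int, default=500, help='validation interval (iterations)')
    opt = parser.parse_args()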

Results

  • Randomly selected input samples
  • Corresponding real target samples
  • Corresponding generated samples

Note

  • We modified pytorch.vision.folder and transform.py so as to follow the format of the training images in these datasets (a standalone sketch of that pairing logic follows this list).
  • Most of the hyperparameters are the same as in the paper.
  • You can easily reproduce the results of the paper on other datasets.
  • Try B2A or A2B translation as needed.
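
For reference, the pix2pix training images store both domains side by side in a single file (A on the left, B on the right), which is why the folder/transform code had to be modified. The standalone sketch below mimics that pairing logic under those assumptions; the function name and sizes are illustrative, not the repo's code:

    from PIL import Image
    import torchvision.transforms as transforms

    def load_pair(path, mode='B2A', load_size=256):
        """Split one combined A|B image and order it as (input, target)."""
        img = Image.open(path).convert('RGB')
        w, h = img.size
        a = img.crop((0, 0, w // 2, h))    # left half:  domain A
        b = img.crop((w // 2, 0, w, h))    # right half: domain B
        to_tensor = transforms.Compose([
            transforms.Resize((load_size, load_size)),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # to [-1, 1]
        ])
        a, b = to_tensor(a), to_tensor(b)
        # --mode selects the translation direction: B2A feeds B and targets A.
        return (b, a) if mode == 'B2A' else (a, b)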

Reference

  • P. Isola, J.-Y. Zhu, T. Zhou, A. A. Efros. "Image-to-Image Translation with Conditional Adversarial Networks". CVPR 2017. arXiv:1611.07004