GAN-PyTorch
A complete PyTorch implementation of Generative Adversarial Networks (GAN)
Overview
This repository contains an op-for-op PyTorch reimplementation of Generative Adversarial Networks.
Table of contents
- GAN-PyTorch
  - Overview
  - Table of contents
  - Download weights
  - Test
  - Train
  - Contributing
  - Credit
    - Generative Adversarial Networks
      - Overview
Download weights
- Google Drive
- Baidu Drive (access code: llot)
Test
Modify the contents of `config.py` as follows.

- line 35: change `mode="train"` to `mode="valid"`;
- line 79: change `model_path=f"results/{exp_name}/g-last.pth"` to `model_path=f"<YOUR-WEIGHTS-PATH>.pth"`;
- Run `python validate.py`.
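As a sketch, the edited lines of `config.py` would look like the fragment below (line numbers follow the README; the actual file layout may differ, and the weights path is a placeholder you must fill in):

```python
# config.py -- relevant lines for validation (sketch, not the full file)

# line 35: switch from training to validation mode
mode = "valid"  # was: mode = "train"

# line 79: point at the generator weights you want to evaluate
model_path = "<YOUR-WEIGHTS-PATH>.pth"  # was: f"results/{exp_name}/g-last.pth"
```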

Train
Modify the contents of `config.py` as follows.

- line 35: change `mode="valid"` to `mode="train"`;
- Run `python train.py`.
If you want to resume from weights that you have trained before, modify the contents of `config.py` as follows.

- line 35: change `mode="valid"` to `mode="train"`;
- line 51: change `start_epoch=0` to `start_epoch=XXX`;
- line 52: change `resume=False` to `resume=True`;
- line 53: change `resume_d_weight=""` to `resume_d_weight="<YOUR-RESUME-D-WEIGHTS-PATH>"`;
- line 54: change `resume_g_weight=""` to `resume_g_weight="<YOUR-RESUME-G-WEIGHTS-PATH>"`;
- Run `python train.py`.
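Put together, the resume settings in `config.py` might look like this sketch (line numbers per the README; `30` is only an example epoch, and both weight paths are placeholders):

```python
# config.py -- relevant lines for resuming training (sketch, not the full file)
mode = "train"            # line 35, was: mode = "valid"
start_epoch = 30          # line 51, was: start_epoch = 0 (30 is an example value)
resume = True             # line 52, was: resume = False
resume_d_weight = "<YOUR-RESUME-D-WEIGHTS-PATH>"  # line 53, was: ""
resume_g_weight = "<YOUR-RESUME-G-WEIGHTS-PATH>"  # line 54, was: ""
```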
Contributing
If you find a bug, create a GitHub issue, or even better, submit a pull request. Similarly, if you have questions, simply post them as GitHub issues.
I look forward to seeing what the community does with these models!
Credit
Generative Adversarial Networks
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
Abstract
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train
two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the
probability that a sample came from the training data rather than G. The training procedure for G is to maximize the
probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary
functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2
everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with
backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either
training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and
quantitative evaluation of the generated samples.
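The minimax game in the abstract can be illustrated numerically. The sketch below evaluates the GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] on toy discriminator outputs (plain numbers, no networks); at the unique solution where D(x) = 1/2 everywhere, V equals log(1/4):

```python
import math

def value(d_real, d_fake):
    """V(D, G): mean log D(x) over real samples plus mean log(1 - D(G(z)))
    over generated samples. Inputs are toy discriminator probabilities."""
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

# At the game's equilibrium the discriminator outputs 1/2 everywhere,
# so V = log(1/2) + log(1/2) = log(1/4) ~ -1.386
v_opt = value([0.5, 0.5], [0.5, 0.5])
```

A confident discriminator (e.g. `value([0.9, 0.9], [0.1, 0.1])`) yields a larger V, which is exactly what G is trained to push back down.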
[Paper] [Authors' Implementation]
@article{adversarial,
  title={Generative Adversarial Networks},
  author={Goodfellow, Ian J. and Pouget-Abadie, Jean and Mirza, Mehdi and Xu, Bing and Warde-Farley, David and Ozair, Sherjil and Courville, Aaron and Bengio, Yoshua},
  journal={Advances in Neural Information Processing Systems},
  year={2014}
}