vq-vae-2-pytorch

Implementation of Generating Diverse High-Fidelity Images with VQ-VAE-2 in PyTorch

47 vq-vae-2-pytorch issues (sorted by recently updated)

I ran the program referring to issue #4, but only the following noise image is generated. What can I do to get good results? ![output](https://user-images.githubusercontent.com/110453457/182323112-1afd18ab-5f75-4691-9929-a5a3adf67647.png)

The single-GPU training process works just fine for me, and the output samples are satisfactory. However, when I set `--n_gpu 2` or `--n_gpu 4`, the training process gets stuck...

In the PixelSNAIL paper, it is possible to generate a conditioned sample given some global condition _h_. I am just wondering whether it is possible to do that in...
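For context, global conditioning in an autoregressive prior is usually done by projecting the condition vector and broadcast-adding it to the network's spatial features. Whether this repository exposes such a hook is not confirmed here; the snippet below is a generic, hypothetical sketch of the idea, with illustrative names only:

```python
import torch
import torch.nn as nn

class GlobalConditioning(nn.Module):
    """Hypothetical sketch: inject a global condition vector h into a
    spatial feature map by projecting it and broadcast-adding it."""

    def __init__(self, h_dim, channels):
        super().__init__()
        self.proj = nn.Linear(h_dim, channels)

    def forward(self, features, h):
        # features: (B, C, H, W), h: (B, h_dim)
        bias = self.proj(h)[:, :, None, None]  # (B, C, 1, 1)
        return features + bias                 # broadcast over H and W
```

In practice such a projection would be applied inside each block of the prior so that every layer sees the global condition, but the core operation is the same broadcast addition shown above.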

Hi, in the paper, the total loss consists of 3 parts, as follows: ![image](https://user-images.githubusercontent.com/77531882/104930129-b38eee80-59df-11eb-80f9-d3838eee2b56.png) However, in the code this loss seems to be different (as follows): `loss = recon_loss +...
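For reference, the paper's objective has three terms: reconstruction, codebook, and commitment. Implementations that update the codebook with an exponential moving average apply the codebook term as an in-place EMA update inside the quantizer rather than through the loss, which is why the backpropagated objective can reduce to reconstruction plus a weighted latent (commitment) term. A minimal sketch, assuming standard VQ-VAE conventions (all names are illustrative):

```python
import torch.nn.functional as F

def vqvae_loss(recon, target, z_e, z_q, beta=0.25, use_ema_codebook=True):
    """Sketch of the three-term VQ-VAE loss from the paper.

    recon, target : decoder output and original image
    z_e           : encoder output (pre-quantization)
    z_q           : quantized vectors looked up from the codebook
    """
    recon_loss = F.mse_loss(recon, target)

    # Commitment loss: pulls the encoder output toward its assigned code.
    commit_loss = F.mse_loss(z_e, z_q.detach())

    if use_ema_codebook:
        # With EMA codebook updates, the codebook term is applied as a
        # moving-average update inside the quantizer, not via this loss,
        # so only two terms remain in the backpropagated objective.
        return recon_loss + beta * commit_loss

    # Otherwise include the explicit codebook term from the paper.
    codebook_loss = F.mse_loss(z_q, z_e.detach())
    return recon_loss + codebook_loss + beta * commit_loss
```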

I have tried running `python train_vqvae.py --path \home\lab\ffhq_dataset` in the terminal, but there is an error: `module 'torch.distributed' has no attribute 'launch'`. I read some other distributed training examples, and...
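A note on the error: `torch.distributed.launch` is a launcher module meant to be executed with `python -m torch.distributed.launch ...` (or `torchrun` in recent PyTorch releases), so accessing it as an attribute of `torch.distributed` typically raises this AttributeError unless the submodule is imported explicitly. As a generic illustration only, and not this repository's actual launcher, multi-GPU training can also be started from Python with `torch.multiprocessing.spawn`:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Hypothetical per-process entry point; real training code would build
    # its model, optimizer, and data loader here.
    dist.init_process_group("nccl", init_method="tcp://127.0.0.1:29500",
                            rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # ... run the training loop ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```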

Hi, first of all, thanks for the implementation. I have tried to train the PixelSNAIL bottom/top priors for 256 (ImageNet) and 512 (gaming) resolution images, but I found that both models are...

Hi! I'm failing to understand the function of PixelSNAIL. **Is it meant to generate a latent space, similar to a GAN?** I trained the VQ-VAE correctly (until the samples were good enough):...
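For clarity, PixelSNAIL here plays the role of a learned prior over the discrete latent codes: it does not generate images directly, but samples code maps that the trained VQ-VAE decoder then turns into images. A rough sketch of that two-stage sampling pipeline, with hypothetical function names and assumed shapes/signatures (not this repository's exact API):

```python
import torch

@torch.no_grad()
def autoregressive_sample(prior, shape, condition=None, device="cuda"):
    # Simplified raster-scan sampling loop. Assumes (hypothetically) that the
    # prior maps a partially filled code grid to logits of shape
    # (batch, n_codes, height, width); the real model's signature may differ.
    batch, height, width = shape
    codes = torch.zeros(batch, height, width, dtype=torch.long, device=device)
    for row in range(height):
        for col in range(width):
            if condition is not None:
                logits = prior(codes, condition=condition)
            else:
                logits = prior(codes)
            probs = torch.softmax(logits[:, :, row, col], dim=1)
            codes[:, row, col] = torch.multinomial(probs, 1).squeeze(1)
    return codes

@torch.no_grad()
def sample_images(vqvae, top_prior, bottom_prior, batch=4, device="cuda"):
    # 1. Sample the top-level code map from the top prior.
    top = autoregressive_sample(top_prior, (batch, 8, 8), device=device)
    # 2. Sample the bottom-level code map conditioned on the top codes.
    bottom = autoregressive_sample(bottom_prior, (batch, 16, 16),
                                   condition=top, device=device)
    # 3. Decode the sampled code maps into pixel space with the VQ-VAE decoder
    #    (decode_code is an assumed decoder entry point).
    return vqvae.decode_code(top, bottom)
```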

Hi, just now I successfully ran train_pixelsnail.py on the top level (size: 8×8) of my own dataset, which consists of 300,000 encoded outputs from images (size: 64×64) of different...

Hi, I ran the code as you suggested and completed all 420 epochs for the hier top and bottom priors, but the generated results are not good, as you can see below. Please...

Feature request for AMP support in VQ-VAE training. So far, I tried naively modifying the `train` function in `train_vqvae.py` like so:

```python
# ...
for i, (img, label) in enumerate(loader):
    ...
```
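A minimal sketch of what an AMP-enabled training step could look like with `torch.cuda.amp`, assuming the model returns a reconstruction and a latent loss as in typical VQ-VAE implementations (function names and the loss weighting are illustrative):

```python
import torch
import torch.nn.functional as F
from torch.cuda import amp

def train_step_amp(model, optimizer, scaler, img, latent_loss_weight=0.25):
    # Hypothetical sketch of one mixed-precision training step; assumes the
    # model returns (reconstruction, latent_loss), which is common in
    # VQ-VAE-style implementations but is not guaranteed here.
    optimizer.zero_grad()

    # Run the forward pass and the loss computation under autocast.
    with amp.autocast():
        out, latent_loss = model(img)
        recon_loss = F.mse_loss(out, img)
        loss = recon_loss + latent_loss_weight * latent_loss.mean()

    # Scale the loss to avoid fp16 gradient underflow, then step and update.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# Usage sketch: create the GradScaler once, before the epoch loop.
# scaler = amp.GradScaler()
# for img, label in loader:
#     loss = train_step_amp(model, optimizer, scaler, img.to("cuda"))
```

One caveat sometimes reported when mixing AMP with vector quantization is that the codebook distance computation can be numerically sensitive in fp16 and may need to be kept in fp32.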