taming-transformers

Taming Transformers for High-Resolution Image Synthesis

Results: 170 taming-transformers issues, sorted by recently updated

scripts/taming-transformers.ipynb has some syntax errors that break it at runtime, so I made a few fixes; you can see them here: https://colab.research.google.com/drive/1m11MAdnuFv-9BW2RPKYRgtzaqOOI01tp?usp=sharing

Following @rom1504's instructions for a custom dataset. I have no classes and have trained a model that is producing reconstructions. When attempting to sample by running sample_fast.py, I am...

This pull request makes it possible to do:

```
pip install git+https://github.com/CompVis/taming-transformers.git
```

I have done this by:
1. Adding `__init__.py` to the module directories
2. Moving a couple of...
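For readers who want to reproduce this kind of packaging change, a minimal setup.py sketch is below. It assumes a setuptools layout and that the importable package is the repo's `taming` directory; the version string and dependency list are placeholders, not taken from the PR.

```python
# Minimal packaging sketch (assumptions noted above, not the PR's exact diff).
from setuptools import setup, find_packages

setup(
    name="taming-transformers",
    version="0.0.1",  # placeholder version
    # find_packages() only discovers directories that contain __init__.py,
    # which is why step 1 of the PR adds those files.
    packages=find_packages(include=["taming", "taming.*"]),
    install_requires=[],  # runtime dependencies omitted in this sketch
)
```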

I was trying to calculate the PSNR, SSIM, FID, and LPIPS metrics for the DALL-E model using the 256x256 ImageNet validation set, but I am getting very different numbers. Can you please...
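As a point of comparison, PSNR and SSIM on paired reconstructions can be computed with scikit-image as sketched below (assuming uint8 HxWx3 arrays and scikit-image >= 0.19 for the channel_axis argument); FID and LPIPS need dedicated packages such as pytorch-fid and lpips and are not shown here.

```python
# Sketch: per-image PSNR/SSIM between a reference image and its reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(reference: np.ndarray, reconstruction: np.ndarray):
    """Both inputs are uint8 HxWx3 arrays of the same shape."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=255)
    ssim = structural_similarity(reference, reconstruction,
                                 data_range=255, channel_axis=-1)
    return psnr, ssim
```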

The version of academictorrents is not mentioned in environment.yaml, but academictorrents is needed by imagenet.py. For now, I am stuck on compatibility issues between academictorrents, future, and pytorch-lightning. So, I...

Hi, after downloading "2021-04-23T18-11-19_celebahq_transformer", the code refers to a module named `cutlit` that cannot be found. What's wrong here?
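One way to confirm what is being imported is to print the `target` entries from the downloaded checkpoint's config, since those strings name the classes that get instantiated at load time; the config path below is hypothetical, and `yaml` here is PyYAML.

```python
# Diagnostic sketch: show which module the checkpoint config points at.
import yaml

config_path = "2021-04-23T18-11-19_celebahq_transformer/configs/project.yaml"  # hypothetical path
with open(config_path) as f:
    config = yaml.safe_load(f)

# If this prints something starting with "cutlit.", the config references
# code that is not shipped with this repository.
print(config["model"]["target"])
```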

Most likely a CUDA version compatibility issue. What versions of CUDA has VQ-GAN training been tested on?
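A quick way to report the relevant versions when chasing this kind of mismatch is sketched below; it is a generic diagnostic, not an officially tested configuration matrix.

```python
# Print the CUDA/PyTorch versions visible in the current environment.
import torch

print("PyTorch:", torch.__version__)
print("CUDA (built against):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```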

Hi, in Fig. 4 of your paper, you give an input image as the condition and the model generates diverse results. I'm curious how it can generate diverse results, since your...
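The diversity comes from the sampling step: each token of the latent code is drawn from the transformer's (optionally top-k truncated) softmax rather than taken greedily, so repeated runs with the same conditioning produce different sequences. A minimal sketch of that step is below; the function name and arguments are illustrative, not the repo's actual API.

```python
# Sketch: top-k multinomial sampling of the next token. The stochastic
# multinomial draw is what makes repeated runs with the same conditioning
# produce different completions.
import torch

def sample_next_token(logits: torch.Tensor, top_k: int = 100,
                      temperature: float = 1.0) -> torch.Tensor:
    """logits: (batch, vocab_size) scores for the next token."""
    logits = logits / temperature
    # Keep only the top-k logits; everything else gets probability ~0.
    values, _ = torch.topk(logits, top_k)
    logits[logits < values[..., -1, None]] = -float("inf")
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```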

Hi guys, I'm trying to fine-tune a VQGAN pretrained on ImageNet with my own dataset. However, I was quite confused by this point: "Creating 2 text files pointing at your folder"...
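For reference, that step just means writing two plain-text files with one image path per line, which the training and test dataset classes then read; a hedged sketch is below (the file names, folder, and 90/10 split are illustrative, not prescribed).

```python
# Sketch: build the "two text files" for a custom dataset, one image path
# per line, split into a training list and a test list.
import random
from pathlib import Path

image_paths = sorted(str(p) for p in Path("my_images").rglob("*.png"))  # illustrative folder
random.seed(0)
random.shuffle(image_paths)

split = int(0.9 * len(image_paths))  # illustrative 90/10 split
Path("train.txt").write_text("\n".join(image_paths[:split]) + "\n")
Path("test.txt").write_text("\n".join(image_paths[split:]) + "\n")
# The training config (e.g. configs/custom_vqgan.yaml) is then pointed at
# these two files via its image-list parameters.
```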

I have 512x512 pixel images I would like to do image2image translation on.