taming-transformers
Taming Transformers for High-Resolution Image Synthesis
I have just started training on a 200k-image 256x256 custom dataset and I'm surprised to see it requires 31 hours to train 1 epoch on three fast 16GB GPUs. So my...
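As a rough sanity check, the throughput implied by those numbers can be worked out directly (a sketch, assuming the full 200k images are seen once per epoch and the three GPUs run data-parallel):

```python
# Throughput implied by the reported numbers (assumptions: 200k images per
# epoch, 31 hours wall-clock, 3 GPUs in data-parallel training).
images_per_epoch = 200_000
hours_per_epoch = 31
num_gpus = 3

total_throughput = images_per_epoch / (hours_per_epoch * 3600)  # ~1.8 images/s overall
per_gpu_throughput = total_throughput / num_gpus                 # ~0.6 images/s per GPU
print(f"{total_throughput:.2f} img/s total, {per_gpu_throughput:.2f} img/s per GPU")
```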
Hi, thanks for your amazing work. I saw several VQGAN training methods in your code, including exponential moving average and Gumbel sampling. Could you give some hints about the performance...
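For reference, the EMA variant keeps running averages of codebook usage instead of updating the codebook by gradient descent. A minimal sketch of that update step (illustrative names and signature, not the repo's actual implementation):

```python
import torch
import torch.nn.functional as F

def ema_codebook_update(z_flat, codebook, cluster_size, embed_avg, decay=0.99, eps=1e-5):
    """One EMA update for a (K, D) codebook given (N, D) encoder outputs."""
    # Hard-assign each encoder output to its nearest codebook entry.
    indices = torch.cdist(z_flat, codebook).argmin(dim=1)             # (N,)
    onehot = F.one_hot(indices, codebook.size(0)).type(z_flat.dtype)  # (N, K)

    # Running averages of code usage and of the vectors assigned to each code.
    cluster_size.mul_(decay).add_(onehot.sum(0), alpha=1 - decay)
    embed_avg.mul_(decay).add_(onehot.t() @ z_flat, alpha=1 - decay)

    # Laplace smoothing keeps rarely used codes from collapsing to zero.
    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + codebook.size(0) * eps) * n
    codebook.copy_(embed_avg / smoothed.unsqueeze(1))
    return indices
```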
What are the conditions under which the model can produce?
This pull request makes it possible to do: ``` pip install git+https://github.com/CompVis/taming-transformers.git ``` I have done this by: 1. Using `find_namespace_packages()` instead of `find_packages()` so setuptools can find packages without...
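For reference, the packaging change being described would look roughly like this in `setup.py` (a sketch, not the PR's exact diff): `find_namespace_packages()` also discovers packages without `__init__.py` files, which plain `find_packages()` skips.

```python
# Sketch of a pip-installable setup.py using namespace package discovery.
from setuptools import setup, find_namespace_packages

setup(
    name="taming-transformers",
    version="0.0.1",
    packages=find_namespace_packages(include=["taming", "taming.*"]),
)
```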
Hi, thanks for the great work and for sharing it with the community! I have downloaded the provided samples from https://app.koofr.net/links/520f5f92-1633-4af0-b545-0bf410314b54 (`ffhq_k300_p1.0_fid9.6.zip`) and ran an FID calculation script (based on torchmetrics)...
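A torchmetrics-based FID computation presumably looks something like the sketch below (loading of the downloaded samples and the reference set is omitted; shapes and dtypes follow the torchmetrics default of uint8 images in [0, 255]):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Dummy batches standing in for the real FFHQ images and the downloaded samples;
# with normalize=False (the default), inputs must be uint8 (N, 3, H, W) tensors.
real_batch = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
fake_batch = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)

fid.update(real_batch, real=True)
fid.update(fake_batch, real=False)
print(fid.compute())
```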
Would it be possible for you to add the .yaml config files used for the edge-to-image translation example on ImageNet shown in the paper? Especially the one for the...
It seems a combination of a popular Colab notebook and a popular YouTuber mentioning said Colab notebook has resulted in some pretty slow download times for the 16k VQGAN. It...
Hi, I have a couple of questions. The first is about what custom dataset size is reasonable for getting good-quality images. I am planning to use a data...
When I run the following command from the documentation in Anaconda PowerShell, I get an error because of the trailing comma. The documentation says you can do this...
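The likely cause is that PowerShell parses a bare trailing comma as array syntax, so the `--gpus` argument needs quoting. A hedged example (the command and config path below are illustrative; use the exact one from the README):

```
# Quote the trailing comma so PowerShell passes it through literally:
python main.py --base configs/custom_vqgan.yaml -t True --gpus "0,"
```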
Hi there. I got the warning "You are using `LearningRateMonitor` callback with models that have no learning rate schedulers. Please see documentation for `configure_optimizers` method." when training...
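That warning typically just means `configure_optimizers` returned optimizers but no learning-rate schedulers, so `LearningRateMonitor` has nothing to log; it is harmless if a constant learning rate is intended. A minimal sketch of what adding a scheduler would look like in PyTorch Lightning (illustrative only, not the repo's code):

```python
import torch

def configure_optimizers(self):
    # Hypothetical example: attach a StepLR scheduler so LearningRateMonitor has
    # something to track; self.learning_rate is assumed to be set on the module.
    opt = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)
    return [opt], [sched]
```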