taming-transformers
Same token for conditioning as well as images
Hello,
Thanks for the awesome work! I have a small question.
I noticed that in the second stage (GPT), in all the conditional generation tasks (from class labels, segmentation maps, etc.), the same token vocabulary is used for the conditioning as well as for the images. Is my understanding correct? If it is, did you try using separate vocabularies for the conditioning input and the image to be synthesized?
There is some remapping code in quantize.py, but I don't see it used anywhere. Was it written for this purpose? If I were to use it, could you kindly provide a sample remap file?
Hi @kampta, do you know what the remapping means now?
Not sure, but it looks like the goal of remapping is to reassign vector indices to a subset of values (instead of the default [0, n_e-1]). This might be done to handle the case where, during sampling, the transformer predicts a vector index that was never produced during training.
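To make the idea concrete, here is a minimal sketch of that kind of remapping (hypothetical names, not the actual quantize.py code): a "remap file" would just be an array of the codebook indices actually used, full-codebook indices get mapped to positions within that subset, and anything outside the subset falls into an extra "unknown" slot.

```python
import numpy as np

# Stand-in for indices loaded from a remap .npy file (hypothetical values).
used = np.array([3, 7, 42])
unknown_index = len(used)  # extra slot for indices never seen in the subset

def remap_to_used(indices):
    """Map full-codebook indices into [0, len(used)]; last slot = unknown."""
    match = indices[:, None] == used[None, :]  # compare each index to the subset
    new = match.argmax(axis=1)                 # position within `used` if found
    new[~match.any(axis=1)] = unknown_index    # anything else -> unknown slot
    return new

def unmap_to_all(indices):
    """Inverse mapping; unknown entries fall back to the first used index."""
    indices = indices.copy()
    indices[indices >= len(used)] = 0          # clamp unknowns (one possible choice)
    return used[indices]

print(remap_to_used(np.array([3, 42, 5])))  # -> [0 2 3]  (5 is unknown)
print(unmap_to_all(np.array([0, 2, 3])))    # -> [ 3 42  3]
```

With this scheme the transformer only ever operates on the compact index range, so a sampled index can always be mapped back to a valid codebook entry.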
same question