mage
A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis
Hi, could you release the detailed code and configs for class-conditional generation (e.g., the fine-tuning epochs, learning rate, etc.)? Thanks a lot!
As the title says, I cannot find the contrastive learning loss in your codebase.
I just finished reading your new paper **Autoregressive Image Generation without Vector Quantization**, which is very INSPIRING!! Would you consider releasing the code?
When I use --model vit_base_patch16 to train with mage-vitb-ft.pth as the checkpoint, an issue occurs: Traceback (most recent call last): File "main_finetune.py", line 355, in main(args) File "main_finetune.py", line 250, in main...
Dear professor, thank you for your contribution! I trained VQGAN on my own dataset and found some differences between your VQGAN's network structure and the original...
Dear professor, I now have a very strange problem. When I ran fine-tuning, something unexpected occurred: the accuracy remained at 78.6 and would not change...
Hi :) I wanted to ask if you are still planning to release the training-script for MAGE-C. Best, Niklas