mage
A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis
Hi, could you release the detailed code and configs w.r.t. class-conditional generation (e.g., the finetuning epochs, learning rate, etc.)? Thanks a lot!
As the title says, I cannot find the contrastive learning loss in your codebase.
Just finished reading your new paper **Autoregressive Image Generation without Vector Quantization**, which is very INSPIRING!! Would you consider releasing the code?
When I train with `--model vit_base_patch16` and use `mage-vitb-ft.pth` as the checkpoint, an issue comes up: Traceback (most recent call last): File "main_finetune.py", line 355, in main(args) File "main_finetune.py", line 250, in main...