fmdmm
> Hi, @shengyi4, I want to ask what the situation is like after fine-tuning. My server setup is similar to yours, and the default `freeze_vit: False` is very confusing,...
> Finetuning all ViT layers costs significantly more GPU memory.
>
> You may want to try to max out the GPU memory by finetuning only a fraction of the layers. Thank you...
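The suggestion above (finetune only a fraction of the ViT layers) can be sketched in PyTorch. This is a minimal illustration, not LAVIS code: `freeze_all_but_last` and the toy block list are hypothetical stand-ins for the actual ViT encoder.

```python
import torch.nn as nn

def freeze_all_but_last(blocks: nn.ModuleList, num_trainable: int) -> None:
    """Disable gradients for every block except the last `num_trainable` ones."""
    cutoff = len(blocks) - num_trainable
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad = i >= cutoff

# Toy stand-in for a ViT encoder with 12 transformer blocks.
vit_blocks = nn.ModuleList([nn.Linear(8, 8) for _ in range(12)])
freeze_all_but_last(vit_blocks, num_trainable=2)

trainable = sum(p.numel() for p in vit_blocks.parameters() if p.requires_grad)
total = sum(p.numel() for p in vit_blocks.parameters())
print(trainable, total)
```

Frozen parameters contribute no gradient buffers or optimizer state, which is where most of the memory saving comes from; passing only `p for p in model.parameters() if p.requires_grad` to the optimizer avoids allocating state for the frozen blocks.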
> Hi, thanks for your excellent work! When I finetuned the pretrained model weights on the VQA-v2 dataset, I found an issue. Your paper says the extracted image features...
@kondvit I have tried training BLIP-2 on a single 3090 GPU with 24 GB of memory, running pretrain_stage2.sh; 24 GB is not enough even for opt-2.7b. And when I set batch = 32, it can...
A similar question: https://github.com/salesforce/LAVIS/issues/125