XenonLamb
Is there a way to enable ZeRO-3 offload for LLaMA-VID? I'm trying to integrate an LLM with higher GPU RAM usage into LLaMA-VID, which means I can't run it without offloading...
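For reference, a DeepSpeed ZeRO-3 config with CPU offload of both optimizer states and parameters typically looks like the sketch below. This is an illustrative assumption based on DeepSpeed's standard config schema, not a config shipped with the LLaMA-VID repo; the `"auto"` values rely on the HuggingFace Trainer integration filling them in.

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "bf16": { "enabled": "auto" }
}
```

Passing a file like this via `--deepspeed` to the training script is the usual way to switch from ZeRO-2 to ZeRO-3 with offload, assuming the script forwards the flag to the HuggingFace Trainer.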
Hi! I'm trying to reproduce the video evaluation results for llama-vid-7b-full-224-video-fps-1, but after running the provided scripts with the official checkpoint and MSVD-QA, not all of the files are predicted:...
**Describe the bug** I tried to run the LLaMA-VID model (https://github.com/dvlab-research/LLaMA-VID/tree/main) under ZeRO-3, and during model initialization, when the model's text encoder was created, `BertLMHeadModel.from_pretrained("bert-base-uncased")` caused the training script...
I ran test_fc_aide_ft.py without any modification, but the following error occurs: `` tracking mask tracking mask tracking mask tracking mask tracking mask tracking mask tracking mask tracking mask tracking...