Video-LLaMA
The number of updated parameters during stage 1 and stage 2?
Great project!
I would like to ask three questions:
1. Does your public checkpoint include the parameters of the 2-layer Q-Former and the linear projection layer?
2. Since freeze_qformer is set to True in your stage 1 and stage 2 yaml files, does that mean you froze the Q-Former's parameters and fine-tuned only llama_proj? However, your model diagram shows the Q-Former's parameters being fine-tuned.
3. Is the number of fine-tuned parameters the same in the pre-training stage (stage 1) and the fine-tuning stage (stage 2)?
Thank you very much~
- Yes, the checkpoint includes both.
- We froze the image Q-Former, which has 12 layers; the video Q-Former is not frozen (see the sketch below).
- Yes, the same set of parameters is updated in both stages.
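
For anyone else wondering how this split looks in practice, here is a minimal sketch, not the repository's actual code: the module names, layer counts, and dimensions are illustrative assumptions. It only demonstrates the idea stated above, that the 12-layer image Q-Former is frozen while the video Q-Former and the llama_proj linear layer stay trainable in both stages.

```python
import torch.nn as nn


class VideoLLaMAAdapterSketch(nn.Module):
    """Stand-in modules only; names and sizes are illustrative, not the real Video-LLaMA code."""

    def __init__(self, hidden=768, llama_dim=4096):
        super().__init__()
        # Image Q-Former: 12 layers, kept frozen during both stages.
        self.image_qformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True),
            num_layers=12,
        )
        # Video Q-Former: trainable in both stages.
        self.video_qformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Linear projection into the LLaMA embedding space: trainable.
        self.llama_proj = nn.Linear(hidden, llama_dim)

        # Freeze only the image Q-Former.
        for p in self.image_qformer.parameters():
            p.requires_grad = False


model = VideoLLaMAAdapterSketch()
trainable = sorted({n.split(".")[0] for n, p in model.named_parameters() if p.requires_grad})
print(trainable)  # ['llama_proj', 'video_qformer'] -- the same set in stage 1 and stage 2
```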