
About the text features used in Grounding task.

Open liangliangdeveloper opened this issue 1 year ago • 5 comments

Dear team,

Thanks for your great work.

I would like to know how to get the text feature for the grounding task.

I see that you use the LLaMA backbone with chinese_alpaca_lora_7b; however, I notice a mismatch in the token dimension.

The tokenized original sentence always has 5 fewer tokens than the token dimension of the text feature you extracted, and this difference is consistent across sentences.

I want to know: apart from the global token, are any new tokens added to the sentences?
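Concretely, the comparison I am making looks roughly like the sketch below (the tokenizer repo name and the feature file path are placeholders, not the exact ones from your pipeline):

```python
import torch
from transformers import LlamaTokenizer

# Placeholder names: the extended chinese_alpaca_lora_7b tokenizer and one of
# the released text-feature files (path is hypothetical).
tokenizer = LlamaTokenizer.from_pretrained("hfl/chinese-alpaca-lora-7b")
text_feature = torch.load("sample_text_feature.pt")  # shape: (num_tokens, dim)

sentence = "a person opens the door and walks into the room"
token_ids = tokenizer(sentence)["input_ids"]

print("tokens in raw sentence  :", len(token_ids))
print("tokens in saved feature :", text_feature.shape[0])
print("difference              :", text_feature.shape[0] - len(token_ids))
```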

Thank you!

liangliangdeveloper avatar Nov 25 '24 08:11 liangliangdeveloper

Also, I am wondering whether the text features you provide from LLaMA come from the original LLaMA or from a LLaMA model fine-tuned on your data.

liangliangdeveloper avatar Nov 25 '24 09:11 liangliangdeveloper

Hi, could you provide a more detailed example for the first question? If you are referring to the Temporal Grounding task, the features we extract are not all less than 5 tokens in length; the token length is positively correlated with the number of words in the sentence. In addition, for the grounding task we did not fine-tune the LLaMA model.

yinanhe avatar Nov 25 '24 09:11 yinanhe

Thank you for your reply!


For the first question, I see that you add the prefix "summarize:" to all prompts, which is why the token numbers differ. Why do you use this design?
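For example, a quick check along these lines shows that the prefix alone accounts for a constant offset across sentences (the tokenizer name is again a placeholder):

```python
from transformers import LlamaTokenizer

# Placeholder tokenizer name for illustration.
tokenizer = LlamaTokenizer.from_pretrained("hfl/chinese-alpaca-lora-7b")

for sentence in [
    "a person opens the door",
    "a person opens the door and walks into the room",
]:
    plain = tokenizer(sentence)["input_ids"]
    prefixed = tokenizer("summarize: " + sentence)["input_ids"]
    print(len(plain), len(prefixed), len(prefixed) - len(plain))
# The difference stays the same regardless of the sentence, which matches
# the constant gap I observed above.
```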

For the other question, I am wondering whether LLaMA is fine-tuned during CLIP training. Are all of the LLaMA parameters frozen, or are the LoRA layers trainable in the CLIP training stage?

liangliangdeveloper avatar Nov 25 '24 09:11 liangliangdeveloper

The first question needs to be answered by @Andy1621. LoRA fine-tuning is used during the CLIP training stage.
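Concretely, a minimal sketch of what this setup typically looks like in PyTorch (the "lora" naming convention below is an assumption for illustration, not the exact InternVideo2 code):

```python
import torch

def freeze_base_keep_lora(text_encoder: torch.nn.Module) -> None:
    """Freeze the LLaMA backbone; leave only LoRA parameters trainable."""
    for name, param in text_encoder.named_parameters():
        # Assumes LoRA weights carry "lora" in their parameter names,
        # which is the usual convention in LoRA implementations.
        param.requires_grad = "lora" in name.lower()

# Only the trainable (LoRA) parameters are then passed to the optimizer:
# optimizer = torch.optim.AdamW(
#     (p for p in text_encoder.parameters() if p.requires_grad), lr=1e-4
# )
```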

yinanhe avatar Nov 26 '24 03:11 yinanhe

> The first question needs to be answered by @Andy1621. LoRA fine-tuning is used during the CLIP training stage.

While loading the state_dict for InternVideo2_CLIP, I also encounter the same problem, as follows: size mismatch for text_encoder.transformer.embed_tokens.weight: copying a param with shape torch.Size([49954, 4096]) from checkpoint, the shape in current model is torch.Size([32000, 4096]).

Does the InternVideo2_CLIP checkpoint mismatch the code?
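For reference, 49954 appears to match the extended chinese_alpaca_lora_7b vocabulary mentioned earlier in this thread, while 32000 is the original LLaMA vocabulary, so the embedding table seems to be built from the wrong tokenizer. A minimal sketch of the check/workaround I have in mind (repo names are placeholders, not the exact InternVideo2 config):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder repo names for illustration.
tokenizer = LlamaTokenizer.from_pretrained("hfl/chinese-alpaca-lora-7b")
llama = LlamaForCausalLM.from_pretrained("huggyllama/llama-7b")

print(len(tokenizer))                              # expected: 49954 (extended vocab)
print(llama.get_input_embeddings().weight.shape)   # torch.Size([32000, 4096])

# Resizing the embedding table to the checkpoint's vocabulary should remove
# the size-mismatch error when loading the state_dict.
llama.resize_token_embeddings(len(tokenizer))
```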

WangChen100 avatar May 03 '25 04:05 WangChen100