
Clarification questions about the framework

Open felmoreno1726 opened this issue 1 year ago • 4 comments

I'm trying to understand this in the context of other works in the ecosystem. For example, I'm interested in video. For the video encoder there are the LoRA-tuned and the fully fine-tuned variants; can I use the embeddings from these models with an already-trained LLM or other model? Can I use these embeddings with Video-LLaVA? Can I use the LanguageBind encoder as a replacement for the Video-LLaVA video encoder (video tower)?

Also, the demos shown in Gradio only show modality comparisons. I'm also trying to understand how to do zero-shot classification. Thank you -- someone who is confused but excited and thankful for the work done.

felmoreno1726 avatar Apr 19 '24 18:04 felmoreno1726

You likely cannot just use the embeddings with an arbitrarily trained LLM. The idea of LanguageBind is to create a custom set of modality embeddings that is aligned to a specific set of text embeddings (from the CLIP text encoder, I think). I don't really understand what you want to do with Video-LLaVA embeddings here, but there is another issue in this repo with a similar question that you can find. Zero-shot classification refers to validating the performance of LanguageBind on a new dataset that was not used in any way for training the model, so no examples from that dataset have been seen by the model before.
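In practice, zero-shot classification works like CLIP: you embed the video and one text prompt per candidate label, then pick the label whose prompt embedding is most similar to the video embedding. A minimal sketch following the usage pattern in the repo's README (the checkpoint IDs, file path, and labels below are illustrative, and the exact API may differ between versions):

```python
import torch
from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer

device = torch.device('cuda:0')
# 'LanguageBind_Video_FT' is the fully fine-tuned video encoder;
# 'LanguageBind_Video' is the LoRA-tuned variant.
clip_type = {'video': 'LanguageBind_Video_FT'}

model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir').to(device)
model.eval()

tokenizer = LanguageBindImageTokenizer.from_pretrained(
    'lb203/LanguageBind_Image', cache_dir='./cache_dir/tokenizer_cache_dir')
modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type}

# Class names become text prompts; the highest-similarity prompt is the prediction.
class_names = ['playing guitar', 'riding a bike', 'cooking']  # illustrative labels
prompts = [f'a video of {c}' for c in class_names]
videos = ['assets/video/0.mp4']  # illustrative path

inputs = {'video': to_device(modality_transform['video'](videos), device)}
inputs['language'] = to_device(
    tokenizer(prompts, max_length=77, padding='max_length',
              truncation=True, return_tensors='pt'), device)

with torch.no_grad():
    emb = model(inputs)

# Similarity between the video embedding and every prompt embedding.
probs = torch.softmax(emb['video'] @ emb['language'].T, dim=-1)
print(class_names[probs.argmax(dim=-1).item()], probs.cpu().numpy())
```

For a zero-shot benchmark you would run this over every sample of a dataset the model never trained on and report accuracy against the ground-truth labels.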

lennartmoritz avatar Apr 22 '24 09:04 lennartmoritz

@lennartmoritz Hi, I was wondering: during the pre-training process, do the authors only use video-language or audio-language pairs, or do they train with audio, video, depth, and infrared all paired with language? Thank you

XuecWu avatar May 14 '24 04:05 XuecWu

They use x-language training pairs, where x denotes any of the supported modalities. So video-language, audio-language, depth-language, etc. are all used during training, with language as the shared anchor that binds the modalities together.
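Conceptually, each x-language batch is trained with a CLIP-style contrastive (InfoNCE) objective against the language encoder. A rough sketch of that objective, not the authors' actual training code (the `encoders` dict and the data loop in the comment are hypothetical):

```python
import torch
import torch.nn.functional as F

def info_nce(x_emb: torch.Tensor, text_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of (modality, language) pairs.

    x_emb, text_emb: (batch, dim) L2-normalized embeddings, where row i of
    each tensor comes from the same training pair.
    """
    logits = x_emb @ text_emb.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    # Matched pairs sit on the diagonal; every other entry is a negative.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Each batch pairs one modality x with language, so all modalities end up
# aligned to the same text embedding space:
# for modality, (x_batch, text_batch) in dataloader:  # 'video', 'audio', 'depth', ...
#     x_emb = F.normalize(encoders[modality](x_batch), dim=-1)
#     t_emb = F.normalize(text_encoder(text_batch), dim=-1)
#     loss = info_nce(x_emb, t_emb)
```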

lennartmoritz avatar May 14 '24 14:05 lennartmoritz

> They use x-language training pairs, where x denotes any of the supported modalities. So video-language, audio-language, depth-language, etc. are all used during training.

Got it. Thank you for your reply.

XuecWu avatar May 15 '24 01:05 XuecWu