Does the model work in scenarios with missing modalities?
Thank you for your wonderful project.
Can this language-binding model handle scenarios where a modality is missing? Specifically, is it possible to perform inference without the audio modality, and if so, would it significantly impact performance?
Lastly, is there a model checkpoint that was trained without a specific modality? I’m planning to use this model architecture for my project, but I don’t intend to use the audio modality.
Thank you so much.
Sincerely
Hello, LanguageBind has separate models for each modality, i.e., it's not a single block that processes everything at once. You can load only the modalities you need by setting the `clip_type` variable from the README inference example.
Each modality's embeddings are computed individually, and then you combine them, for example:
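Here is a minimal sketch adapted from the README inference example, loading only video and image (no audio). The exact checkpoint names, the tokenizer repo id, and the asset paths are assumptions that may differ in the version you install:

```python
import torch
from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer

device = torch.device('cuda:0')

# Load only the modalities you need -- audio is simply left out.
# Checkpoint names follow the README but may differ in your version.
clip_type = {
    'video': 'LanguageBind_Video_FT',
    'image': 'LanguageBind_Image',
}

model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir').to(device)
model.eval()

# Tokenizer repo id is an assumption; check the README for the current one.
tokenizer = LanguageBindImageTokenizer.from_pretrained(
    'LanguageBind/LanguageBind_Image', cache_dir='./cache_dir/tokenizer_cache_dir')
modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type}

video = ['assets/video/0.mp4']
image = ['assets/image/0.jpg']
language = ['A lion climbing a tree to catch a monkey.']

inputs = {
    'video': to_device(modality_transform['video'](video), device),
    'image': to_device(modality_transform['image'](image), device),
    'language': to_device(tokenizer(language, max_length=77, padding='max_length',
                                    truncation=True, return_tensors='pt'), device),
}

with torch.no_grad():
    embeddings = model(inputs)  # dict of embeddings, one per loaded modality

# Video-text similarity, computed only from the modalities that were loaded.
print(torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1))
```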
Here is a quote from the related ImageBind paper; I guess the same approach works here (see also https://github.com/PKU-YuanGroup/LanguageBind/issues/38):
> **Embedding arithmetic.** For arithmetic, we again use the embedding features after temperature scaling. We ℓ2-normalize the features and sum the embeddings after scaling them by 0.5.
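In code, that arithmetic could look like the sketch below; `combine_embeddings` is an illustrative helper, not part of the LanguageBind API:

```python
import torch
import torch.nn.functional as F

def combine_embeddings(emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
    # l2-normalize each embedding, scale by 0.5, then sum (per the quote above)
    return 0.5 * F.normalize(emb_a, dim=-1) + 0.5 * F.normalize(emb_b, dim=-1)

# e.g., combining the video and image embeddings from the snippet above:
# joint = combine_embeddings(embeddings['video'], embeddings['image'])
```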
They used OpenCLIP weights to initialize each modality's encoder and fine-tuned it, together with an additional linear projection layer, in a multi-modal joint manner. So there are no intermediate checkpoints trained without a specific modality; you simply skip loading the encoders you don't need.
@e1four15f Oh, I got it! Thank you for your kind reply. I’ll check the checkpoint ✌️