
Get the multimodal embeddings

Open hessaAlawwad opened this issue 1 year ago • 2 comments

Thank you for the great model.

I wonder how I can get the multimodal embedding of different inputs, like an image and its caption, using ImageBind?

If I can get that, how would it compare to CLIP?

hessaAlawwad avatar May 21 '24 13:05 hessaAlawwad

Please bear with me if my questions don't make sense; I am still learning. I see that after I give it an input consisting of two modalities (text and image), it returns two separate embeddings.

import torch
from imagebind import data
from imagebind.models.imagebind_model import ModalityType

# Load data
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(text_list, device),
    ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device),
}

with torch.no_grad():
    embeddings = model(inputs)

print(embeddings[ModalityType.VISION])
print(embeddings[ModalityType.TEXT])

But I couldn't find anything about getting a single embedding for both.

hessaAlawwad avatar May 22 '24 05:05 hessaAlawwad

As far as I know, there is no joint multimodal embedding. You get one embedding per modality and compare them (e.g. text–image similarity) to see whether they match; that, in short, is the idea behind both ImageBind and CLIP.
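The per-modality comparison described above can be sketched with plain cosine similarity. This is a minimal illustration using NumPy with dummy vectors standing in for the tensors ImageBind returns; `cosine_similarity_matrix` is a hypothetical helper, not part of ImageBind's API:

```python
import numpy as np

def cosine_similarity_matrix(a, b):
    """Pairwise cosine similarity between the rows of a and the rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Dummy stand-ins for embeddings[ModalityType.VISION] / embeddings[ModalityType.TEXT]
vision_emb = np.array([[0.9, 0.1], [0.1, 0.9]])
text_emb = np.array([[1.0, 0.0], [0.0, 1.0]])

sim = cosine_similarity_matrix(vision_emb, text_emb)
best_caption = sim.argmax(axis=1)  # index of the best-matching caption per image
```

With real ImageBind outputs you would pass `embeddings[ModalityType.VISION]` and `embeddings[ModalityType.TEXT]` (moved to CPU/NumPy) instead of the dummy arrays.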

If you insist on getting a single embedding, a naive way is to sum or average the per-modality embeddings, but I don't find that approach appealing.
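For completeness, the naive fusion mentioned above could look like the sketch below; `fuse_embeddings` is a hypothetical helper, and averaging or concatenating assumes both embeddings live in the same joint space (which is what ImageBind trains for), not that the result is a principled joint embedding:

```python
import numpy as np

def fuse_embeddings(text_emb, vision_emb, mode="mean"):
    """Naively fuse two per-modality embeddings into one vector (illustrative only)."""
    if mode == "mean":
        return (text_emb + vision_emb) / 2.0  # elementwise average
    if mode == "concat":
        return np.concatenate([text_emb, vision_emb], axis=-1)  # stack dimensions
    raise ValueError(f"unknown mode: {mode}")

t = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
fused_mean = fuse_embeddings(t, v)             # a 2-d averaged vector
fused_cat = fuse_embeddings(t, v, "concat")    # a 4-d concatenated vector
```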

lixinghe1999 avatar Jun 25 '24 02:06 lixinghe1999