ViT-Lens
[CVPR 2024] ViT-Lens: Towards Omni-modal Representations
I was trying to apply this model to my own data but was not getting good results. I ran the NYUv2 dataset through my code, and the results seem to be...
Hi, I have checked the CLIP-vision embedding (the last hidden state) of BLIP-2 and InstructBLIP on Hugging Face (instructblip-vicuna-7b); its dimension is 257x1408. However, the multi-modal matching space of ViT-Lens uses a 1x768 dimension. I...
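For context on the mismatch being asked about, here is a minimal sketch: the BLIP-2/InstructBLIP vision tower emits a token sequence of shape (batch, 257, 1408), while a CLIP-style matching space expects a single (batch, 768) vector. The pooling and `proj` layer below are purely illustrative stand-ins, not the actual ViT-Lens head.

```python
import torch
import torch.nn as nn

# Illustrative shapes only: the BLIP-2 / InstructBLIP vision tower (EVA ViT-g)
# returns 257 tokens (1 [CLS] + 256 patch tokens) with hidden size 1408.
vision_tokens = torch.randn(1, 257, 1408)

# Hypothetical reduction to a single 768-d vector for a CLIP-style matching
# space; this is NOT ViT-Lens's projection, just one way to bridge the shapes.
proj = nn.Linear(1408, 768)
cls_token = vision_tokens[:, 0]        # (1, 1408) -- take the [CLS] token
matching_embedding = proj(cls_token)   # (1, 768)

print(matching_embedding.shape)  # torch.Size([1, 768])
```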
The justification in the paper for using disparity is "scale normalization". I know that this comes from OmniVore and ImageBind. However, this does not actually achieve scale normalization. What could...
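For reference, the depth-to-disparity conversion described in those papers is roughly the sketch below; the focal length and baseline values are placeholders (NYUv2 has no real stereo baseline), and whether dividing by depth actually removes scene-scale differences is exactly the question raised above.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_length_px, baseline_m, eps=1e-6):
    """Convert a metric depth map to disparity, in the spirit of the
    OmniVore/ImageBind preprocessing (sketch with placeholder camera params)."""
    return (focal_length_px * baseline_m) / np.maximum(depth_m, eps)

# Placeholder values purely for illustration: a Kinect-like focal length and a
# nominal baseline, applied to a random NYUv2-sized depth map.
depth = np.random.uniform(0.5, 10.0, size=(480, 640)).astype(np.float32)
disparity = depth_to_disparity(depth, focal_length_px=518.0, baseline_m=0.075)
```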