
Suggest your favorite papers to add!

lucidrains opened this issue 3 years ago · 20 comments

will start with

  1. FILIP https://arxiv.org/abs/2111.07783
  2. CLOOB https://arxiv.org/abs/2110.11316
  3. https://arxiv.org/abs/2110.05208

lucidrains avatar Dec 01 '21 02:12 lucidrains

Florence https://arxiv.org/abs/2111.11432

Mut1nyJD avatar Dec 01 '21 18:12 Mut1nyJD

Would it be possible to explicitly target the same API created by open ai for their CLIP? This way it can be used as a drop-in replacement in e.g. CLIP-guidance notebooks (but anywhere else CLIP is used as well, which is a lot of places).

I think this would basically amount to using the same function signatures for clip.load(), encode_image, encode_text, etc. Not sure how limiting that could be in practice.
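
For illustration, a minimal sketch of what such a drop-in wrapper could look like (the wrapper class and the embed_image / embed_text calls are placeholders, not the actual x-clip API; only the encode_image / encode_text names mirror the OpenAI package):

import torch

# Hypothetical adapter sketch (the embed_image / embed_text attribute names are
# placeholders, not the real x-clip API): expose the OpenAI-style surface that
# CLIP-guidance notebooks already call.
class OpenAICompatWrapper(torch.nn.Module):
    def __init__(self, xclip_model):
        super().__init__()
        self.model = xclip_model

    @torch.no_grad()
    def encode_image(self, images):
        # images: (batch, 3, H, W) float tensor -> (batch, dim) image embeddings
        return self.model.embed_image(images)   # placeholder call

    @torch.no_grad()
    def encode_text(self, tokens):
        # tokens: (batch, seq_len) int tensor -> (batch, dim) text embeddings
        return self.model.embed_text(tokens)    # placeholder call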

afiaka87 avatar Dec 01 '21 20:12 afiaka87

sure! but i'm also thinking of extending this to any number of modalities (audio, biosequences, etc)

lucidrains avatar Dec 02 '21 04:12 lucidrains

LiT: Zero-Shot Transfer with Locked-image Text Tuning https://arxiv.org/abs/2111.07991. In particular, I think it would be interesting to be able to transfer the weights of existing models (CLIP image and text encoders, but also other pretrained encoders) into this implementation and then continue training. Do you think there could be some good ways to do that?

rom1504 avatar Dec 06 '21 10:12 rom1504

MURAL: Multimodal, Multitask Retrieval Across Languages: https://arxiv.org/abs/2109.05125

RenShuhuai-Andy avatar Dec 09 '21 08:12 RenShuhuai-Andy

Combined Scaling for Zero-shot Transfer Learning

https://arxiv.org/abs/2111.10050

haofanwang avatar Dec 10 '21 09:12 haofanwang

LiT: Zero-Shot Transfer with Locked-image Text Tuning https://arxiv.org/abs/2111.07991. In particular, I think it would be interesting to be able to transfer the weights of existing models (CLIP image and text encoders, but also other pretrained encoders) into this implementation and then continue training. Do you think there could be some good ways to do that?

yup, i think it'll end up something like

clip = CLIP(
    vision_model = vit_transformer,
    text_model = text_transformer,
    ...
)
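
as a rough sketch of the LiT-style workflow on top of that (the encoders below are toy placeholders, and the freezing pattern is illustrative rather than the actual x-clip API):

import torch
from torch import nn

# LiT-style sketch (not the x-clip API): start from pretrained towers, freeze
# ("lock") the image tower, and train only the text tower against the contrastive
# objective. The two encoders here are toy stand-ins for pretrained modules.
vit_transformer  = nn.Sequential(nn.Linear(768, 512), nn.GELU(), nn.Linear(512, 512))
text_transformer = nn.Sequential(nn.Linear(768, 512), nn.GELU(), nn.Linear(512, 512))

for p in vit_transformer.parameters():
    p.requires_grad = False                      # locked image tower

optimizer = torch.optim.AdamW(text_transformer.parameters(), lr = 1e-4)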

lucidrains avatar Dec 11 '21 18:12 lucidrains

CLIP-Lite: Information Efficient Visual Representation Learning from Textual Annotations: https://arxiv.org/pdf/2112.07133.pdf

antofuller avatar Dec 16 '21 23:12 antofuller

RegionCLIP: https://arxiv.org/abs/2112.09106v1

They encourage region-level representations by using the released CLIP both to detect objects and to generate region-level captions for objects in a scene, which then becomes the dataset for fine-tuning an object detection task. Still reading, but I believe it's a Microsoft paper.

afiaka87 avatar Dec 18 '21 20:12 afiaka87

Hi, I would just like to ask if it is possible to make your models scriptable? It looks like the lambda functions make it problematic for a normal user. The good thing about TorchScript is that it would allow export to ONNX, TensorRT, etc.
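
For context, a minimal illustration of the scripting issue (not x-clip's actual code): torch.jit.script cannot compile a module that stores a Python lambda, but works once the lambda is replaced by an equivalent nn.Module.

import torch
from torch import nn

# Minimal illustration (not x-clip's code): a lambda stored on a module blocks
# torch.jit.script, while an equivalent nn.Module scripts fine.
class WithLambda(nn.Module):
    def __init__(self):
        super().__init__()
        self.act = lambda x: x * torch.sigmoid(x)   # not scriptable

    def forward(self, x):
        return self.act(x)

class Scriptable(nn.Module):
    def __init__(self):
        super().__init__()
        self.act = nn.SiLU()                        # equivalent, scriptable

    def forward(self, x):
        return self.act(x)

scripted = torch.jit.script(Scriptable())           # works
# torch.jit.script(WithLambda())                    # raises a TorchScript compile error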

batrlatom avatar Dec 19 '21 22:12 batrlatom

https://github.com/facebookresearch/SLIP: they combine the losses of CLIP (vision + language) and SimCLR (vision only) and get better zero-shot accuracy on a 15M-sample dataset than CLIP trained on the same dataset. Hopefully the accuracies would be even better at large scale.
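
As a rough sketch of that combination (the embeddings and the 1.0 weight on the SimCLR term are placeholders, not the SLIP code):

import torch
import torch.nn.functional as F

# Sketch of the combined objective: a CLIP-style image-text InfoNCE plus a
# SimCLR-style image-image InfoNCE over two augmented views.
def info_nce(a, b, temperature = 0.1):
    # symmetric InfoNCE between two batches of embeddings
    a, b = F.normalize(a, dim = -1), F.normalize(b, dim = -1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.shape[0])
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

image_emb = torch.randn(8, 512)   # image embeddings (placeholder)
text_emb  = torch.randn(8, 512)   # paired caption embeddings (placeholder)
aug1_emb  = torch.randn(8, 512)   # embeddings of two augmented views of the same images
aug2_emb  = torch.randn(8, 512)

loss = info_nce(image_emb, text_emb) + 1.0 * info_nce(aug1_emb, aug2_emb)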

rom1504 avatar Dec 24 '21 10:12 rom1504

https://github.com/FreddeFrallan/Multilingual-CLIP works pretty well, even though they used very few resources. Basically, they took an existing text model and aligned it with the existing CLIP image encoder.

Here's one example showing that it works well:

Searching for "blue dress" in Korean.

With CLIP:

https://rom1504.github.io/clip-retrieval/?back=https%3A%2F%2Fknn.laion.ai&index=laion_400m_128G&useMclip=false&query=%ED%8C%8C%EB%9E%80+%EB%93%9C%EB%A0%88%EC%8A%A4

With mCLIP:

https://rom1504.github.io/clip-retrieval/?back=https%3A%2F%2Fknn.laion.ai&index=laion_400m_128G&useMclip=true&query=%ED%8C%8C%EB%9E%80+%EB%93%9C%EB%A0%88%EC%8A%A4

(Many other examples can be tried on that UI.)

I think we may be able to learn something from their approach

Edit: in practice, I believe we already have what we need in the code here: the ability to plug in some text encoder.
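
For reference, a minimal sketch of that kind of alignment (placeholders throughout, not the Multilingual-CLIP code): keep the pretrained CLIP embeddings frozen and train a new text encoder to land in the same embedding space.

import torch
from torch import nn
import torch.nn.functional as F

# Rough alignment sketch: regress a new text encoder's outputs onto frozen,
# precomputed CLIP embeddings so the new encoder lands in the same space.
new_text_encoder = nn.Sequential(nn.Linear(768, 512), nn.GELU(), nn.Linear(512, 512))
optimizer = torch.optim.AdamW(new_text_encoder.parameters(), lr = 1e-4)

caption_features    = torch.randn(32, 768)   # placeholder pooled caption features
frozen_clip_targets = torch.randn(32, 512)   # placeholder precomputed CLIP embeddings

pred = new_text_encoder(caption_features)
loss = F.mse_loss(F.normalize(pred, dim = -1), F.normalize(frozen_clip_targets, dim = -1))
loss.backward()
optimizer.step()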

rom1504 avatar Dec 25 '21 22:12 rom1504

https://arxiv.org/abs/2112.09133

Any plans to implement MaskFeat? @lucidrains

haofanwang avatar Jan 25 '22 09:01 haofanwang

@haofanwang ohh nope, this doesn't look like it is related to contrastive learning

i could add it to https://github.com/lucidrains/vit-pytorch, but i'd have to understand HOGs (histograms of oriented gradients) better
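
for context, MaskFeat regresses HOG features of the masked patches rather than raw pixels; a quick illustrative look at such features with scikit-image (parameters are not the paper's exact settings):

import numpy as np
from skimage.feature import hog

# Illustrative only (not MaskFeat's exact settings): HOG descriptors of an image;
# MaskFeat regresses per-patch histograms like these for the masked patches,
# rather than raw pixel values.
image = np.random.rand(224, 224).astype(np.float32)   # placeholder grayscale image

features = hog(
    image,
    orientations = 9,           # gradient orientation bins
    pixels_per_cell = (8, 8),   # local cell size for the histograms
    cells_per_block = (1, 1),   # one cell per block, i.e. no block grouping
    feature_vector = True,
)
print(features.shape)           # (28 * 28 * 9,) for a 224x224 image with 8x8 cells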

lucidrains avatar Jan 27 '22 01:01 lucidrains

@lucidrains BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, code

this is a great paper :) but it also already came with code!

lucidrains avatar Mar 02 '22 16:03 lucidrains

Hi @lucidrains,

I hope you are doing well! We miss you over at the EAI Discord!

This could be very interesting for x-clip: “FLAVA - A Foundational Language And Vision Alignment Model”, https://arxiv.org/abs/2112.04482

However, the official code seems to be on the way too: https://github.com/facebookresearch/mmf/issues/1219#issuecomment-1082160255 & https://github.com/facebookresearch/multimodal

All the best, Michael

MicPie avatar Apr 24 '22 08:04 MicPie

@MicPie hey Michael! miss you too :heart: thanks for the share, i'll give it a read later tonight after i finish some code

lucidrains avatar Apr 25 '22 19:04 lucidrains

Looks interesting: "CoCa - Contrastive Captioners are Image-Text Foundation Models" https://arxiv.org/abs/2205.01917

“Unlike standard decoder transformers, CoCa omits cross-attention in the first half of the decoder layers to encode unimodal text representations, and cascades the rest of the decoder layers, cross-attending to the image encoder for multimodal image-text representations.”
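
A rough structural sketch of that layout (placeholder modules, not CoCa's implementation; layer norms and causal masking are omitted for brevity):

import torch
from torch import nn

# The first half of the decoder layers are unimodal (self-attention only);
# the second half also cross-attend to the image encoder's tokens.
class DecoderLayer(nn.Module):
    def __init__(self, dim, heads, cross_attend):
        super().__init__()
        self.self_attn  = nn.MultiheadAttention(dim, heads, batch_first = True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first = True) if cross_attend else None
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, text_tokens, image_tokens = None):
        x = text_tokens
        x = x + self.self_attn(x, x, x, need_weights = False)[0]
        if self.cross_attn is not None:
            x = x + self.cross_attn(x, image_tokens, image_tokens, need_weights = False)[0]
        return x + self.ff(x)

depth, dim, heads = 8, 512, 8
layers = nn.ModuleList([
    DecoderLayer(dim, heads, cross_attend = (i >= depth // 2))  # cross-attention only in the second half
    for i in range(depth)
])

text  = torch.randn(2, 77, dim)    # placeholder text tokens
image = torch.randn(2, 196, dim)   # placeholder image tokens from the vision encoder

for layer in layers:
    text = layer(text, image)
# the unimodal (first-half) representation would feed the contrastive loss,
# the final multimodal output would feed the captioning head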

MicPie avatar May 05 '22 08:05 MicPie

Florence https://arxiv.org/abs/2111.11432

Please refer to our UniCL repo for the core algorithm used in Florence: https://github.com/microsoft/UniCL

jwyang avatar May 21 '22 07:05 jwyang