
Readability of CLIP notebook

josh-freeman opened this issue 2 years ago · 4 comments

  • Deleted the modification of clip.clip.MODELS, as it is no longer needed (all models are now there by default; see the last commit).

  • The same 17 lines of code appeared twice in interpret; I replaced them with a function. I also added an assertion to this function that checks whether the model comes from Hila Chefer's version of CLIP or an equivalent (https://github.com/hila-chefer/Transformer-MM-Explainability/tree/main/CLIP).

  • Added a mask_from_relevance function for users who want to display the attention mask in some way other than show_cam_on_image (see the sketch below).
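
For readers skimming the diff, here is a minimal sketch of what such a helper could look like; the signature, default sizes, and normalization below are illustrative assumptions, not the PR's actual code:

```python
import torch
import torch.nn.functional as F

def mask_from_relevance(relevance, image_size=224, patch_size=32):
    """Illustrative sketch (not the PR's code): turn a flat per-patch
    relevance vector (CLS token already removed) into a [0, 1] mask
    upsampled to the input image resolution."""
    side = image_size // patch_size                   # e.g. 224 // 32 = 7 patches per side
    mask = relevance.reshape(1, 1, side, side).float()
    mask = F.interpolate(mask, size=(image_size, image_size),
                         mode="bilinear", align_corners=False)
    mask = mask.squeeze().detach().cpu().numpy()
    return (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
```

A caller could then overlay the returned array with matplotlib, save it to disk, or threshold it, rather than being tied to show_cam_on_image.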

josh-freeman · Jan 08 '23

Oh I forgot to mention:

I also added a bit of documentation to the interpret function.

josh-freeman · Jan 08 '23

Hi @josh-freeman, thanks for your contribution to this repo! It'll take me some time to review and approve your PR since it contains a significant number of changes; I'll get to it ASAP.

hila-chefer · Feb 02 '23

No worries. I'm pretty sure the large diff is mostly an artifact of something like CRLF-to-LF line-ending conversion; I'm surprised it says I changed that much.

josh-freeman · Feb 02 '23

Dear all,

I have a problem with ViLT. I am trying to reproduce the VisualBERT example so I can implement the same approach for ViLT. Could you point me to where the save_visual_results function is defined? I use ViLT as the multimodal transformer, but I cannot use num_tokens = image_attn_blocks[0].attn_probs.shape[-1] to set the number of tokens. For example, for ViLT on the VQA task with a 384*384 image, the number of mixed vision and text tokens is 185 including the CLS token: 144 vision tokens and 40 text tokens (the max length).
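
For concreteness, the 185 works out as follows (a minimal sketch; the 32*32 patch size and max text length of 40 are assumptions about my setup, not something taken from this repo):

```python
# Assumed ViLT setup: 384x384 input, 32x32 patches, max text length 40.
image_size = 384
patch_size = 32
num_image_tokens = (image_size // patch_size) ** 2   # 12 * 12 = 144 vision tokens
max_text_len = 40                                    # text tokens at the max length
num_tokens = 1 + num_image_tokens + max_text_len     # 1 CLS + 144 + 40 = 185
print(num_tokens)  # 185
```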

Thanks very much

guanhdrmq · Aug 31 '23