AlphaCLIP
[CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
Hi, congratulations on this awesome paper. Are you planning to finetune the model on the CLIP ViT-H/14 architecture in the near future? Thanks in advance.
```
UnpicklingError                           Traceback (most recent call last)
      7 # load model and prepare mask transform
      8 device = "cuda" if torch.cuda.is_available() else "cpu"
----> 9 model, preprocess =...
```
When I run app.py in demo/with_llm, the "send" button does nothing. I have downloaded all the required checkpoints. This is my terminal error: ![image](https://github.com/SunzeY/AlphaCLIP/assets/129270219/e03bd209-55f6-40ac-b2b4-ee8c4a49bb6a) ![image](https://github.com/SunzeY/AlphaCLIP/assets/129270219/6238e34a-627a-4a6e-9c75-df15cc74dc4d)
The demo I deployed locally didn't work; there is no output in the terminal. ![image](https://github.com/SunzeY/AlphaCLIP/assets/76839625/e4c1c479-dd86-4eda-a507-32dbe1482aec)
Thank you for your work. Considering that the captions in the GRIT dataset consist solely of noun words like _berries_, _person_, ... did you use **Templates** to expand the captions, such...
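For context on what such expansion looks like, the original CLIP work applies a fixed set of prompt templates to bare class nouns. A minimal sketch of that kind of expansion; the template list below is an illustrative subset and an assumption, not necessarily what the AlphaCLIP authors used:

```python
# A small subset of CLIP-style prompt templates (illustrative assumption;
# which templates, if any, were applied to GRIT captions is the question here).
TEMPLATES = [
    "a photo of a {}.",
    "a cropped photo of the {}.",
    "a close-up photo of a {}.",
]

def expand_caption(noun: str) -> list[str]:
    """Expand a bare noun caption into full template sentences."""
    return [t.format(noun) for t in TEMPLATES]
```

For example, `expand_caption("person")` yields three full sentences from the single noun, which is the usual way short class labels are turned into CLIP-friendly text.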