
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)

55 CoOp issues

Assuming the experimental dataset is COCO 2014, how should I define the classnames?
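
For context, CoOp's dataset classes (built on Dassl) expose a `classnames` attribute that is used to build the prompts; COCO 2014 is a detection/captioning dataset, so a classification labeling has to be chosen by the user. A minimal sketch, assuming a flat list over COCO's 80 object categories (names abbreviated here, list truncated for illustration):

```python
# Hedged sketch, not code from the CoOp repo: a flat classnames list for
# COCO's 80 object categories, in the form CoOp uses to build prompts
# such as "a photo of a {classname}".
classnames = [
    "person", "bicycle", "car", "motorcycle", "airplane",
    "bus", "train", "truck", "boat", "traffic light",
    # ... the remaining COCO categories ...
]
```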

Hi Kaiyang, thanks for your amazing work! I notice that `cross_entropy(output, label)` is used in your training. I wonder if it is possible to replace it with the similarity between text and image,...
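
Worth noting: in CoOp the `output` fed to cross-entropy is already the image-text similarity, so the two are not alternatives. A paraphrased sketch (not the repo's exact code) of what those logits are:

```python
import torch

# Sketch: CoOp's logits are the cosine similarity between the image
# feature and each class's text feature, scaled by CLIP's learned
# temperature. Cross-entropy over these similarities is the standard
# objective, so "replacing" cross-entropy usually means changing the
# loss, not the similarity computation.
def clip_logits(image_features, text_features, logit_scale):
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    return logit_scale * image_features @ text_features.t()  # (B, num_classes)

# loss = torch.nn.functional.cross_entropy(clip_logits(img_f, txt_f, scale), label)
```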

Thank you for your contribution. I found that training is slower when using multiple GPUs (e.g., 8 GPUs) than a single GPU. Do you know why that is and how to...
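
A likely factor, assuming the default single-process path is taken: Dassl (which CoOp builds on) wraps the model in `nn.DataParallel` when more than one GPU is visible, and DataParallel replicates the model and scatters/gathers tensors on every step, which can make small models slower on 8 GPUs than on 1. A minimal sketch of the two options:

```python
import torch
import torch.nn as nn

# Hedged sketch: nn.Linear stands in for the prompt learner + CLIP head.
model = nn.Linear(512, 100)
if torch.cuda.device_count() > 1:
    # Single-process data parallelism: per-step replication overhead.
    model = nn.DataParallel(model.cuda())
# DistributedDataParallel (one process per GPU, launched with torchrun)
# usually scales better; it requires init_process_group first:
#   model = nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[rank])
```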

Hi, congratulations on your wonderful work! Could you please provide the raw data you used in Figure 3 of your paper? My email is [email protected]. Many thanks!

So, you might find OpenAI's [code](https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb) produces around 59% accuracy for zero-shot CLIP (`vision_model=RN50`) on ImageNet with prompt ensembling, but CoOp's code gives only 57.81% for the same model (see...
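
For reference, the ensembling step in that notebook amounts to averaging L2-normalized text embeddings over a set of templates and renormalizing; small accuracy gaps often trace back to differing template sets or preprocessing. A condensed sketch (templates and classnames shortened for illustration):

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

templates = ["a photo of a {}.", "a bad photo of a {}.", "a sculpture of a {}."]
classnames = ["goldfish", "tench"]  # illustration only

with torch.no_grad():
    weights = []
    for name in classnames:
        texts = clip.tokenize([t.format(name) for t in templates]).to(device)
        emb = model.encode_text(texts)
        emb = emb / emb.norm(dim=-1, keepdim=True)  # normalize per template
        mean = emb.mean(dim=0)                      # average over templates
        weights.append(mean / mean.norm())          # renormalize the ensemble
    zeroshot_weights = torch.stack(weights, dim=1)  # (dim, num_classes)
```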

Hello! As described in the README, CoOp supports the datasets configured in CoOp/configs/datasets/. If I want to try CoOp on my own dataset, how can I do that? Looking forward to...
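
A hedged sketch of the usual route, following the pattern of the files in CoOp/datasets/ (CoOp registers datasets through Dassl's registry; `MyDataset`, `read_split`, and the folder layout below are placeholders, and a matching yaml file under configs/datasets/ would still be needed):

```python
import os
from dassl.data.datasets import DATASET_REGISTRY, Datum, DatasetBase

@DATASET_REGISTRY.register()
class MyDataset(DatasetBase):
    dataset_dir = "my_dataset"  # subfolder under cfg.DATASET.ROOT

    def __init__(self, cfg):
        root = os.path.abspath(os.path.expanduser(cfg.DATASET.ROOT))
        self.dataset_dir = os.path.join(root, self.dataset_dir)
        train = self.read_split("train")
        val = self.read_split("val")
        test = self.read_split("test")
        super().__init__(train_x=train, val=val, test=test)

    def read_split(self, split):
        # Each sample is a Datum with an image path, integer label, and a
        # human-readable classname (the classname is used to build prompts).
        items = []
        split_dir = os.path.join(self.dataset_dir, split)
        for label, classname in enumerate(sorted(os.listdir(split_dir))):
            class_dir = os.path.join(split_dir, classname)
            for fname in os.listdir(class_dir):
                impath = os.path.join(class_dir, fname)
                items.append(Datum(impath=impath, label=label, classname=classname))
        return items
```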

Does this require training from scratch? Can't I just load the pretrained weights and use them to compute text or image embeddings, like the original CLIP?
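
For plain embeddings, yes: CoOp keeps the CLIP backbone frozen and only learns the context tokens, so the encoders can be used straight from the original CLIP weights (a minimal sketch with OpenAI's `clip` package; `example.jpg` is a placeholder):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)  # frozen CLIP image encoder
    text_features = model.encode_text(text)     # frozen CLIP text encoder
```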

Hello, I would like to ask a question: when running the linear-probe CLIP experiment (ViT-B/32), which parameters should be set as tunable? Is it clip_model.visual.ln_post and clip_model.ln_final?
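
For context, the linear-probe protocol in the CLIP paper (which CoOp's comparison follows) tunes no CLIP parameters at all: a logistic-regression classifier is fit on frozen image features. A hedged sketch of that protocol (the `C` value mirrors CLIP's README example; `train_loader` is a placeholder):

```python
import torch
import clip
import numpy as np
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def extract_features(loader):
    # Every CLIP parameter stays frozen; only features are extracted.
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            f = model.encode_image(images.to(device))
            feats.append(f.cpu().numpy())
            labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# train_feats, train_labels = extract_features(train_loader)
# classifier = LogisticRegression(C=0.316, max_iter=1000)
# classifier.fit(train_feats, train_labels)
```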

Hi, thanks for the great work, but I found that it is hard to reproduce the results in the paper. For example, using the released checkpoints in [https://github.com/KaiyangZhou/CoOp#models-and-results](https://github.com/KaiyangZhou/CoOp#models-and-results), the results...

I have tried changing the optimizer attributes to an Adam optimizer with different LR scheduling and Adam-specific parameters, but at runtime it overwrites the LR scheduler parameters and...
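
This is expected if the optimizer object is edited directly: CoOp rebuilds its optimizer and scheduler from Dassl's `cfg.OPTIM` node, so changes belong in the config. A minimal sketch, assuming the key names in Dassl's default config (worth verifying against dassl/config/defaults.py); in practice these would go in a yaml file under configs/trainers/:

```python
from yacs.config import CfgNode as CN

# Hedged sketch of the Adam-related OPTIM keys (assumed, not verified
# against your Dassl version).
cfg = CN()
cfg.OPTIM = CN()
cfg.OPTIM.NAME = "adam"
cfg.OPTIM.LR = 0.002
cfg.OPTIM.ADAM_BETA1 = 0.9
cfg.OPTIM.ADAM_BETA2 = 0.999
cfg.OPTIM.LR_SCHEDULER = "cosine"
cfg.OPTIM.MAX_EPOCH = 50
```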