
How to run your model on CLIP_benchmark?

Open xiaohoua opened this issue 10 months ago • 1 comment

clip_benchmark eval --dataset=tfds/cifar10 --task=zeroshot_classification --pretrained=laion400m_e32 --model=ViT-B-32-quickgelu --output=result.json --batch_size=64

This runs successfully on my machine, but

clip_benchmark eval --dataset=cifar10 --task=zeroshot_classification --pretrained=./test_model/ViT-L-14_laion400m_kd_ViT-B-16_cc3m_12m_ep32.pt --model=ViT-L-14 --output=result.json --batch_size=64

this gets an error:


  File "/data/home/miniconda3/envs/clip_benchmark/lib/python3.8/site-packages/torch/serialization.py", line 1096, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options 
        (1) Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
        (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
        WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
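
If I understand the error correctly, the two options it mentions would look roughly like this (just a rough sketch on my side, using my local checkpoint path, and only because I trust the file):

import numpy
import torch

# Option (2) from the error message: allowlist the numpy scalar global that the
# checkpoint pickles, then load with the default weights_only=True behaviour.
torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])
ckpt = torch.load(
    "./test_model/ViT-L-14_laion400m_kd_ViT-B-16_cc3m_12m_ep32.pt",
    map_location="cpu",
)

# Option (1): skip the safe unpickler entirely (arbitrary code execution risk,
# so only for files from a trusted source).
# ckpt = torch.load("./test_model/ViT-L-14_laion400m_kd_ViT-B-16_cc3m_12m_ep32.pt",
#                   map_location="cpu", weights_only=False)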

Can you help me solve this problem? And here is the code showing how CLIP_benchmark loads an open_clip model:

def load_open_clip(model_name: str = "ViT-B-32-quickgelu", pretrained: str = "laion400m_e32", cache_dir: str = None, device="cpu", **kwargs):
    model, _, transform = open_clip.create_model_and_transforms(model_name, pretrained=pretrained, cache_dir=cache_dir)
    model = model.to(device)
    tokenizer = open_clip.get_tokenizer(model_name)
    return model, transform, tokenizer

xiaohoua • Feb 20 '25 12:02

And when I try:

import open_clip
model, _, transform = open_clip.create_model_and_transforms(model_name="ViT-L-14", pretrained="./test_model/ViT-L-14_laion400m_kd_ViT-B-16_cc3m_12m_ep32.pt")

it gets the same error. But when I load the open_clip model with model, _, transform = open_clip.create_model_and_transforms(model_name="ViT-L-14", pretrained="/clip_model/ViT-L-14__laion400m_e32/open_clip_pytorch_model.bin") it works fine.
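
My guess is that the official open_clip_pytorch_model.bin is a plain state_dict of tensors, while the KD .pt file is a full training checkpoint that also pickles numpy scalars. A workaround I am considering is to re-save only the weights so the default safe loading path works (a rough sketch; the "state_dict" key and the "module." prefix are just my assumptions about the checkpoint layout):

import torch

# Load the full KD checkpoint (trusted local file), bypassing weights_only.
ckpt = torch.load(
    "./test_model/ViT-L-14_laion400m_kd_ViT-B-16_cc3m_12m_ep32.pt",
    map_location="cpu",
    weights_only=False,
)
# Pull out the model weights; fall back to the object itself if it already is a state_dict.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
# Strip a possible DistributedDataParallel "module." prefix from the keys.
state_dict = {k[len("module."):] if k.startswith("module.") else k: v
              for k, v in state_dict.items()}
# Save only the tensors and point --pretrained at this new file instead.
torch.save(state_dict, "./test_model/ViT-L-14_kd_state_dict_only.pt")

Would that be the right way to use the released KD checkpoints with CLIP_benchmark?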

xiaohoua • Feb 20 '25 12:02