CLIP
"None" gradients when using fine-tuned CLIP
Running the following snippet prints `None` gradients for some CLIP parameters, such as `positional_embedding`:
for name, p in self.model.named_parameters():
    print(name, p.grad)
where
import torch
import clip

# Load the base CLIP model, then overwrite its weights with my fine-tuned checkpoint.
myclip, _ = clip.load(args.clip_vision_encoder, jit=False)
checkpoint = torch.load(args.clip_path)
myclip.load_state_dict(checkpoint['state_dict'])
self.model = myclip.float()
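
In case it helps, here is a minimal, self-contained sketch of what I mean, using the stock "ViT-B/32" weights rather than my fine-tuned checkpoint: when the backward pass only goes through the image tower, the text-side parameters such as positional_embedding keep .grad == None, since they never entered the computation graph.

import torch
import clip

model, _ = clip.load("ViT-B/32", device="cpu", jit=False)
model = model.float()

# Forward/backward through the image encoder only.
image = torch.randn(1, 3, 224, 224)
model.encode_image(image).sum().backward()

# Text-side parameters (positional_embedding, token_embedding, ...)
# were not used in this forward pass, so their gradients are still None.
for name, p in model.named_parameters():
    if p.grad is None:
        print(name)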
Is this behavior expected?