@sanjaygunda13 I never tried processing Base64 input; you may need to decode the Base64 string into a PIL image first.
Originally posted by @vinson2233 in https://github.com/openai/CLIP/issues/83#issuecomment-1186712343
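Along those lines, a minimal sketch of decoding a Base64 string into a PIL image that CLIP's preprocess can then consume (the helper name base64_to_pil is illustrative, not from the thread):

```python
import base64
import io

from PIL import Image


def base64_to_pil(b64_string: str) -> Image.Image:
    """Decode a Base64-encoded image string into an RGB PIL Image."""
    image_bytes = base64.b64decode(b64_string)
    return Image.open(io.BytesIO(image_bytes)).convert("RGB")
```

The returned Image can be passed directly to the preprocess transform returned by clip.load, e.g. preprocess(base64_to_pil(b64)).unsqueeze(0).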
Thanks for the update. I am now able to train CLIP, but I cannot load the saved model: I get a serialization error and I am not sure what mistake I have made. Let me know if you have any leads on the error below.
RuntimeError                              Traceback (most recent call last)
Input In [8], in <cell line: 7>()
      2 # model, preprocess = clip.load("ViT-B/32", device=device, jit=False)
      6 model, preprocess = clip.load("ViT-B/32", device=device, jit=False)  # Must set jit=False for training
----> 7 checkpoint = torch.load("/home/sanjay/model_data/model_10.pt")
      9 # # Use these 3 lines if you use default model setting (not training setting) of the clip. For example, if you set context_length to 100 since your string is very long during training, then assign 100 to checkpoint['model_state_dict']["context_length"]
     10 # checkpoint['model_state_dict']["input_resolution"] = model.input_resolution  # default is 224
     11 # checkpoint['model_state_dict']["context_length"] = model.context_length  # default is 77
     12 # checkpoint['model_state_dict']["vocab_size"] = model.vocab_size
     14 model.load_state_dict(checkpoint['model_state_dict'])

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/serialization.py:594, in load(f, map_location, pickle_module, **pickle_load_args)
    592         opened_file.seek(orig_position)
    593         return torch.jit.load(opened_file)
--> 594     return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
    595 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/serialization.py:853, in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
    851 unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
    852 unpickler.persistent_load = persistent_load
--> 853 result = unpickler.load()
    855 torch._utils._validate_loaded_sparse_tensors()
    857 return result

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/serialization.py:845, in _load.

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/serialization.py:834, in _load.

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/serialization.py:175, in default_restore_location(storage, location)
    173 def default_restore_location(storage, location):
    174     for _, _, fn in _package_registry:
--> 175         result = fn(storage, location)
    176         if result is not None:
    177             return result

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/serialization.py:157, in _cuda_deserialize(obj, location)
    155     return storage_type(obj.size())
    156 else:
--> 157     return obj.cuda(device)

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/_utils.py:79, in _cuda(self, device, non_blocking, **kwargs)
     77 else:
     78     new_type = getattr(torch.cuda, self.__class__.__name__)
---> 79 return new_type(self.size()).copy_(self, non_blocking)

File ~/.conda/envs/clip_prefix_caption/lib/python3.9/site-packages/torch/cuda/__init__.py:462, in _lazy_new(cls, *args, **kwargs)
    459 _lazy_init()
    460 # We may need to call lazy init again if we are a forked child
    461 # del _CudaBase.__new__
--> 462 return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.90 GiB total capacity; 1.39 GiB already allocated; 10.75 MiB free; 1.47 GiB reserved in total by PyTorch)
PyTorch is quite memory sensitive, so make sure your CUDA memory is not already occupied by another process. Use the nvidia-smi command in the terminal to check GPU memory usage.
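Also note that by default torch.load restores each tensor onto the device it was saved from (here GPU 0), which is why the OOM happens inside torch.load itself. Passing map_location="cpu" deserializes the whole checkpoint on the CPU instead, so loading the file never allocates GPU memory. A minimal sketch (the helper name load_checkpoint_cpu is illustrative):

```python
import torch


def load_checkpoint_cpu(path: str) -> dict:
    # map_location="cpu" forces every stored tensor to be deserialized on the
    # CPU, so torch.load never tries to allocate memory on an occupied GPU.
    return torch.load(path, map_location="cpu")


# Usage, following the snippet in the traceback above:
# checkpoint = load_checkpoint_cpu("/home/sanjay/model_data/model_10.pt")
# model.load_state_dict(checkpoint["model_state_dict"])
# model = model.to("cuda")  # move to GPU only after memory is freed
```

load_state_dict copies the CPU tensors into the model's existing parameters, so the model can be moved to the GPU afterwards (or kept on CPU for inference).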