ConSERT
OSError when running main.py
I've been running into the following error when I run bash scripts/unsup-consert-base.sh:
Traceback (most recent call last):
  File "main.py", line 327, in <module>
    main(args)
  File "main.py", line 185, in main
    word_embedding_model = models.Transformer(args.model_name_or_path, attention_probs_dropout_prob=0.0, hidden_dropout_prob=0.0)
  File "/home/qmin/ConSERT/sentence_transformers/models/Transformer.py", line 36, in __init__
    self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
  File "/home/qmin/ConSERT/transformers/modeling_auto.py", line 629, in from_pretrained
    pretrained_model_name_or_path, *model_args, config=config, **kwargs
  File "/home/qmin/ConSERT/transformers/modeling_utils.py", line 954, in from_pretrained
    "Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Is there any workaround?
I suspect that the ./bert-base-uncased/pytorch_model.bin file is broken or incomplete. Could you check the file size and make sure it matches the size of the file hosted online? You can also try loading the checkpoint directly with PyTorch (import torch; data = torch.load("./bert-base-uncased/pytorch_model.bin", map_location="cpu")) and see whether it loads successfully. If anything is wrong, please re-download the file from https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin.
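In script form, the check looks roughly like this (the path comes from the traceback above, and the expected size is only an approximate figure):

import os
import torch

# Path taken from the traceback above; adjust it if your checkpoint lives elsewhere.
ckpt_path = "./bert-base-uncased/pytorch_model.bin"

# A complete bert-base-uncased pytorch_model.bin is roughly 400-450 MB;
# a much smaller file usually means the download was cut short.
size_mb = os.path.getsize(ckpt_path) / (1024 * 1024)
print(f"checkpoint size: {size_mb:.1f} MB")

# If the file is intact, torch.load returns the state dict without raising.
# A truncated or corrupted file fails here with an unpickling/runtime error,
# which is what transformers wraps into the OSError shown above.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(f"loaded {len(state_dict)} tensors from the checkpoint")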
As you suspected, the problem was the pytorch_model.bin file.
I re-downloaded it, and now bash scripts/unsup-consert-base.sh runs as expected.
Thank you :)