
Can't load the model for 'openai/clip-vit-large-patch14'.

Open · lichao252244354 opened this issue 3 years ago · 4 comments

Running python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms produces this error:

Can't load the model for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack

Could you help me fix this bug?
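For context, the failure most likely happens when the text encoder tries to fetch the CLIP tokenizer and weights from the Hugging Face Hub on first run; roughly equivalent to this minimal sketch (the exact call site in the repo may differ):

```python
# Minimal sketch of the download that txt2img.py triggers on first run
# (assumes the transformers package is installed; model id taken from the error message).
from transformers import CLIPTextModel, CLIPTokenizer

version = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(version)   # fetches tokenizer files from the Hub
text_model = CLIPTextModel.from_pretrained(version)  # fetches the model weights (pytorch_model.bin)
```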

lichao252244354 · Oct 28 '22

Download it from here: https://huggingface.co/openai/clip-vit-large-patch14
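If the machine can reach huggingface.co, one way to pre-download the whole repo into the cache that transformers looks in is shown below (a sketch, assuming the huggingface_hub package is installed):

```python
# Pre-download the model repo into ~/.cache/huggingface/hub/,
# where transformers' from_pretrained() will find it automatically.
from huggingface_hub import snapshot_download

path = snapshot_download(repo_id="openai/clip-vit-large-patch14")
print(path)  # local path of the cached snapshot
```

Alternatively, clone the repo with git lfs on a machine that does have access and copy it over; from_pretrained() also accepts a local directory path instead of the model id.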

lxlde · Nov 05 '22

Download it from here: https://huggingface.co/openai/clip-vit-large-patch14

Hi, I have the same problem. Which files should I download, and where do I need to put them? Thank you.

shonsubong · Dec 12 '22

Make sure that you can reach https://huggingface.co/ normally, since site-packages/transformers/tokenization_utils_base.py calls site-packages/transformers/utils/hub.py, which in turn invokes site-packages/huggingface_hub/file_download.py to perform the model download; any timeout can cause this failure. Also note the .cache directories under the working directory and the user home directory. I fixed this on a Linux system by manually creating the directories and files (because I could not reach huggingface.co from my Linux machine 😢). The layout for me is as follows:

~/.cache/huggingface/hub/
├── [     48]  models--openai--clip-vit-large-patch14
│   ├── [      6]  blobs
│   ├── [     18]  refs
│   │   └── [     40]  main
│   └── [     54]  snapshots
│       └── [    121]  8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
│           ├── [   4519]  config.json
│           ├── [ 524619]  merges.txt
│           ├── [    389]  special_tokens_map.json
│           ├── [    905]  tokenizer_config.json
│           └── [ 961143]  vocab.json
└── [      1]  version.txt

5 directories, 7 files
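If you populate the cache by hand like this (or copy it over from another machine), you can check that transformers resolves it without any network access; passing local_files_only=True (or setting the TRANSFORMERS_OFFLINE=1 environment variable) makes it fail fast instead of retrying the Hub. A quick sketch:

```python
# Verify that the manually created cache is picked up with no network access at all.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "openai/clip-vit-large-patch14",
    local_files_only=True,  # never try to reach huggingface.co
)
print(tokenizer("a photograph of an astronaut riding a horse")["input_ids"][:5])
```

Note that the tree above only contains the tokenizer files; if the script also needs the text-encoder weights, the snapshot would additionally need pytorch_model.bin.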

youngjack86 · May 08 '23


Could you please explain how the file structure on Hugging Face corresponds to your file structure? How should I create version.txt, refs, and snapshots? Thanks.


g711ab · Dec 26 '23