Fix state dict loading via symlink on Windows
What does this PR do?
I ran into an issue trying to run this on Windows 10 (via Git Bash, in a Python 3.9.12 Conda environment, deps installed via pip). My requirements.txt is included below for completeness.
I tried running an example of SD 2 from the docs:
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch
repo_id = "stabilityai/stable-diffusion-2-base"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("astronaut.png")
And kept getting output like this:
schmavery ~/git/sd-test $ python test.py
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Downloading pytorch_model.bin: 100%|████████████████████| 681M/681M [00:16<00:00, 41.8MB/s]
Fetching 12 files: 100%|████████████████████| 12/12 [00:16<00:00, 1.38s/it]
Traceback (most recent call last):
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 417, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 251, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\test.py", line 5, in <module>
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 2431, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 420, in load_state_dict
with open(checkpoint_file) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
I did some poking around and realized that C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin is a symlink to another file in C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs.
Some searching online revealed some issues with Python loading files via symlink on Windows, mostly due to Windows' funny handling of symlinks. I tried adding a call to os.path.realpath to resolve the path before opening the file, and that solved the problem!
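For reference, this is roughly what my local check/workaround looked like (an illustrative sketch using the path from the traceback above, not the actual PR diff):
import os
import torch
# Path taken from the traceback above (adjust for your own cache location).
checkpoint_file = (
    r"C:\Users\schmavery\.cache\huggingface\hub"
    r"\models--stabilityai--stable-diffusion-2-base"
    r"\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf"
    r"\text_encoder\pytorch_model.bin"
)
# The snapshot entry is a symlink into the blobs/ folder.
print("is symlink:", os.path.islink(checkpoint_file))
print("resolves to:", os.path.realpath(checkpoint_file))
# Resolving the symlink before handing the path to torch.load is what fixed it for me.
state_dict = torch.load(os.path.realpath(checkpoint_file), map_location="cpu")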
I thought I'd post this here in case it helps anyone.
requirements.txt:
accelerate==0.17.1
brotlipy==0.7.0
certifi @ file:///C:/b/abs_85o_6fm0se/croot/certifi_1671487778835/work/certifi
cffi @ file:///C:/b/abs_49n3v2hyhr/croot/cffi_1670423218144/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work
conda==23.1.0
conda-package-handling @ file:///C:/b/abs_fcga8w0uem/croot/conda-package-handling_1672865024290/work
conda_package_streaming @ file:///C:/b/abs_0e5n5hdal3/croot/conda-package-streaming_1670508162902/work
cryptography @ file:///C:/b/abs_8ecplyc3n2/croot/cryptography_1677533105000/work
diffusers==0.14.0
filelock==3.10.0
huggingface-hub==0.13.2
idna @ file:///C:/b/abs_bdhbebrioa/croot/idna_1666125572046/work
importlib-metadata==6.0.0
Jinja2==3.1.2
MarkupSafe==2.1.2
menuinst @ file:///C:/ci/menuinst_1631733438520/work
mpmath==1.3.0
mypy-extensions==1.0.0
networkx==3.0
numpy==1.24.2
packaging==23.0
Pillow==9.4.0
pluggy @ file:///C:/ci/pluggy_1648024580010/work
psutil==5.9.4
pycosat @ file:///C:/b/abs_4b1rrw8pn9/croot/pycosat_1666807711599/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyOpenSSL @ file:///C:/b/abs_552w85x1jz/croot/pyopenssl_1677607703691/work
pyre-extensions==0.0.23
PySocks @ file:///C:/ci/pysocks_1605307512533/work
pywin32==305.1
PyYAML==6.0
regex==2022.10.31
requests @ file:///C:/ci/requests_1657735342357/work
ruamel.yaml @ file:///C:/b/abs_30ee5qbthd/croot/ruamel.yaml_1666304562000/work
ruamel.yaml.clib @ file:///C:/b/abs_aarblxbilo/croot/ruamel.yaml.clib_1666302270884/work
sympy==1.11.1
tokenizers==0.13.2
toolz @ file:///C:/b/abs_cfvk6rc40d/croot/toolz_1667464080130/work
torch==1.13.1+cu117
torchaudio==0.13.1+cu117
torchvision==0.14.1+cu117
tqdm @ file:///C:/b/abs_0axbz66qik/croots/recipe/tqdm_1664392691071/work
transformers==4.27.1
typing-inspect==0.8.0
typing_extensions==4.5.0
urllib3 @ file:///C:/b/abs_9bcwxczrvm/croot/urllib3_1673575521331/work
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306162074/work
wincertstore==0.2
xformers==0.0.16
zipp==3.15.0
zstandard==0.19.0
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the contributor guideline, Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
^^ This is such a small change that it shouldn't affect any docs/tests I think
Who can review?
Looks like @sgugger and @stas00 were the last to touch this area in the file, though not particularly recently. I wonder if some change was made in how the models are cached that could have caused this... 🤷
My original local fix just changed the torch.load call to torch.load(os.path.realpath(checkpoint_file), map_location="cpu"), but this version seems like it might catch a couple more cases. I considered just overriding the checkpoint_file variable to point to the realpath, but I thought that might have made the error messages less clear.
cc @Wauplin: this looks like something that should be handled in huggingface_hub (if it's not already).
The documentation is not available anymore as the PR was closed or merged.
Ouch, this is a real problem I think. In huggingface_hub we return a path inside the snapshots/ folder that is indeed a symlink to a file in the blobs/ folder. In the case of an hf_hub_download, I would be fine with doing an os.path.realpath before returning the path, but that would still be an issue when doing snapshot_download.
The point of having a snapshots/ folder as we did is to provide the same file structure as in the repo for third-party libraries. But if Windows has a "funny way to handle symlinks" by not following them, I'm afraid huggingface_hub can't do anything about it except really changing the cache structure.
What I'm wondering here is why it has not been discovered before. @Schmavery would it be possible that you first ran a script in developer mode/as admin that cached files using symlinks, and you are now re-running the script in "normal" mode, which results in not being able to follow symlinks? (For the record, we already had some issues with symlinks on Windows and decided to duplicate files for non-dev, non-admin users.)
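(For context, here is a rough sketch of the kind of check involved: try to create a symlink and fall back to duplicating files if the OS refuses. This is illustrative only, not the exact huggingface_hub code.)
import os
import tempfile
def symlinks_supported(cache_dir=None):
    """Best-effort check: can we create a symlink in this directory?
    On Windows this typically requires admin rights or developer mode."""
    base = cache_dir or tempfile.gettempdir()
    with tempfile.TemporaryDirectory(dir=base) as tmp:
        src = os.path.join(tmp, "src.txt")
        dst = os.path.join(tmp, "dst.txt")
        open(src, "w").close()
        try:
            os.symlink(src, dst)
            return True
        except OSError:
            # e.g. WinError 1314: "A required privilege is not held by the client"
            return False
print("symlinks supported:", symlinks_supported())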
cc @LysandreJik @julien-c about the cache-system design
@Wauplin thanks for the quick reply!
I'm also curious why I'm the first to run into this, though at this point I'm used to things not working in Windows because of all the different ways things can be set up!
I don't think I ran anything as admin. I'm happy to run whatever command you need to get more info about the setup, but from some basic ls output it looks like the permissions/ownership are what I would have expected.
schmavery ~/git/sd-test $ ls -l ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/
total 4
drwxr-xr-x 1 schmavery 0 Mar 17 08:26 blobs
drwxr-xr-x 1 schmavery 0 Mar 16 22:33 refs
drwxr-xr-x 1 schmavery 0 Mar 16 22:33 snapshots
schmavery ~/git/sd-test $ ls -l ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/snapshots/
total 4
drwxr-xr-x 1 schmavery 0 Mar 16 22:33 1cb61502fc8b634cdb04e7cd69e06051a728bedf
schmavery ~/git/sd-test $ ls -lh ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs/
total 2.5G
-rw-r--r-- 1 schmavery 160M Mar 16 22:33 11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030
-rw-r--r-- 1 schmavery 607 Mar 16 22:33 14bcdff46ade71e94221b696cefbad2382223370
-rw-r--r-- 1 schmavery 1.7G Mar 16 22:35 34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650
-rw-r--r-- 1 schmavery 1.1M Mar 16 22:33 469be27c5c010538f845f518c4f5e8574c78f7c8
-rw-r--r-- 1 schmavery 340 Mar 16 22:33 4a37db2129e08cb00670e652398a8f3960d97d0e
-rw-r--r-- 1 schmavery 513K Mar 16 22:33 76e821f1b6f0a9709293c3b6b51ed90980b3166b
-rw-r--r-- 1 schmavery 905 Mar 16 22:33 9e3e87514708d0a2b44abfa0096ec14802862f5d
-rw-r--r-- 1 schmavery 511 Mar 16 22:33 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d
-rw-r--r-- 1 schmavery 629 Mar 16 22:33 a08e9e082e6ab9044bdd2926092ce2e4f33d2272
-rw-r--r-- 1 schmavery 460 Mar 16 22:33 ae0c5be6f35217e51c4c000fd325d8de0294e99c
-rw-r--r-- 1 schmavery 820 Mar 16 22:33 e966b0b8955e8c66a0717acb2ce5041274d7c60a
-rw-r--r-- 1 schmavery 650M Mar 17 08:26 f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
Hi @Schmavery, thanks for reporting this.
I'm sorry that this bug was introduced recently. It seems that Windows has issues following absolute symlinks in some cases. It has been reported in https://github.com/huggingface/huggingface_hub/issues/1398, https://github.com/huggingface/diffusers/issues/2729 and https://github.com/huggingface/transformers/pull/22228 (and mentioned in https://github.com/huggingface/huggingface_hub/issues/1396). I'll provide a quick fix ASAP.
@Schmavery could you please retry using huggingface_hub==0.13.3? It should fix your problem. Before that, you need to delete the "~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/snapshots/" folder to remove the existing (non-working) symlinks.
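(If you prefer doing it from Python, a hypothetical snippet along these lines would work; adapt the path if your cache lives elsewhere.)
import shutil
from pathlib import Path
# Default HF cache location; adjust if HUGGINGFACE_HUB_CACHE points somewhere else.
snapshots = (
    Path.home()
    / ".cache/huggingface/hub"
    / "models--stabilityai--stable-diffusion-2-base"
    / "snapshots"
)
shutil.rmtree(snapshots)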
If the issue persists, please let me know.
@Wauplin I just tried your new version and something still doesn't seem to be working, though it seems like it's something else now.
The relative symlink is being created, but the blob that it is supposed to be pointing to is missing from the blobs folder.
More specifically, I get this error:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
And then looking around on disk I see this:
schmavery ~/git/sd-test $ ls -lh C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin
lrwxrwxrwx 1 schmavery 79 Mar 20 10:05 'C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin' -> ../../../blobs/f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
schmavery ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs $ ls
11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 469be27c5c010538f845f518c4f5e8574c78f7c8 9e3e87514708d0a2b44abfa0096ec14802862f5d ae0c5be6f35217e51c4c000fd325d8de0294e99c
14bcdff46ade71e94221b696cefbad2382223370 4a37db2129e08cb00670e652398a8f3960d97d0e 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d e966b0b8955e8c66a0717acb2ce5041274d7c60a
34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 76e821f1b6f0a9709293c3b6b51ed90980b3166b a08e9e082e6ab9044bdd2926092ce2e4f33d2272
It seems the blob starting with f2a06cf32c is nowhere to be found. If you think this is an unrelated problem, I'm happy to open another issue (on the huggingface_hub repo, I'd imagine)
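For what it's worth, here is a small throwaway helper I can use to list dangling symlinks in the cache (hypothetical sketch, not part of any library):
import os
cache_dir = os.path.expanduser(
    "~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base"
)
# Walk the snapshots folder and report symlinks whose target blob no longer exists.
for root, _dirs, files in os.walk(os.path.join(cache_dir, "snapshots")):
    for name in files:
        path = os.path.join(root, name)
        if os.path.islink(path) and not os.path.exists(path):
            print("dangling:", path, "->", os.readlink(path))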
Hi @Schmavery, maybe let's continue here for now. Could you delete the ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base folder entirely and try again? I tested your script in a Colab notebook using the latest version and it worked for me: https://colab.research.google.com/drive/1xYy-3Q5hXptZ4TKef8kP7EeeSYiUISpa?usp=sharing
@Wauplin with huggingface-hub==0.13.3 installed, I deleted the whole ~/.cache/huggingface folder and ran the script in the initial post and got this as the full output:
schmavery ~/git/sd-test $ python repro.py
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Downloading (…)p16/model_index.json: 100%|████████████████████| 511/511 [00:00<00:00, 170kB/s]
Downloading (…)okenizer_config.json: 100%|████████████████████| 820/820 [00:00<00:00, 51.4kB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████| 460/460 [00:00<00:00, 51.1kB/s]
Downloading (…)cheduler_config.json: 100%|████████████████████| 340/340 [00:00<00:00, 113kB/s]
Downloading (…)edf/unet/config.json: 100%|████████████████████| 905/905 [00:00<00:00, 82.3kB/s]
Downloading (…)_encoder/config.json: 100%|████████████████████| 629/629 [00:00<00:00, 57.2kB/s]
Downloading (…)tokenizer/merges.txt: 100%|████████████████████| 525k/525k [00:00<00:00, 5.04MB/s]
Downloading (…)tokenizer/vocab.json: 100%|████████████████████| 1.06M/1.06M [00:00<00:00, 6.42MB/s]
Downloading (…)bedf/vae/config.json: 100%|████████████████████| 607/607 [00:00<00:00, 202kB/s]
Downloading (…)on_pytorch_model.bin: 100%|████████████████████| 167M/167M [00:05<00:00, 32.0MB/s]
Downloading pytorch_model.bin: 100%|████████████████████| 681M/681M [00:17<00:00, 39.8MB/s]
Downloading (…)on_pytorch_model.bin: 100%|████████████████████| 1.73G/1.73G [00:43<00:00, 39.8MB/s]
Fetching 12 files: 100%|████████████████████| 12/12 [00:44<00:00, 3.73s/it]
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 251, in __init__
super(_open_file, self).__init__(open(name, mode))
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\repro.py", line 5, in <module>
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 2429, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 418, in load_state_dict
with open(checkpoint_file) as f:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
The snapshot file still points to a blob starting with f2a06cf32cf585d03b which doesn't exist in the blobs folder.
@Schmavery sorry that you are experiencing this. I'm running more tests on Windows on my side. Could you tell me whether you have developer mode enabled on your laptop? And could you run huggingface-cli env and copy-paste the output here, please? Just in case it gives me a hint about what is happening.
@Wauplin No problem, thanks for the help! The crazy thing is that this seemed to all be working last week (when using my realpath patch), but when I ran it this morning after the weekend, I had this issue, even after a clean reinstall of all the packages. I thought maybe there could have been some problematic update to the model itself but if it's running fine for you then I guess that's not it.
Looks like developer mode is turned on

Here's the output:
schmavery ~/git/sd-test $ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.13.3
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: C:\Users\schmavery\.cache\huggingface\token
- Has saved token ?: False
- Configured git credential helpers: manager-core
- FastAI: N/A
- Tensorflow: N/A
- Torch: 1.13.1+cu117
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: C:\Users\schmavery\.cache\huggingface\hub
- HUGGINGFACE_ASSETS_CACHE: C:\Users\schmavery\.cache\huggingface\assets
- HF_TOKEN_PATH: C:\Users\schmavery\.cache\huggingface\token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
Thanks for the information @Schmavery. Unfortunately I'm still not able to reproduce your issue. It's good that you have developer mode activated, btw (otherwise you wouldn't have symlinks at all and files would be duplicated in the cache).
Can we try something else?
- Delete the 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\' folder (or 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base' if you want to keep the other ones).
- Install huggingface_hub==0.12.1. We had some issues with the 0.13 release and I'd like to be sure whether the bug you are facing existed before or not.
- Rerun the script with debug logging enabled, i.e.:
# Add those 2 lines at the beginning of your script:
from huggingface_hub.utils.logging import set_verbosity_debug
set_verbosity_debug()
# Same script as before
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch
repo_id = "stabilityai/stable-diffusion-2-base"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("astronaut.png")
@Wauplin FWIW I just tried it with runwayml/stable-diffusion-v1-5 to see if a different model might work, but got a very similar problem:
schmavery ~/git/sd-test $ python repro.py
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Downloading (…)ain/model_index.json: 100%|████████████████████| 543/543 [00:00<00:00, 136kB/s]
Downloading (…)rocessor_config.json: 100%|████████████████████| 342/342 [00:00<00:00, 85.5kB/s]
Downloading (…)cheduler_config.json: 100%|████████████████████| 308/308 [00:00<00:00, 68.8kB/s]
Downloading (…)_checker/config.json: 100%|████████████████████| 4.72k/4.72k [00:00<00:00, 1.33MB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████| 472/472 [00:00<00:00, 157kB/s]
Downloading (…)_encoder/config.json: 100%|████████████████████| 617/617 [00:00<00:00, 154kB/s]
Downloading (…)tokenizer/merges.txt: 100%|████████████████████| 525k/525k [00:00<00:00, 4.14MB/s]
Downloading (…)okenizer_config.json: 100%|████████████████████| 806/806 [00:00<00:00, 403kB/s]
Downloading (…)819/unet/config.json: 100%|████████████████████| 743/743 [00:00<00:00, 248kB/s]
Downloading (…)d819/vae/config.json: 100%|████████████████████| 547/547 [00:00<00:00, 182kB/s]
Downloading (…)tokenizer/vocab.json: 100%|████████████████████| 1.06M/1.06M [00:00<00:00, 3.91MB/s]
Downloading (…)on_pytorch_model.bin: 100%|████████████████████| 335M/335M [00:23<00:00, 14.4MB/s]
Downloading pytorch_model.bin: 100%|████████████████████| 492M/492M [00:32<00:00, 15.2MB/s]
Downloading pytorch_model.bin: 100%|████████████████████| 1.22G/1.22G [00:58<00:00, 20.6MB/s]
Downloading (…)on_pytorch_model.bin: 100%|████████████████████| 3.44G/3.44G [01:36<00:00, 35.7MB/s]
Fetching 15 files: 100%|████████████████████| 15/15 [01:36<00:00, 6.46s/it]
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 101, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 251, in __init__
super(_open_file, self).__init__(open(name, mode))
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\repro.py", line 4, in <module>
pipe = StableDiffusionPipeline.from_pretrained(
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 563, in from_pretrained
state_dict = load_state_dict(model_file, variant=variant)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 106, in load_state_dict
with open(checkpoint_file) as f:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin'
(venv) (base) 11:54:56 schmavery@DESKTOP-ML11APV:~/git/sd-test $ ls -lh C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin
lrwxrwxrwx 1 schmavery 79 Mar 20 11:53 'C:\Users\schmavery\.cache\huggingface\hub\models--runwayml--stable-diffusion-v1-5\snapshots\39593d5650112b4cc580433f6b0435385882d819\vae\diffusion_pytorch_model.bin' -> ../../../blobs/1b134cded8eb78b184aefb8805b6b572f36fa77b255c483665dda931fa0130c5
schmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/
models--runwayml--stable-diffusion-v1-5/ models--stabilityai--stable-diffusion-2-base/ version.txt version_diffusers_cache.txt
schmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/blobs/
193490b58ef62739077262e833bf091c66c29488058681ac25cf7df3d8190974 4d3e873ab5086ad989f407abd50fdce66db8d657 5dbd88952e7e521aa665e5052e6db7def3641d03 82d05b0e688d7ea94675678646c427907419346e
1a02ee8abc93e840ffbcb2d68b66ccbcb74b3ab3 5294955ff7801083f720b34b55d0f1f51313c5c5 6866dceb3a870b077eb970ecf702ce4e1a83b934 c7da0e21ba7ea50637bee26e81c220844defdf01aafca02b2c42ecdadb813de4
2c2130b544c0c5a72d5d00da071ba130a9800fb2 55d78924fee13e4220f24320127c5f16284e13b9 76e821f1b6f0a9709293c3b6b51ed90980b3166b
469be27c5c010538f845f518c4f5e8574c78f7c8 5ba7bf706515bc60487ad0e1816b4929b82542d6 770a47a9ffdcfda0b05506a7888ed714d06131d60267e6cf52765d61cf59fd67
I wonder if it's possible that the hash used in the symlink could be wrong under some circumstances.
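If it helps, my understanding is that blobs in the hub cache are named after the file's ETag, which for LFS files is its sha256, so a quick sanity check along these lines could compare the symlink target against the actual content (hypothetical sketch):
import hashlib
import os
def sha256_of(path, chunk_size=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
symlink_path = os.path.expanduser(
    "~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base"
    "/snapshots/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin"
)
blob_path = os.path.realpath(symlink_path)
expected = os.path.basename(blob_path)  # for LFS files the blob name should be the sha256
print("expected:", expected)
print("actual  :", sha256_of(blob_path))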
@Schmavery not sure you saw it but could you try my suggestion from https://github.com/huggingface/transformers/pull/22228#issuecomment-1476499630? Thanks in advance
Oops, missed your message, running that now.
I assume you meant huggingface_hub==0.12.1 rather than huggingface==0.12.1 but lmk if that's wrong (the latter gave me an error when trying to pip install)
Ah, yes of course. huggingface_hub==0.12.1 is the one I meant
@Wauplin
schmavery ~/git/sd-test $ rm -rf ~/.cache/huggingface/hub/
schmavery ~/git/sd-test $ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.12.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: C:\Users\schmavery\.cache\huggingface\token
- Has saved token ?: False
- Configured git credential helpers: manager-core
- FastAI: N/A
- Tensorflow: N/A
- Torch: 1.13.1+cu117
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: C:\Users\schmavery\.cache\huggingface\hub
- HUGGINGFACE_ASSETS_CACHE: C:\Users\schmavery\.cache\huggingface\assets
- HF_HUB_OFFLINE: False
- HF_TOKEN_PATH: C:\Users\schmavery\.cache\huggingface\token
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
schmavery ~/git/sd-test $ python repro.py
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/model_index.json to C:\Users\schmavery\.cache\huggingface\hub\tmppiuhc6qi
Downloading (…)p16/model_index.json: 100%|████████████████████| 511/511 [00:00<00:00, 170kB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/model_index.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\model_index.json
Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/scheduler/scheduler_config.json to C:\Users\schmavery\.cache\huggingface\hub\tmp376tmhv9
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/special_tokens_map.json to C:\Users\schmavery\.cache\huggingface\hub\tmp9onvvyfj
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/config.json to C:\Users\schmavery\.cache\huggingface\hub\tmp88p6fmgk
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmpbiq7cjj2
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/vocab.json to C:\Users\schmavery\.cache\huggingface\hub\tmp7skqyuqq
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/config.json to C:\Users\schmavery\.cache\huggingface\hub\tmpvrthjk23
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/merges.txt to C:\Users\schmavery\.cache\huggingface\hub\tmpyi6kdwbo
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/tokenizer_config.json to C:\Users\schmavery\.cache\huggingface\hub\tmpy9q44g25
Downloading (…)cial_tokens_map.json: 100%|████████████████████| 460/460 [00:00<00:00, 115kB/s]
Downloading (…)"pytorch_model.bin";: 0%| | 0.00/681M [00:00<?, ?B/s]
Downloading (…)cial_tokens_map.json: 0%| | 0.00/460 [00:00<?, ?B/s]
Downloading (…)cheduler_config.json: 100%|████████████████████| 340/340 [00:00<00:00, 68.0kB/s]
Downloading (…)_encoder/config.json: 100%|████████████████████| 629/629 [00:00<00:00, 210kB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/scheduler/scheduler_config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\4a37db2129e08cb00670e652398a8f3960d97d0e
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\a08e9e082e6ab9044bdd2926092ce2e4f33d2272
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\ae0c5be6f35217e51c4c000fd325d8de0294e99c from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\special_tokens_map.json
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\4a37db2129e08cb00670e652398a8f3960d97d0e from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\scheduler\scheduler_config.json
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\a08e9e082e6ab9044bdd2926092ce2e4f33d2272 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\config.json
Downloading (…)edf/unet/config.json: 100%|████████████████████| 905/905 [00:00<00:00, 226kB/s]
Downloading (…)edf/unet/config.json: 0%| | 0.00/905 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 0%| | 0.00/820 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|████████████████████| 820/820 [00:00<00:00, 164kB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/tokenizer_config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\e966b0b8955e8c66a0717acb2ce5041274d7c60a
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\9e3e87514708d0a2b44abfa0096ec14802862f5d from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\unet\config.json
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\e966b0b8955e8c66a0717acb2ce5041274d7c60a from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\tokenizer_config.json
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/diffusion_pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmpzk2qle5p
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/diffusion_pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmpj5ly573o
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/config.json to C:\Users\schmavery\.cache\huggingface\hub\tmp43prkgdv
Downloading (…)tokenizer/merges.txt: 100%|████████████████████| 525k/525k [00:00<00:00, 4.64MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/merges.txt in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\76e821f1b6f0a9709293c3b6b51ed90980b3166b
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\76e821f1b6f0a9709293c3b6b51ed90980b3166b from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\merges.txt
Downloading (…)tokenizer/vocab.json: 100%|████████████████████| 1.06M/1.06M [00:00<00:00, 6.97MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/vocab.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\469be27c5c010538f845f518c4f5e8574c78f7c8
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\469be27c5c010538f845f518c4f5e8574c78f7c8 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\vocab.json
Downloading (…)bedf/vae/config.json: 100%|████████████████████| 607/607 [00:00<00:00, 152kB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\14bcdff46ade71e94221b696cefbad2382223370
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\14bcdff46ade71e94221b696cefbad2382223370 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\vae\config.json
Downloading (…)"pytorch_model.bin";: 100%|████████████████████| 681M/681M [00:07<00:00, 92.9MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin
Downloading (…)_pytorch_model.bin";: 100%|████████████████████| 167M/167M [00:10<00:00, 15.8MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/diffusion_pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\vae\diffusion_pytorch_model.bin
Downloading (…)_pytorch_model.bin";: 100%|████████████████████| 1.73G/1.73G [00:51<00:00, 33.4MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/diffusion_pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\unet\diffusion_pytorch_model.bin
Fetching 12 files: 100%|████████████████████| 12/12 [00:52<00:00, 4.38s/it]
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 251, in __init__
super(_open_file, self).__init__(open(name, mode))
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\schmavery\git\sd-test\repro.py", line 24, in <module>
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 2429, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 418, in load_state_dict
with open(checkpoint_file) as f:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin'
schmavery ~/git/sd-test $ ls -lh C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin
lrwxrwxrwx 1 schmavery 79 Mar 20 12:14 'C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin' -> ../../../blobs/f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
schmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs/
11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 469be27c5c010538f845f518c4f5e8574c78f7c8 9e3e87514708d0a2b44abfa0096ec14802862f5d ae0c5be6f35217e51c4c000fd325d8de0294e99c
14bcdff46ade71e94221b696cefbad2382223370 4a37db2129e08cb00670e652398a8f3960d97d0e 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d e966b0b8955e8c66a0717acb2ce5041274d7c60a
34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 76e821f1b6f0a9709293c3b6b51ed90980b3166b a08e9e082e6ab9044bdd2926092ce2e4f33d2272
@Wauplin Ok, doing some more investigation. When watching my filesystem during the install, I see the offending f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 blob appear in the blobs folder once it gets to this log line:
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
But then at some point it gets deleted/disappears. Any idea what might be triggering that?
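To try to catch the moment it disappears, I can watch the blobs folder with a little polling loop in a second terminal (hypothetical sketch, not something from huggingface_hub):
import os
import time
blobs = os.path.expanduser(
    "~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs"
)
seen = set()
while True:
    current = set(os.listdir(blobs)) if os.path.isdir(blobs) else set()
    for name in sorted(current - seen):
        print(time.strftime("%H:%M:%S"), "added  ", name)
    for name in sorted(seen - current):
        print(time.strftime("%H:%M:%S"), "removed", name)
    seen = current
    time.sleep(1)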
At this point I'm just running
from huggingface_hub.utils.logging import set_verbosity_debug
set_verbosity_debug()
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch
repo_id = "stabilityai/stable-diffusion-2-base"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
@Schmavery thanks for trying the commands. The fact that it doesn't work on huggingface_hub v0.12.1 makes me think that it's an issue specific to your setup, not something that was introduced recently. That doesn't mean we shouldn't find the root cause, though.
Maybe let's try to keep the test as minimal as possible:
# tested with huggingface_hub==0.12.1
from huggingface_hub.utils.logging import set_verbosity_debug
from huggingface_hub import hf_hub_download
from huggingface_hub.constants import HUGGINGFACE_HUB_CACHE
from pathlib import Path
import shutil
print("Deleting", HUGGINGFACE_HUB_CACHE)
shutil.rmtree(HUGGINGFACE_HUB_CACHE)
set_verbosity_debug()
path = Path(hf_hub_download(repo_id="stabilityai/stable-diffusion-2-base", filename="text_encoder/pytorch_model.bin", revision="fp16"))
print("hf_hub_download", path)
print("is_file", path.is_file())
print("is_symlink", path.is_symlink())
print("resolved", path.resolve())
print("resolved size", path.resolve().stat().st_size)
should output
Deleting C:\Users\Administrator\.cache\huggingface\hub
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin to C:\Users\Administrator\.cache\huggingface\hub\tmp9rxs8yls
Downloading (β¦)"pytorch_model.bin";: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 681M/681M [00:05<00:00, 115MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin in cache at C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
creating pointer to C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin
hf_hub_download C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin
is_file True
is_symlink True
resolved C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
resolved size 680904225
Can you confirm that, or is the blob file missing already?
@Wauplin ok I can confirm that much works!
schmavery ~/git/sd-test $ python repro2.py
Deleting C:\Users\schmavery\.cache\huggingface\hub
downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmp8kmmcj6w
Downloading (β¦)"pytorch_model.bin";: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 681M/681M [00:05<00:00, 114MB/s]
storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin
hf_hub_download C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin
is_file True
is_symlink True
resolved C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730
resolved size 680904225
Ok, that's already good news. Could you try to load this path from PyTorch? (Adding the following lines to the previous script.)
import torch
# try to load from symlink directly
state_dict = torch.load(path)
# or try to load from resolved symlink
state_dict = torch.load(path.resolve())
and if that doesn't work, at least try to read the binary file:
with open(path, "rb") as f:
print("content length", len(f.read()), "(read from file)")
# or
with open(path.resolve(), "rb") as f:
print("content length", len(f.read()), "(read from file)")
Ok, I think I've finally figured out what's going on. @Wauplin thank you so much for your help in debugging!

It looks like the model is somehow triggering a trojan detection in Windows Defender. Looks like a couple of other people have run into the issue too: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/8584
Probably just a false positive, but I might try to figure out how to use the safetensors version of stabilityai/stable-diffusion-2-base just in case.
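(For anyone else landing here: depending on your diffusers version, loading the safetensors weights might look roughly like the snippet below. I'm assuming a diffusers release new enough to support the use_safetensors/variant arguments, which the 0.14.0 pinned above may not be.)
import torch
from diffusers import DiffusionPipeline
# Assumes a diffusers version that supports `variant` and `use_safetensors`.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)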
Thanks again for all the help -- seems like this PR is probably not needed now that huggingface_hub is using relative symlinks.
@Schmavery Very glad that you finally figured out what's going on! Hope this will help other users switching to safetensors as well :+1: :)