Moore-AnimateAnyone
Unable to load weights from checkpoint file
Every time I run this I get the following error:
Traceback (most recent call last):
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\modeling_utils.py", line 109, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1028, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1246, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\modeling_utils.py", line 122, in load_state_dict
raise ValueError(
ValueError: Unable to locate the file ./pretrained_weights/stable-diffusion-v1-5/unet\diffusion_pytorch_model.bin which is necessary to load this pretrained model. Make sure you have saved the model properly.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 534, in predict
output = await route_utils.call_process_api(
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1554, in process_api
result = await self.call_function(
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1192, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\utils.py", line 659, in wrapper
response = f(*args, **kwargs)
File "C:\AI\AnimateAnyone\Moore-AnimateAnyone\app.py", line 52, in animate
reference_unet = UNet2DConditionModel.from_pretrained(
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\modeling_utils.py", line 800, in from_pretrained
state_dict = load_state_dict(model_file, variant=variant)
File "C:\Users\henso\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\modeling_utils.py", line 127, in load_state_dict
raise OSError(
OSError: Unable to load weights from checkpoint file for './pretrained_weights/stable-diffusion-v1-5/unet\diffusion_pytorch_model.bin' at './pretrained_weights/stable-diffusion-v1-5/unet\diffusion_pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Any solutions?
It seems you are missing the binary files of stable-diffusion-v1-5. Does the file stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin exist in your directory?
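A quick way to confirm is to check that the file exists and is not empty; an `EOFError: Ran out of input` from `torch.load` usually points to an empty or truncated checkpoint. A minimal check, assuming the `./pretrained_weights` layout from the traceback:

```python
from pathlib import Path

# Path taken from the traceback; adjust if your layout differs.
ckpt = Path("./pretrained_weights/stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin")

if not ckpt.exists():
    print(f"Missing: {ckpt}")
elif ckpt.stat().st_size == 0:
    print(f"Empty file (0 bytes), needs to be re-downloaded: {ckpt}")
else:
    print(f"OK: {ckpt} ({ckpt.stat().st_size / 1e6:.1f} MB)")
```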
Can it be used with safetensors?
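If the safetensors weights are what you have, recent diffusers versions can load them directly via `from_pretrained`. Whether the Moore-AnimateAnyone code path accepts this depends on the diffusers version pinned by the repo, so treat this as a sketch rather than a confirmed fix:

```python
from diffusers import UNet2DConditionModel

# Sketch only: requires a diffusers version that supports use_safetensors,
# and diffusion_pytorch_model.safetensors present in the unet folder.
reference_unet = UNet2DConditionModel.from_pretrained(
    "./pretrained_weights/stable-diffusion-v1-5",
    subfolder="unet",
    use_safetensors=True,
)
```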
Thanks! I did put it there and it was named correctly, but the file was 0 KB, so I think I must have had an error while downloading.
Hello @FantasticMrCat42, we have updated a script to download the weights automatically. You can try it~
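If you would rather fetch the weights manually instead of using the repo's script, something like huggingface_hub's snapshot_download can pull the needed files into the expected folder. The repo id, file patterns, and target path below are assumptions based on the traceback's layout, not the repo's exact download script:

```python
from huggingface_hub import snapshot_download

# Illustrative: download the Stable Diffusion 1.5 UNet/VAE weights into the
# local folder the app expects. Adjust repo_id and patterns as needed.
snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    allow_patterns=["unet/*", "vae/*", "*.json"],
    local_dir="./pretrained_weights/stable-diffusion-v1-5",
)
```

Afterwards, re-run the size check above to make sure diffusion_pytorch_model.bin (or the .safetensors variant) is no longer 0 KB.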