SD-CN-Animation
txt2vid error on Apple Silicon Mac
Traceback (most recent call last):
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1299, in process_api
result = await self.call_function(
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1036, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 488, in async_iteration
return next(iterator)
File "/Users/hein/stable-diffusion-webui/extensions/SD-CN-Animation/scripts/base_ui.py", line 123, in process
yield from txt2vid.start_process(*args)
File "/Users/hein/stable-diffusion-webui/extensions/sd-cn-animation/scripts/core/txt2vid.py", line 76, in start_process
FloweR_load_model(args_dict['width'], args_dict['height'])
File "/Users/hein/stable-diffusion-webui/extensions/sd-cn-animation/scripts/core/txt2vid.py", line 51, in FloweR_load_model
FloweR_model.load_state_dict(torch.load(model_path))
File "/Users/hein/stable-diffusion-webui/modules/safe.py", line 107, in load
return load_with_extra(filename, extra_handler=global_extra_handler, *args, **kwargs)
File "/Users/hein/stable-diffusion-webui/modules/safe.py", line 152, in load_with_extra
return unsafe_torch_load(filename, *args, **kwargs)
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1172, in _load
result = unpickler.load()
File "/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1213, in load
dispatch[key[0]](self)
File "/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1254, in load_binpersid
self.append(self.persistent_load(pid))
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1116, in load_tensor
wrap_storage=restore_location(storage, location),
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 217, in default_restore_location
result = fn(storage, location)
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "/Users/hein/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Gives me this error after generating the first frame just fine. Is this just not supported on M1/M2 Macs, or should I change some settings?
I get the same thing. Also on M1
Same here. The webui itself runs with the CPU / skip-CUDA flags mentioned, so can the plugin do the same?
Look in "/stable-diffusion-webui/extensions/sd-cn-animation/scripts/core/txt2vid.py", at line 51 in **FloweR_load_model**. Change

`FloweR_model.load_state_dict(torch.load(model_path))`

to

`FloweR_model.load_state_dict(torch.load(model_path, map_location="cpu"))`
That fixed it for me, but still waiting only at frame 4 of the 40 :-D
This does work, although using the CPU to render isn't really as much a fix as it is a workaround.
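Instead of hardcoding `map_location="cpu"`, the load could be made device-aware. A minimal sketch of the selection logic, assuming the caller passes in `torch.cuda.is_available()` and `torch.backends.mps.is_available()` (the function name and flag parameters are illustrative, not from the extension):

```python
def pick_map_location(cuda_available: bool, mps_available: bool) -> str:
    """Choose a torch.load map_location string for the current machine.

    The FloweR checkpoint was saved from a CUDA device, so torch.load
    tries to restore its tensors onto CUDA unless told otherwise.
    """
    if cuda_available:
        return "cuda"  # original behaviour on NVIDIA machines
    if mps_available:
        return "mps"   # Apple Silicon GPU (may still hit unsupported ops)
    return "cpu"       # safe fallback, as in the edit above

# On an M1/M2 Mac without CUDA:
print(pick_map_location(False, True))   # mps
print(pick_map_location(False, False))  # cpu
```

In the extension this would look like `torch.load(model_path, map_location=pick_map_location(torch.cuda.is_available(), torch.backends.mps.is_available()))`, though whether `"mps"` actually works depends on the ops the model uses.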
In vid2vid mode the same issue can happen, but I fixed it by changing the code in flow_utils.py.
At line 55 in flow_utils.py, change `RAFT_model.load_state_dict(torch.load(args.model))` to `RAFT_model.load_state_dict(torch.load(args.model, map_location=device))`.
It works, but the performance is very bad.
I get the same error on M2 too. I made the changes mentioned above, one by one and together, but I still get only one frame, and with the changes applied I get this error instead: "An exception occurred while trying to process the frame: Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same"
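That MPSFloatType / torch.FloatTensor mismatch means the input tensor and the model weights ended up on different devices: with `map_location="cpu"` the weights stay on the CPU while the webui puts the frame tensor on MPS. The usual PyTorch remedy is to move model and input onto the same device (e.g. `model.to(device)` before inference). A toy illustration of the invariant, in pure Python with names of my own choosing:

```python
def devices_match(input_device: str, weight_device: str) -> bool:
    """PyTorch ops like conv2d require input and weights on the same device."""
    return input_device == weight_device


def place_for_inference(input_device: str, weight_device: str):
    """Mimic `model.to(input_device)`: move the weights to the input's device."""
    if not devices_match(input_device, weight_device):
        weight_device = input_device
    return input_device, weight_device


# The failing state from the error message, then the corrected placement:
print(devices_match("mps", "cpu"))        # False -> PyTorch raises here
print(place_for_inference("mps", "cpu"))  # ('mps', 'mps')
```

So a complete fix would likely need the extension to call `.to(...)` on the FloweR and RAFT models with the same device the webui uses for tensors, not just patch `torch.load`.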