
unsure of what to do now

Open nero187cha opened this issue 1 year ago • 2 comments

I'm getting this error on the Colab notebook. It was working a day ago.

```
Warning: caught exception 'No CUDA GPUs are available', memory monitor disabled
Loading weights [4c86efd062] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/model.ckpt
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 111, in initialize
    modules.sd_models.load_model()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_models.py", line 383, in load_model
    state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_models.py", line 238, in get_checkpoint_state_dict
    res = read_state_dict(checkpoint_info.filename)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_models.py", line 219, in read_state_dict
    pl_sd = torch.load(checkpoint_file, map_location=map_location or shared.weight_load_location)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/safe.py", line 106, in load
    return load_with_extra(filename, extra_handler=global_extra_handler, *args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/safe.py", line 151, in load_with_extra
    return unsafe_torch_load(filename, *args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 1131, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 1083, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 1052, in restore_location
    return default_restore_location(storage, map_location)
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 215, in default_restore_location
    result = fn(storage, location)
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/usr/local/lib/python3.9/dist-packages/torch/serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

Stable diffusion model failed to load, exiting
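For context, the last line of the traceback describes the generic PyTorch workaround: a checkpoint saved on a GPU can still be loaded on a CPU-only machine if `torch.load` is given an explicit `map_location`. A minimal sketch of that mechanism (the `dummy.ckpt` filename is illustrative, not part of the webui):

```python
import torch

# Save a small dummy checkpoint, then reload it with map_location so that
# any CUDA storages would be remapped to the CPU. This is the workaround
# the error message itself recommends for CPU-only machines.
torch.save({"weight": torch.zeros(2, 2)}, "dummy.ckpt")

state_dict = torch.load("dummy.ckpt", map_location=torch.device("cpu"))
print(state_dict["weight"].shape)  # torch.Size([2, 2])
```

Note this only sidesteps the deserialization error; the webui still needs a GPU to generate images, so restoring GPU access to the runtime is the actual fix here.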

nero187cha avatar Mar 10 '23 14:03 nero187cha

Click "Runtime" on the navbar at the top and select "Change Runtime Type". Then, change "Hardware Accelerator" to GPU and save. This should solve your problem.
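One way to confirm the runtime change took effect is to run a quick check in a notebook cell before launching the webui (a sketch, assuming PyTorch is already installed in the Colab environment):

```python
import torch

# Confirms whether Colab actually attached a GPU to this runtime.
if torch.cuda.is_available():
    print("GPU attached:", torch.cuda.get_device_name(0))
else:
    print("No GPU attached - check Runtime > Change runtime type")
```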

alpkabac avatar Mar 10 '23 14:03 alpkabac

It's already set to GPU, but I think I've used up my free allowance.

nero187cha avatar Mar 10 '23 14:03 nero187cha