stable-diffusion-webui-forge
ERROR lora
Good afternoon. I saw LoRA support land in the commits, but most LoRAs still don't work.
ERROR lora diffusion_model.single_blocks.0.linear2.weight CUDA out of memory. Tried to allocate 180.00 MiB. GPU 0 has a total capacty of 23.99 GiB of which 0 bytes is free. Of the allocated memory 20.53 GiB is allocated by PyTorch, and 2.40 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "C:\forge\webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\forge\webui\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "C:\forge\webui\modules\processing.py", line 809, in process_images
    res = process_images_inner(p)
  File "C:\forge\webui\modules\processing.py", line 952, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\forge\webui\modules\processing.py", line 1323, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\forge\webui\modules\sd_samplers_kdiffusion.py", line 194, in sample
    sampling_prepare(self.model_wrap.inner_model.forge_objects.unet, x=x)
  File "C:\forge\webui\backend\sampling\sampling_function.py", line 356, in sampling_prepare
    memory_management.load_models_gpu(
  File "C:\forge\webui\backend\memory_management.py", line 571, in load_models_gpu
    loaded_model.model_load(model_gpu_memory_when_using_cpu_swap)
  File "C:\forge\webui\backend\memory_management.py", line 384, in model_load
    raise e
  File "C:\forge\webui\backend\memory_management.py", line 380, in model_load
    self.real_model = self.model.forge_patch_model(patch_model_to)
  File "C:\forge\webui\backend\patcher\base.py", line 291, in forge_patch_model
    weight = weight.to(**to_args)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 252.00 MiB. GPU 0 has a total capacty of 23.99 GiB of which 0 bytes is free. Of the allocated memory 20.54 GiB is allocated by PyTorch, and 2.36 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
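The OOM message itself suggests trying the `max_split_size_mb` allocator hint via `PYTORCH_CUDA_ALLOC_CONF` to reduce fragmentation. As a sketch only (the value 512 is an assumption, not a tested tuning, and this does not fix the underlying Forge LoRA memory behavior), the variable must be set before PyTorch initializes CUDA, e.g. in the launcher environment or at the top of the entry script:

```python
import os

# Hedged workaround sketch: cap the CUDA caching allocator's split size,
# as recommended by the error text, to reduce fragmentation. This must be
# set in the environment BEFORE torch initializes CUDA; 512 is an
# illustrative value, not a recommendation from the Forge maintainers.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# Only import torch after the variable is in place.
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

On Windows the same effect comes from adding `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` to `webui-user.bat` before the launch line.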