stable-diffusion-webui
[Bug]: LDSR is broken after adding SD2 support
Is there an existing issue for this?
- [x] I have searched the existing issues and checked the recent builds/commits
What happened?
LDSR support is broken in the webui.
Steps to reproduce the problem
- Go to the Extras tab
- Send any image and use the LDSR upscaler
- See the error
What should have happened?
LDSR working.
Commit where the problem happens
b5050ad2071644f7b4c99660dc66a8a95136102f
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Microsoft Edge
Command Line Arguments
--xformers
Additional information, context and logs
Loading model from C:\diffusion\stable-diffusion-webui\models\LDSR\model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 113.62 M params.
Keeping EMAs of 308.
Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=768x768 at 0x1BC7E25BDF0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 0, False) {}
Traceback (most recent call last):
File "C:\diffusion\stable-diffusion-webui\modules\ui.py", line 185, in f
res = list(func(*args, **kwargs))
File "C:\diffusion\stable-diffusion-webui\webui.py", line 56, in f
res = func(*args, **kwargs)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 187, in run_extras
image, info = op(image, info)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 148, in run_upscalers_blend
res = upscale(image, *upscale_args)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 116, in upscale
res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
File "C:\diffusion\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
img = self.do_upscale(img, selected_model)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model.py", line 54, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 87, in super_resolution
model = self.load_model_from_config(half_attention)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 25, in load_model_from_config
model = instantiate_from_config(config.model)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in __init__
self.instantiate_first_stage(first_stage_config)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
model = instantiate_from_config(config)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'
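The failing call is ldm.util.get_obj_from_str, which resolves the config's "target" string via importlib (the last two frames of the traceback). A minimal standalone sketch of that lookup, with the target string taken from the error and everything else illustrative:

```python
import importlib

# Target string that LDSR's first-stage config resolves to (per the traceback).
target = "ldm.models.autoencoder.VQModelInterface"

module_name, cls_name = target.rsplit(".", 1)
# In the new stability-ai repo the module still imports fine, but the class
# was removed, so this getattr raises the AttributeError shown above.
obj = getattr(importlib.import_module(module_name), cls_name)
```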
The same issue.
Can confirm here as well. Same error.
Confirm, the same issue.
Same issue here
Does not work:
- neither for the SD upscale script,
- nor for upscaling on the Extras tab.
What fixed it for me locally was copying the contents of repositories/stable-diffusion/ldm/models/autoencoder.py into repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py. Obviously not an ideal solution.
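A literal way to script that workaround, as a hypothetical one-off patch run from the webui root (this assumes a wholesale overwrite of the SD2 file, so keep a backup):

```python
import shutil

# Hypothetical one-off patch; paths are relative to the stable-diffusion-webui root.
src = "repositories/stable-diffusion/ldm/models/autoencoder.py"
dst = "repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py"

shutil.copyfile(dst, dst + ".bak")  # back up the SD2 version first
shutil.copyfile(src, dst)           # overwrite it with the V1 file that still defines VQModelInterface
```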
I did that and it's still not fixed; hope we can get a proper fix soon.
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\Auto\modules\call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "C:\Users\ZeroCool22\Desktop\Auto\modules\call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "C:\Users\ZeroCool22\Desktop\Auto\modules\img2img.py", line 137, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "C:\Users\ZeroCool22\Desktop\Auto\modules\scripts.py", line 317, in run
processed = script.run(p, *script_args)
File "C:\Users\ZeroCool22\Desktop\Auto\scripts\sd_upscale.py", line 39, in run
img = upscaler.scaler.upscale(init_img, 2, upscaler.data_path)
File "C:\Users\ZeroCool22\Desktop\Auto\modules\upscaler.py", line 64, in upscale
img = self.do_upscale(img, selected_model)
File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model.py", line 54, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model_arch.py", line 87, in super_resolution
model = self.load_model_from_config(half_attention)
File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model_arch.py", line 25, in load_model_from_config
model = instantiate_from_config(config.model)
File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in __init__
self.instantiate_first_stage(first_stage_config)
File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
model = instantiate_from_config(config)
File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'
same
same
Same issue
I've created a new PR to repair this functionality. Can someone please give it a test? #5216
heartbroken. just want her back bros
The trouble is that Stability AI removed all references to VQ from their repo, leaving only KL, and LDSR depends on VQ.
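For reference, this is roughly the class the LDSR config is trying to instantiate, as it appeared in the V1 CompVis ldm/models/autoencoder.py (a paraphrased sketch from memory, not the exact upstream source): a thin wrapper around VQModel whose decode can skip the quantization step.

```python
# Paraphrase of the removed V1 class; not the exact upstream source.
# VQModel is defined in the same V1 file (on top of taming-transformers)
# and is likewise absent from the stability-ai repo.
from ldm.models.autoencoder import VQModel  # only resolves against the V1 repo


class VQModelInterface(VQModel):
    def __init__(self, embed_dim, *args, **kwargs):
        super().__init__(*args, embed_dim=embed_dim, **kwargs)
        self.embed_dim = embed_dim

    def encode(self, x):
        # Encode to latents without running the quantizer.
        h = self.encoder(x)
        h = self.quant_conv(h)
        return h

    def decode(self, h, force_not_quantize=False):
        # Quantize before decoding unless the caller opts out; LDSR relies on
        # being able to skip this step.
        if not force_not_quantize:
            quant, emb_loss, info = self.quantize(h)
        else:
            quant = h
        quant = self.post_quant_conv(quant)
        dec = self.decoder(quant)
        return dec
```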
My PR will get it working again at the cost of significant VRAM usage increase.
Sometimes I think that maybe we should give up on LDSR and put effort into getting the SD 2.0 4x upscaler working instead, seeing as it's the spiritual successor to LDSR.
Having said that, it looks like there's something wrong with the current version of the SD 2.0 4x upscaler, and it has excessive VRAM requirements.
The PR was merged a few hours ago, but on my 2080 Ti (11GB) I can't even use 2x LDSR anymore due to "out of memory".
Same here (3080ti) RuntimeError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 12.00 GiB total capacity; 11.05 GiB already allocated; 0 bytes free; 11.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
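The error text itself points at max_split_size_mb. One way to experiment with that (the value below is an arbitrary guess) is to set PYTORCH_CUDA_ALLOC_CONF before torch initialises CUDA, e.g.:

```python
import os

# Must be set before torch allocates any CUDA memory; 128 is an arbitrary guess.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported afterwards so the allocator picks up the setting
```

Note this only mitigates fragmentation; it won't help if the model genuinely needs more VRAM than the card has.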
Same
LDSR is also broken for me. The code runs, but as of a few commits ago I cannot use it due to an OOM error. This used to work fine with --medvram enabled.
sad day :( same issue
The PR is only to make it possible again, so that the next lot of work can be carried out.
The next part is to re-implement all the VQ-specific logic that Stability AI took out... and that'll take a while!
Thanks for your effort. I could "make it work" only by scaling to 512x512 😅. Otherwise, it's:
(GPU 0; 12.00 GiB total capacity; 10.80 GiB already allocated; 0 bytes free; 10.85 GiB reserved in total by PyTorch)
I can't even do a 512x512 upscale; running out of VRAM on a 3090 Ti.
Maximum is 256x256, even with xformers enabled.
There are 3 jobs remaining to get it fully functional (and better than before):
- Make it work with Half precision
- Make it work with optimisation (e.g. Xformers)
- Reinstate DDPM from the V1 repo, without affecting/breaking anything else. This will make VQ work correctly again, and hence avoid quantizing unless needed (which will in turn make memory usage manageable)
We could do with some help from an actual ML engineer, rather than a regular dev with only surface-level understanding like myself, so if you can help, please chip in!
I tried 1 & 2 but couldn't get them to work.
I've created PR #5415 to address point 3 above:
Reinstate DDPM from the V1 repo, without affecting/breaking anything else. This will make VQ work correctly again, and hence avoid quantizing unless needed (which will in turn make memory usage manageable)
On my setup the VRAM usage has now gone back down to 5GB from 17GB. Can someone give it a test please?
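For anyone wondering what "avoid quantizing unless needed" means in practice, the V1 DDPM's first-stage decode special-cases the VQ interface, roughly like this (a paraphrased sketch, not the exact upstream code):

```python
# Paraphrased core of the V1 repo's LatentDiffusion.decode_first_stage; not exact.
def decode_first_stage(self, z, force_not_quantize=False):
    z = 1. / self.scale_factor * z
    if isinstance(self.first_stage_model, VQModelInterface):
        # VQ first stage: skip re-quantization unless explicitly requested,
        # which is what keeps LDSR's memory usage manageable.
        return self.first_stage_model.decode(z, force_not_quantize=force_not_quantize)
    return self.first_stage_model.decode(z)
```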
I also got it to apply the Xformers optimization (through modules.sd_hijack.model_hijack.hijack()), but it made no difference whatsoever to the it/s or the VRAM usage. Not sure why that is.
Anyone have any ideas?
Just to get in line: same here! GeForce 3090TI with 24GB of VRAM, still states out of memory. What's going on?
@wywywywy
I applied your PR #5415 manually and everything seems to work great.
It's a bit slow with LDSR chosen as both Upscaler 1 and Upscaler 2 at 4x scale, but that's expected with my 2070S 8GB.
Each process took ~8:20 min with 100 timesteps (5.01 s/it).
set COMMANDLINE_ARGS=--api --xformers is applied.
Input image size: 768x768 px.
Thanks for testing. Is the total time taken roughly the same as how it worked in the past?
The whole process took about 20 min in total. Unfortunately I can't compare with how it was before, since I'm new to it and it hasn't worked since I started.
I think it's probably about right. Even on my 3090, upscaling a 512x512 by 4x takes a while.
The next PR will have optimisations (like Xformers) enabled, and that might help you a bit.
The above PR #5415 has now been merged, so the memory usage should go back to the previous working level now.
I've also created a new PR #5586 to further improve it - it allows caching, optimization (e.g. Xformers), and Channels Last memory format. Please give it a test if you have time.
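Channels Last refers to PyTorch's NHWC memory format; the basic pattern is just converting the model and its inputs (an illustrative snippet, not the PR's actual code):

```python
import torch
import torch.nn as nn

# Illustrative Channels Last usage; not the actual PR code.
model = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(1, 3, 64, 64)

model = model.to(memory_format=torch.channels_last)  # conv weights stored NHWC
x = x.to(memory_format=torch.channels_last)          # inputs should match the format
with torch.inference_mode():
    y = model(x)
```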
I could not get --medvram and --lowvram to work because of how different the LDSR model is from the SD models.