
[Bug]: LDSR is broken after adding SD2 support

Open dill-shower opened this issue 2 years ago • 11 comments

Is there an existing issue for this?

  • [x] I have searched the existing issues and checked the recent builds/commits

What happened?

LDSR support is broken in the webui.

Steps to reproduce the problem

  1. Go to the Extras tab
  2. Send any image and use the LDSR upscaler
  3. See the error

What should have happened?

LDSR upscaling should work.

Commit where the problem happens

b5050ad2071644f7b4c99660dc66a8a95136102f

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

--xformers

Additional information, context and logs

Loading model from C:\diffusion\stable-diffusion-webui\models\LDSR\model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 113.62 M params.
Keeping EMAs of 308.
Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=768x768 at 0x1BC7E25BDF0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 0, False) {}
Traceback (most recent call last):
  File "C:\diffusion\stable-diffusion-webui\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "C:\diffusion\stable-diffusion-webui\webui.py", line 56, in f
    res = func(*args, **kwargs)
  File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 187, in run_extras
    image, info = op(image, info)
  File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 148, in run_upscalers_blend
    res = upscale(image, *upscale_args)
  File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 116, in upscale
    res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
  File "C:\diffusion\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model.py", line 54, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 87, in super_resolution
    model = self.load_model_from_config(half_attention)
  File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 25, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in __init__
    self.instantiate_first_stage(first_stage_config)
  File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
    model = instantiate_from_config(config)
  File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
    return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'
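
For what it's worth, the failing call at the bottom of the trace is ldm's generic config loader. Paraphrased (a sketch of get_obj_from_str from ldm/util.py, not the verbatim source), it just resolves a dotted class path taken from the model's YAML config:

    import importlib

    def get_obj_from_str(string):
        # e.g. "ldm.models.autoencoder.VQModelInterface" -> the class object
        module, cls = string.rsplit(".", 1)
        return getattr(importlib.import_module(module), cls)

    # LDSR's config names a VQ first stage; the SD2 repo's autoencoder.py no
    # longer defines that class, hence the AttributeError:
    get_obj_from_str("ldm.models.autoencoder.VQModelInterface")

So the error simply means the SD2 repo's ldm.models.autoencoder no longer exports the VQ class that LDSR's config asks for.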

dill-shower avatar Nov 26 '22 19:11 dill-shower

The same issue.

Renaldas111 avatar Nov 26 '22 20:11 Renaldas111

Can confirm here as well. Same error.

leppie avatar Nov 26 '22 21:11 leppie

Confirm, the same issue.

ebziw avatar Nov 27 '22 16:11 ebziw

Same issue here

Blavkm avatar Nov 28 '22 15:11 Blavkm

Does not work:

  • neither for the SD upscale script,
  • nor for upscaling on the Extras tab.

mpolsky avatar Nov 28 '22 18:11 mpolsky

What fixed it for me locally was copying the contents of repositories/stable-diffusion/ldm/models/autoencoder.py into repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py. Obviously not an ideal solution.
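
In script form, that one-off copy is just the following (a sketch; paths assume the default webui layout, and the next git pull of the SD2 repo will overwrite it again):

    import shutil

    # The V1 repo still ships the VQ classes; the SD2 repo's copy had them removed.
    src = "repositories/stable-diffusion/ldm/models/autoencoder.py"
    dst = "repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py"
    shutil.copyfile(src, dst)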

DanielWeiner avatar Nov 28 '22 20:11 DanielWeiner

What fixed it for me locally was copying the contents of repositories/stable-diffusion/ldm/models/autoencoder.py into repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py. Obviously not an ideal solution.

I did that and it's still not fixed; hope we can get a fix soon.

Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\img2img.py", line 137, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\scripts.py", line 317, in run
    processed = script.run(p, *script_args)
  File "C:\Users\ZeroCool22\Desktop\Auto\scripts\sd_upscale.py", line 39, in run
    img = upscaler.scaler.upscale(init_img, 2, upscaler.data_path)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model.py", line 54, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model_arch.py", line 87, in super_resolution
    model = self.load_model_from_config(half_attention)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model_arch.py", line 25, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in __init__
    self.instantiate_first_stage(first_stage_config)
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
    model = instantiate_from_config(config)
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
    return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'

ZeroCool22 avatar Nov 28 '22 23:11 ZeroCool22

same

KGUY1 avatar Nov 29 '22 07:11 KGUY1

same

aiforpresident avatar Nov 29 '22 08:11 aiforpresident

Same issue

paolodalprato avatar Nov 29 '22 12:11 paolodalprato

I've created a new PR to repair this functionality. Can someone please give it a test? #5216

wywywywy avatar Nov 29 '22 17:11 wywywywy

heartbroken. just want her back bros

RoyHammerlin avatar Dec 01 '22 19:12 RoyHammerlin

The trouble is that Stability AI removed all references to VQ from their repo, leaving only the KL autoencoder, and LDSR depends on VQ.

My PR will get it working again at the cost of a significant increase in VRAM usage.

Sometimes I think that maybe we should give up on LDSR and put effort into getting the SD 2.0 4x upscaler working instead, seeing as it's the spiritual successor to LDSR.

Having said that, it looks like there's something wrong with the current version of the SD 2.0 4x upscaler, and it has excessive VRAM requirements.
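
For reference, the class LDSR needs looked roughly like this in the V1 repo (a paraphrased sketch, not the verbatim source); note how decode only goes through the codebook when quantization is actually requested:

    class VQModelInterface(VQModel):  # VQModel was also removed from the SD2 repo
        def __init__(self, embed_dim, *args, **kwargs):
            super().__init__(embed_dim=embed_dim, *args, **kwargs)
            self.embed_dim = embed_dim

        def encode(self, x):
            # encode without quantizing; the caller decides when to quantize
            h = self.encoder(x)
            h = self.quant_conv(h)
            return h

        def decode(self, h, force_not_quantize=False):
            # optionally skip the quantization layer on the way out
            if not force_not_quantize:
                quant, emb_loss, info = self.quantize(h)
            else:
                quant = h
            quant = self.post_quant_conv(quant)
            dec = self.decoder(quant)
            return dec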

wywywywy avatar Dec 01 '22 22:12 wywywywy

The PR was merged a few hours ago, but on my 2080 Ti (11GB) I can't even use 2x LDSR anymore due to an out-of-memory error.

Ladypoly avatar Dec 03 '22 15:12 Ladypoly

Same here (3080 Ti):

RuntimeError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 12.00 GiB total capacity; 11.05 GiB already allocated; 0 bytes free; 11.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
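
For anyone who wants to try the allocator hint from that message, one option (untested here; max_split_size_mb is a standard knob of PyTorch's caching allocator) is to set the variable before torch loads:

    import os

    # Must be set before torch initialises its CUDA caching allocator; the same
    # thing can be done with `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`
    # in webui-user.bat. 128 is an arbitrary starting point, not a recommendation.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch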

paolodalprato avatar Dec 03 '22 16:12 paolodalprato

Same here (3080 Ti):

RuntimeError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 12.00 GiB total capacity; 11.05 GiB already allocated; 0 bytes free; 11.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Same

dill-shower avatar Dec 03 '22 16:12 dill-shower

LDSR is also broken for me. The code runs, but as of a few commits ago I can't use it due to an OOM error. This used to work fine with --medvram enabled.

Viexi avatar Dec 03 '22 16:12 Viexi

sad day :( same issue

sampanes avatar Dec 03 '22 19:12 sampanes

The PR is only there to make it possible again, so that the next lot of work can be carried out.

The next part is to re-implement all the VQ-specific logic that Stability AI took out... and that'll take a while!

wywywywy avatar Dec 03 '22 19:12 wywywywy

The PR is only there to make it possible again, so that the next lot of work can be carried out.

The next part is to re-implement all the VQ-specific logic that Stability AI took out... and that'll take a while!

Thanks for your effort. I could "make it work" only by scaling to 512x512 😅. Otherwise, it's:

(GPU 0; 12.00 GiB total capacity; 10.80 GiB already allocated; 0 bytes free; 10.85 GiB reserved in total by PyTorch)

websubst avatar Dec 03 '22 19:12 websubst

I can't even do a 512x512 upscale; I'm running out of VRAM on a 3090 Ti.

The maximum is 256x256, even with xformers enabled.

kalkal11 avatar Dec 03 '22 22:12 kalkal11

There are 3 jobs remaining to get it fully functional (and better than before):

  1. Make it work with half precision
  2. Make it work with optimisations (e.g. xformers)
  3. Reinstate DDPM from the V1 repo without affecting/breaking anything else. This will make VQ work correctly again and hence avoid quantizing unless needed, which will in turn make memory usage manageable; see the sketch below.
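
Roughly, the V1 logic to reinstate looks like this (a simplified paraphrase of LatentDiffusion.decode_first_stage from the V1 repo, not the verbatim source):

    @torch.no_grad()
    def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
        z = 1.0 / self.scale_factor * z
        if isinstance(self.first_stage_model, VQModelInterface):
            # VQ path: only go through the codebook when quantization is wanted
            return self.first_stage_model.decode(
                z, force_not_quantize=predict_cids or force_not_quantize
            )
        # KL path (what the SD2 repo kept): plain decode
        return self.first_stage_model.decode(z)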

We could do with some help from an actual ML engineer rather than a regular dev with only surface-level understanding like myself, so if you can help, please chip in! I tried 1 & 2 but couldn't get them to work.

wywywywy avatar Dec 03 '22 23:12 wywywywy

I've created PR #5415 to address point 3 above:

Reinstate DDPM from the V1 repo without affecting/breaking anything else. This will make VQ work correctly again and hence avoid quantizing unless needed, which will in turn make memory usage manageable.

On my setup the VRAM usage has now gone back down to 5GB from 17GB. Can someone give it a test, please?

wywywywy avatar Dec 04 '22 14:12 wywywywy

I also got it to apply the xformers optimization (through modules.sd_hijack.model_hijack.hijack()), but it made no difference whatsoever to either the it/s or the VRAM usage. Not sure why that is.
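
For context, the attempt amounted to something like this (a sketch; load_ldsr_model is a hypothetical stand-in for however the LDSR model gets loaded):

    from modules import sd_hijack

    model = load_ldsr_model()             # hypothetical loader, for illustration
    sd_hijack.model_hijack.hijack(model)  # applies the webui's optimisations
                                          # (xformers etc.) to the attention layers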

Anyone any ideas?

wywywywy avatar Dec 04 '22 14:12 wywywywy

Just to get in line: same here! GeForce 3090 Ti with 24GB of VRAM, and it still reports out of memory. What's going on?

HWiese1980 avatar Dec 05 '22 19:12 HWiese1980

@wywywywy I applied your PR #5415 manually and everything seems to work great. It's a bit slow when LDSR is chosen for both Upscaler 1 and Upscaler 2 at scale 4, but that's to be expected with my 2070S (8GB): each pass took ~8:20 min at 100 timesteps (5.01 s/it). Command line: set COMMANDLINE_ARGS=--api --xformers. Input image size: 768x768px.

dnl13 avatar Dec 06 '22 20:12 dnl13

Thanks for testing. Is the total time roughly the same as it was in the past?

wywywywy avatar Dec 06 '22 21:12 wywywywy

The whole process took about 20 min in total. Unfortunately I can't say anything about the past, since I only started using it recently and it hadn't worked until now.

dnl13 avatar Dec 06 '22 21:12 dnl13

I think that's probably about right. Even on my 3090, upscaling a 512x512 image by 4x takes a while.

The next PR will have optimisations (like xformers) enabled, which might help you a bit.

wywywywy avatar Dec 06 '22 21:12 wywywywy

The above PR #5415 has now been merged, so memory usage should go back to the previous working level.

I've also created a new PR #5586 to improve it further: it adds caching, optimisation (e.g. xformers), and the Channels Last memory format (see the sketch below). Please give it a test if you have time.
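
Channels Last itself is stock PyTorch; in isolation it is just (a generic sketch, not the PR's actual code):

    import torch
    import torch.nn as nn

    model = nn.Conv2d(3, 8, 3).to(memory_format=torch.channels_last)  # NHWC weights
    x = torch.randn(1, 3, 64, 64).to(memory_format=torch.channels_last)
    out = model(x)  # cuDNN can pick faster NHWC kernels for layouts like this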

I could not get --medvram and --lowvram to work because of how different the LDSR model is from the SD models.

wywywywy avatar Dec 10 '22 14:12 wywywywy