
[Bug]: DDIM sampler not working

Open bismark211 opened this issue 2 years ago • 6 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

The DDIM sampler is not working: generation fails with an IndexError (see console logs below).

Steps to reproduce the problem

  1. Go to the webui
  2. Press Generate with the DDIM sampler selected
  3. Generation fails

What should have happened?

Generation should complete and produce an image.

Commit where the problem happens

00dab8f10defbbda579a1bc89c8d4e972c58a20d

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--xformers --api --precision full --no-half --vae-path "C:\stable-diffusion-webui\models\VAE\v1-5-pruned-emaonly.vae.pt"
git pull

List of extensions

no

Console logs

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum.py", line 85, in run_deforum
    render_animation(args, anim_args, video_args, parseq_args, loop_args, root.animation_prompts, root)
  File "C:\stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts\deforum_helpers\render.py", line 339, in render_animation
    image = generate(args, anim_args, loop_args, root, frame_idx, sampler_name=scheduled_sampler_name)
  File "C:\stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts\deforum_helpers\generate.py", line 176, in generate
    processed = processing.process_images(p_txt)
  File "C:\stable-diffusion-webui\modules\processing.py", line 485, in process_images
    res = process_images_inner(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 627, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\stable-diffusion-webui\modules\processing.py", line 827, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\stable-diffusion-webui\modules\sd_samplers.py", line 289, in sample
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "C:\stable-diffusion-webui\modules\sd_samplers.py", line 176, in launch_sampling
    return func()
  File "C:\stable-diffusion-webui\modules\sd_samplers.py", line 289, in <lambda>
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 97, in sample
    self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 42, in make_schedule
    ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 65, in make_ddim_sampling_parameters
    alphas = alphacums[ddim_timesteps]
IndexError: index 1000 is out of bounds for dimension 0 with size 1000

Additional information

No response

bismark211 avatar Jan 29 '23 10:01 bismark211

https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/00dab8f10defbbda579a1bc89c8d4e972c58a20d (GIF attached: 29 01 2023 13-54-30)

mezotaken avatar Jan 29 '23 10:01 mezotaken

Can you explain what the command line arguments are telling me?

bismark211 avatar Jan 29 '23 11:01 bismark211

I have Windows 10 and an RTX 4090.

bismark211 avatar Jan 29 '23 11:01 bismark211

I don't have these lines of code (screenshot attached: 29-01-2023 13-29-38).

bismark211 avatar Jan 29 '23 11:01 bismark211

It doesn't start for me at exactly 110 steps; at 80 steps, for example, it works.

bismark211 avatar Jan 29 '23 14:01 bismark211

@bismark211 yep, that's what I expected to hear. It's just a bug that's been addressed many times before, https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/3917, and the PR that fixed it even mentions that specific step count. Wonder when it broke again?

mezotaken avatar Jan 29 '23 15:01 mezotaken
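
For context, the arithmetic behind the IndexError can be reproduced on its own. The sketch below paraphrases the 'uniform' discretisation used by ldm's make_ddim_timesteps (it is not the exact source): the stride through the 1000 DDPM timesteps plus the trailing +1 shift pushes the last index to 1000 at certain step counts such as 110, which then falls outside alphas_cumprod (size 1000).

# Minimal sketch (paraphrased, not the exact ldm source) of the DDIM
# 'uniform' timestep discretisation and why it breaks at some step counts.
import numpy as np

NUM_DDPM_TIMESTEPS = 1000  # size of alphas_cumprod in the trained model

def uniform_ddim_timesteps(num_ddim_steps: int) -> np.ndarray:
    stride = NUM_DDPM_TIMESTEPS // num_ddim_steps
    timesteps = np.arange(0, NUM_DDPM_TIMESTEPS, stride)
    return timesteps + 1  # the "+1 for the final alpha" shift

for steps in (80, 110):
    last = uniform_ddim_timesteps(steps)[-1]
    status = "OK" if last < NUM_DDPM_TIMESTEPS else "IndexError: out of bounds"
    print(f"{steps} steps -> last timestep index {last} ({status})")

# 80 steps  -> last timestep index 997 (OK)
# 110 steps -> last timestep index 1000 (IndexError: out of bounds)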

Now generation has stopped on DPM++ 2M as well:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 126, in feed_token
    action, arg = states[state][token.type]
KeyError: '$END'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum.py", line 85, in run_deforum
    render_animation(args, anim_args, video_args, parseq_args, loop_args, root.animation_prompts, root)
  File "C:\stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts\deforum_helpers\render.py", line 339, in render_animation
    image = generate(args, anim_args, loop_args, root, frame_idx, sampler_name=scheduled_sampler_name)
  File "C:\stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts\deforum_helpers\generate.py", line 197, in generate
    processed = processing.process_images(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 485, in process_images
    res = process_images_inner(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 617, in process_images_inner
    c = get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, p.steps, cached_c)
  File "C:\stable-diffusion-webui\modules\processing.py", line 571, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "C:\stable-diffusion-webui\modules\prompt_parser.py", line 205, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps)
  File "C:\stable-diffusion-webui\extensions\prompt-fusion-extension\lib_prompt_fusion\hijacker.py", line 15, in wrapper
    return function(*args, **kwargs, original_function=self.__original_functions[attribute])
  File "C:\stable-diffusion-webui\extensions\prompt-fusion-extension\scripts\promptlang.py", line 25, in _hijacked_get_learned_conditioning
    tensor_builders = _parse_tensor_builders(prompts, total_steps)
  File "C:\stable-diffusion-webui\extensions\prompt-fusion-extension\scripts\promptlang.py", line 41, in _parse_tensor_builders
    expr = parse_prompt(prompt)
  File "C:\stable-diffusion-webui\extensions\prompt-fusion-extension\lib_prompt_fusion\prompt_parser.py", line 130, in parse_prompt
    return parse_expression(prompt.lstrip())
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\lark.py", line 625, in parse
    return self.parser.parse(text, start=start, on_error=on_error)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parser_frontends.py", line 96, in parse
    return self.parser.parse(stream, chosen_start, **kw)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 41, in parse
    return self.parser.parse(lexer, start)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 171, in parse
    return self.parse_from_state(parser_state)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 188, in parse_from_state
    raise e
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 182, in parse_from_state
    return state.feed_token(end_token, True)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 129, in feed_token
    raise UnexpectedToken(token, expected, state=self, interactive_parser=None)
lark.exceptions.UnexpectedToken: Unexpected token Token('$END', '') at line 1, column 222. Expected one of:
    * LSQB
    * TEXT
    * COLON
    * DOLLAR
    * LPAR
    * RPAR

bismark211 avatar Jan 29 '23 23:01 bismark211
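
Note that this second traceback is a separate problem from the DDIM one: it comes from the prompt-fusion-extension, whose lark grammar fails to parse the prompt. The UnexpectedToken with '$END' means the parser reached the end of the prompt while still expecting more tokens (the expected RPAR suggests something like an unclosed parenthesis around column 222). As a hypothetical illustration only, using a grammar invented for this example rather than the extension's actual one, this is how lark reports that failure mode:

# Hypothetical mini-grammar (not the prompt-fusion grammar) showing how lark's
# LALR parser reports an unclosed construct: hitting end-of-input while more
# tokens are expected raises UnexpectedToken carrying the special '$END' token.
from lark import Lark
from lark.exceptions import UnexpectedToken

parser = Lark(r"""
    start: WORD* "[" WORD* "]"
    WORD: /[a-zA-Z]+/
    %import common.WS
    %ignore WS
""", parser="lalr")

try:
    parser.parse("a cat [wearing a hat")  # missing the closing ']'
except UnexpectedToken as e:
    print(e.token.type)  # prints: $END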

There was a rework done recently (currently only available on the dev branch) to implement DDIM, PLMS, and UniPC in a way that is more consistent across the board. https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/8285a149d8c488ae6c7a566eb85fb5e825145464

Open a new issue if the problem persists.

catboxanon avatar Aug 11 '23 14:08 catboxanon
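
For anyone still on an older commit and hitting the original step-count error, the underlying fix amounts to building a timestep schedule that never indexes past the trained range. The sketch below is illustrative only and is not the actual webui implementation: spacing the steps with linspace over [0, 999] keeps the last index in bounds for any step count.

# Illustrative only (not the actual webui code): a uniform DDIM-style schedule
# built with linspace over [0, num_train_timesteps - 1] never produces an
# index >= 1000, regardless of the requested number of sampling steps.
import numpy as np

def safe_uniform_timesteps(num_steps: int, num_train_timesteps: int = 1000) -> np.ndarray:
    ts = np.linspace(0, num_train_timesteps - 1, num_steps)
    return ts.round().astype(int)

for steps in (80, 110, 150):
    print(steps, "-> last index", safe_uniform_timesteps(steps)[-1])  # always 999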