
Prompts longer than 255 tokens cause an error with Flux

Open Jonseed opened this issue 1 year ago • 0 comments

Traceback (most recent call last):
  File "d:\repos\stable-diffusion-webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\repos\stable-diffusion-webui\modules\txt2img.py", line 123, in txt2img_function
    processed = processing.process_images(p)
  File "D:\repos\stable-diffusion-webui\modules\processing.py", line 817, in process_images
    res = process_images_inner(p)
  File "D:\repos\stable-diffusion-webui\modules\processing.py", line 930, in process_images_inner
    p.setup_conds()
  File "D:\repos\stable-diffusion-webui\modules\processing.py", line 1526, in setup_conds
    super().setup_conds()
  File "D:\repos\stable-diffusion-webui\modules\processing.py", line 502, in setup_conds
    self.c = self.get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, total_steps, [self.cached_c], self.extra_network_data)
  File "D:\repos\stable-diffusion-webui\modules\processing.py", line 471, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "D:\repos\stable-diffusion-webui\modules\prompt_parser.py", line 262, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
  File "D:\repos\stable-diffusion-webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "d:\repos\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\repos\stable-diffusion-webui\backend\diffusion_engine\flux.py", line 79, in get_learned_conditioning
    cond_t5 = self.text_processing_engine_t5(prompt)
  File "D:\repos\stable-diffusion-webui\backend\text_processing\t5_engine.py", line 129, in __call__
    return torch.stack(zs)
RuntimeError: stack expects each tensor to be equal size, but got [264, 4096] at entry 0 and [275, 4096] at entry 1

Reducing the prompt to fewer than 255 tokens works around the error.
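The traceback ends at `torch.stack(zs)`, which requires every tensor in the list to have the same shape; here the T5 engine has produced conditioning chunks of different sequence lengths (264 vs. 275). A minimal reproduction, plus a hypothetical padding workaround (a sketch only, not necessarily how Forge actually resolves this):

```python
import torch
import torch.nn.functional as F

# Mimic the failure: two conditioning tensors with different sequence lengths,
# like the [264, 4096] and [275, 4096] tensors in the traceback.
zs = [torch.zeros(264, 4096), torch.zeros(275, 4096)]

try:
    torch.stack(zs)
except RuntimeError as e:
    print(e)  # stack expects each tensor to be equal size ...

# Hypothetical workaround: right-pad every tensor along the sequence
# dimension to the longest length before stacking.
max_len = max(z.shape[0] for z in zs)
padded = [F.pad(z, (0, 0, 0, max_len - z.shape[0])) for z in zs]
stacked = torch.stack(padded)
print(stacked.shape)  # torch.Size([2, 275, 4096])
```

Whether padding is the right fix depends on how the downstream Flux code consumes the T5 conditioning; truncating to a fixed token budget, as the workaround above the code suggests, avoids the mismatch entirely.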

Jonseed avatar Oct 05 '24 20:10 Jonseed