stable-diffusion-webui-forge
[Resolved] No "'NoneType' object is not iterable" error message is shown, but no image is generated.
Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
Version: f2.0.1v1.10.1-previous-260-gaadc0f04
Commit hash: aadc0f04c48eb19475752a4206420ea2004e2f42
Launching Web UI with arguments: --port=6006 --xformers --theme=dark --enable-insecure-extension-access --no-download-sd-model
Total VRAM 11004 MB, total RAM 384809 MB
pytorch version: 2.3.1+cu118
xformers version: 0.0.27+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: /root/autodl-tmp/stable-diffusion-webui-forge/models/ControlNetPreprocessor
2024-08-14 18:01:48,901 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': '/root/autodl-tmp/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-fp8.safetensors', 'hash': 'be9881f4'}, 'additional_modules': [], 'unet_storage_dtype': None}
Running on local URL: http://127.0.0.1:6006/
To create a public link, set `share=True` in `launch()`.
Startup time: 15.7s (prepare environment: 2.1s, launcher: 2.7s, import torch: 3.4s, initialize shared: 0.2s, other imports: 1.0s, load scripts: 1.8s, create ui: 2.6s, gradio launch: 1.9s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Model selected: {'checkpoint_info': {'filename': '/root/autodl-tmp/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'additional_modules': [], 'unet_storage_dtype': None}
Environment vars changed: {'stream': True, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Loading Model: {'checkpoint_info': {'filename': '/root/autodl-tmp/stable-diffusion-webui-forge/models/Stable-diffusion/flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'additional_modules': [], 'unet_storage_dtype': None}
StateDict Keys: {'transformer': 2350, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.float32}
Model loaded in 0.8s (unload existing model: 0.3s, forge model load: 0.6s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model JointTextEncoder
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 10827.56 MB
[Memory Management] Required Model Memory: 5154.62 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4648.94 MB
Moving model(s) has taken 1.54 seconds
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 30, in work
self.result = self.func(*self.args, **self.kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/txt2img.py", line 110, in txt2img_function
processed = processing.process_images(p)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 809, in process_images
res = process_images_inner(p)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 922, in process_images_inner
p.setup_conds()
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 1507, in setup_conds
super().setup_conds()
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 494, in setup_conds
self.c = self.get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, total_steps, [self.cached_c], self.extra_network_data)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/processing.py", line 463, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/prompt_parser.py", line 262, in get_multicond_learned_conditioning
learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
File "/root/autodl-tmp/stable-diffusion-webui-forge/modules/prompt_parser.py", line 189, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/diffusion_engine/flux.py", line 79, in get_learned_conditioning
cond_t5 = self.text_processing_engine_t5(prompt)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/text_processing/t5_engine.py", line 123, in __call__
z = self.process_tokens([tokens], [multipliers])[0]
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/text_processing/t5_engine.py", line 134, in process_tokens
z = self.encode_with_transformers(tokens)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/text_processing/t5_engine.py", line 60, in encode_with_transformers
z = self.text_encoder(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 205, in forward
return self.encoder(x, *args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 186, in forward
x, past_bias = l(x, mask, past_bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 162, in forward
x, past_bias = self.layer[0](x, mask, past_bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 149, in forward
output, past_bias = self.SelfAttention(self.layer_norm(x), mask=mask, past_bias=past_bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/nn/t5.py", line 138, in forward
out = attention_function(q, k * ((k.shape[-1] / self.num_heads) ** 0.5), v, self.num_heads, mask)
File "/root/autodl-tmp/stable-diffusion-webui-forge/backend/attention.py", line 314, in attention_xformers
mask_out[:, :, :mask.shape[-1]] = mask
RuntimeError: The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256]. Tensor sizes: [64, 256, 256]
The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256]. Tensor sizes: [64, 256, 256]
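For context, the failure happens in the in-place assignment of the T5 attention mask into the xformers padding buffer at backend/attention.py line 314. The following is a minimal sketch that reproduces the same broadcast error using the shapes reported in the RuntimeError; the buffer allocation here is an assumption taken literally from the "Target sizes" in the message, not the actual Forge code:

```python
import torch

# Minimal repro sketch using the shapes from the RuntimeError above.
# The mask arrives with a per-head batch dimension of 64, while the padded
# buffer mask_out was allocated with batch dimension 1, so the slice
# assignment cannot broadcast the source tensor into the target slice.
mask = torch.zeros(64, 256, 256)     # Tensor sizes: [64, 256, 256]
mask_out = torch.zeros(1, 256, 256)  # Target sizes: [1, 256, 256] (assumed allocation)

try:
    mask_out[:, :, :mask.shape[-1]] = mask  # same statement as backend/attention.py:314
except RuntimeError as e:
    print(e)
# RuntimeError: The expanded size of the tensor (1) must match the existing
# size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256].
# Tensor sizes: [64, 256, 256]
```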
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model JointTextEncoder
After upgrading, this problem has been resolved. Thanks to the Forge team for their tireless efforts.
Please close this issue.