Forge no longer works properly after the update
AssertionError: You do not have CLIP state dict!
Flux stopped working only after the update. Other people have run into this problem too, and some possible fixes have been suggested, such as moving the CLIP files into the VAE folder, but it is not clear whether that is an authoritative answer from the author or whether it actually works. I would like to know how to solve this.
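For reference, the "move CLIP into the VAE folder" workaround people mention can be sketched as below. This is a minimal Python sketch using a temporary directory and empty dummy files in place of the real `D:\AI\webui_forge` install; the folder layout mirrors the `additional_modules` paths in the log, but every path here is a stand-in.

```python
# Sketch only: simulate moving the text-encoder files into models/VAE,
# where Forge's "VAE / Text Encoder" picker looks for them.
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())          # stand-in for the webui folder
vae_dir = root / "models" / "VAE"
vae_dir.mkdir(parents=True)

# pretend these encoders were downloaded next to the checkpoint
for name in ("clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"):
    src = root / name
    src.write_bytes(b"")                 # dummy file for the sketch
    shutil.move(str(src), str(vae_dir / name))

print(sorted(p.name for p in vae_dir.iterdir()))
# ['clip_l.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
```

After this, both files should appear in the VAE / Text Encoder dropdown and must be selected alongside the checkpoint.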
Right now Flux is completely unusable for me in Forge. Requesting guidance. This is what I get after trying to move the CLIP files into the VAE folder:

Model selected: {'checkpoint_info': {'filename': 'D:\AI\webui_forge\webui\models\Stable-diffusion\flux\flux1-dev-fp8.safetensors', 'hash': '26acbda5'}, 'additional_modules': ['D:\AI\webui_forge\webui\models\VAE\clip\clip_l.safetensors', 'D:\AI\webui_forge\webui\models\VAE\clip\t5xxl_fp8_e4m3fn.safetensors', 'D:\AI\webui_forge\webui\models\VAE\sd3VAE_v10.safetensors'], 'unet_storage_dtype': 'nf4'}
Model selected: {'checkpoint_info': {'filename': 'D:\AI\webui_forge\webui\models\Stable-diffusion\flux\flux1-dev-fp8.safetensors', 'hash': '26acbda5'}, 'additional_modules': ['D:\AI\webui_forge\webui\models\VAE\clip\clip_l.safetensors', 'D:\AI\webui_forge\webui\models\VAE\clip\t5xxl_fp8_e4m3fn.safetensors', 'D:\AI\webui_forge\webui\models\VAE\sd3VAE_v10.safetensors'], 'unet_storage_dtype': torch.float8_e4m3fn}
Loading Model: {'checkpoint_info': {'filename': 'D:\AI\webui_forge\webui\models\Stable-diffusion\flux\flux1-dev-fp8.safetensors', 'hash': '26acbda5'}, 'additional_modules': ['D:\AI\webui_forge\webui\models\VAE\clip\clip_l.safetensors', 'D:\AI\webui_forge\webui\models\VAE\clip\t5xxl_fp8_e4m3fn.safetensors', 'D:\AI\webui_forge\webui\models\VAE\sd3VAE_v10.safetensors'], 'unet_storage_dtype': torch.float8_e4m3fn}
StateDict Keys: {'transformer': 780, 'vae': 244, 'text_encoder': 196, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': torch.float8_e4m3fn, 'computation_dtype': torch.bfloat16}
Calculating sha256 for D:\AI\webui_forge\webui\models\Stable-diffusion\flux\flux1-dev-fp8.safetensors: 275ef623d3bbccddb75b66fb549a7878da78e3a201374b73cee76981cb84551c
Model loaded in 3.5s (unload existing model: 3.0s, forge model load: 0.5s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model JointTextEncoder
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 15136.16 MB
[Memory Management] Required Model Memory: 5153.49 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 8958.67 MB
Moving model(s) has taken 2.11 seconds
Traceback (most recent call last):
  File "D:\AI\webui_forge\webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\AI\webui_forge\webui\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "D:\AI\webui_forge\webui\modules\processing.py", line 809, in process_images
    res = process_images_inner(p)
  File "D:\AI\webui_forge\webui\modules\processing.py", line 922, in process_images_inner
    p.setup_conds()
  File "D:\AI\webui_forge\webui\modules\processing.py", line 1507, in setup_conds
    super().setup_conds()
  File "D:\AI\webui_forge\webui\modules\processing.py", line 494, in setup_conds
    self.c = self.get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, total_steps, [self.cached_c], self.extra_network_data)
  File "D:\AI\webui_forge\webui\modules\processing.py", line 463, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "D:\AI\webui_forge\webui\modules\prompt_parser.py", line 262, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
  File "D:\AI\webui_forge\webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\webui_forge\webui\backend\diffusion_engine\flux.py", line 79, in get_learned_conditioning
    cond_t5 = self.text_processing_engine_t5(prompt)
  File "D:\AI\webui_forge\webui\backend\text_processing\t5_engine.py", line 123, in __call__
    z = self.process_tokens([tokens], [multipliers])[0]
  File "D:\AI\webui_forge\webui\backend\text_processing\t5_engine.py", line 134, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\AI\webui_forge\webui\backend\text_processing\t5_engine.py", line 60, in encode_with_transformers
    z = self.text_encoder(
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\webui_forge\webui\backend\nn\t5.py", line 205, in forward
    return self.encoder(x, *args, **kwargs)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\webui_forge\webui\backend\nn\t5.py", line 186, in forward
    x, past_bias = l(x, mask, past_bias)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\webui_forge\webui\backend\nn\t5.py", line 162, in forward
    x, past_bias = self.layer[0](x, mask, past_bias)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\webui_forge\webui\backend\nn\t5.py", line 149, in forward
    output, past_bias = self.SelfAttention(self.layer_norm(x), mask=mask, past_bias=past_bias)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\webui_forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\webui_forge\webui\backend\nn\t5.py", line 138, in forward
    out = attention_function(q, k * ((k.shape[-1] / self.num_heads) ** 0.5), v, self.num_heads, mask)
  File "D:\AI\webui_forge\webui\backend\attention.py", line 314, in attention_xformers
    mask_out[:, :, :mask.shape[-1]] = mask
RuntimeError: The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256]. Tensor sizes: [64, 256, 256]
The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0. Target sizes: [1, 256, 256]. Tensor sizes: [64, 256, 256]
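The RuntimeError at the bottom of the traceback is a broadcasting failure: `attention_xformers` allocates `mask_out` with batch dimension 1 but then assigns a mask of shape [64, 256, 256] into it, and PyTorch can only expand size-1 dimensions of the *source* of an assignment, never of the destination. A minimal pure-Python sketch of that rule (the shapes are taken from the traceback; `can_assign` is an illustrative helper, not a Forge function):

```python
def can_assign(dst_shape, src_shape):
    """Return True if a tensor of src_shape can be written into dst_shape.

    Mirrors PyTorch's rule for `dst[...] = src`: aligned from the trailing
    dimension, each source dimension must equal the destination dimension
    or be 1. Destination dimensions are NOT expandable, which is exactly
    what "The expanded size of the tensor (1) must match the existing
    size (64)" complains about.
    """
    if len(src_shape) > len(dst_shape):
        return False
    for d, s in zip(reversed(dst_shape), reversed(src_shape)):
        if s != d and s != 1:
            return False
    return True

# shapes from the traceback: mask_out is [1, 256, 256], mask is [64, 256, 256]
print(can_assign((1, 256, 256), (64, 256, 256)))  # False -> RuntimeError in torch
print(can_assign((64, 256, 256), (1, 256, 256)))  # True  -> would broadcast fine
```

So the fix has to come from Forge allocating `mask_out` with the mask's batch dimension (or the mask being collapsed to batch 1) rather than anything the user can change.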
Update and try again.
I am running into the same problem. On my SD-WebUI Forge install, both the SD3 model and the Flux model report that there is no CLIP state dict. What is going on? The SD3 model works fine in the original A1111 WebUI.
Updated and retried. nf4 runs normally, but fp8 crashes and exits outright:

Model selected: {'checkpoint_info': {'filename': 'D:\AI\webui_forge\webui\models\Stable-diffusion\flux\flux1-dev-fp8.safetensors', 'hash': '26acbda5'}, 'additional_modules': ['D:\AI\webui_forge\webui\models\VAE\clip\t5xxl_fp16.safetensors', 'D:\AI\webui_forge\webui\models\VAE\clip\clip_l.safetensors', 'D:\AI\webui_forge\webui\models\VAE\sd3VAE_v10.safetensors'], 'unet_storage_dtype': torch.float8_e4m3fn}
Loading Model: {'checkpoint_info': {'filename': 'D:\AI\webui_forge\webui\models\Stable-diffusion\flux\flux1-dev-fp8.safetensors', 'hash': '26acbda5'}, 'additional_modules': ['D:\AI\webui_forge\webui\models\VAE\clip\t5xxl_fp16.safetensors', 'D:\AI\webui_forge\webui\models\VAE\clip\clip_l.safetensors', 'D:\AI\webui_forge\webui\models\VAE\sd3VAE_v10.safetensors'], 'unet_storage_dtype': torch.float8_e4m3fn}
StateDict Keys: {'transformer': 780, 'vae': 244, 'text_encoder': 196, 'text_encoder_2': 220, 'ignore': 0}
Using Default T5 Data Type: torch.float16
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': torch.float8_e4m3fn, 'computation_dtype': torch.bfloat16}
Model loaded in 0.7s (forge model load: 0.7s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model JointTextEncoder
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 15213.00 MB
[Memory Management] Required Model Memory: 9569.49 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4619.51 MB
Press any key to continue . . .
I see there is another new update. With fp8, inference now starts without errors (CLIP and VAE selected under VAE / Text Encoder, Euler, 20 steps), but the output image is blurry, as if the sampler were unsupported. Switching to nf4 works completely normally.