AssertionError: You do not have CLIP state dict!
This is with a freshly downloaded and installed version of forge
When running any model, images generate with no problem. When using the Flux NF4 model, images also still generate with no issues. However, when using the original Dev model, Schnell, or Kijai's Flux models that are NOT NF4, I get the following error:
```
Traceback (most recent call last):
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\modules\processing.py", line 789, in process_images
    p.sd_model, just_reloaded = forge_model_reload()
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\modules\sd_models.py", line 501, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\backend\loader.py", line 245, in forge_loader
    component = load_huggingface_component(estimated_config, component_name, lib_name, cls_name, local_path, component_sd)
  File "C:\Users\Sam\Documents\AI-Programs\stable-diffusion-webui-forge\backend\loader.py", line 59, in load_huggingface_component
    assert isinstance(state_dict, dict) and len(state_dict) > 16, 'You do not have CLIP state dict!'
AssertionError: You do not have CLIP state dict!
You do not have CLIP state dict!
```
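For context, the check that fires (paraphrased from `backend/loader.py` line 59 in the traceback above) is very simple: when Forge assembles the model components, it expects the CLIP text-encoder part of the checkpoint to arrive as a dict containing more than 16 weight tensors. A checkpoint that ships only the UNet (or UNet + VAE) fails it. A minimal sketch; the helper name is mine, not Forge's:

```python
# Sketch of the assertion in backend/loader.py: the text-encoder component
# must be a dict with a realistic number of weight tensors in it.
def check_clip_state_dict(state_dict):
    assert isinstance(state_dict, dict) and len(state_dict) > 16, \
        'You do not have CLIP state dict!'
    return True
```

So the error is not about a corrupt file: it means the loader found no (or almost no) text-encoder tensors to hand to this check.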
I have the same error :(
Same. I've just installed torch 2.1, because 2.3.1 and 2.4 were getting me the TypeError "'NoneType' object is not iterable" when trying to use Flux or SDXL; only SD1 worked fine. Now I cannot even launch.
Same here, but it's OK with flux1-dev-bnb-nf4.safetensors. Unfortunately, LoRAs are not working with NF4 tensors yet...
Same here with flux1-dev-nf4.safetensors: AssertionError: You do not have CLIP state dict!
Same here 1060 3GB VRAM. Latest Forge UI (version: f2.0.1v1.10.1-previous-260-gaadc0f04, python: 3.10.6, torch: 2.3.1+cu121, xformers: N/A, gradio: 4.40.0).
Surprisingly, flux1-dev-bnb-nf4, flux1-dev-bnb-nf4-v2 and flux1-schnell-bnb-nf4 all work with no problem.
The rest of the Flux models either give 'AssertionError: You do not have CLIP state dict!' or blue screen and reset the PC.
Yeah, it seems anything outside the Flux NF4 variants will not work. My assumption is there is no UI option (at least for me) in Forge to select the CLIP files for Flux.
I moved my clip_l.safetensors and t5xxl_fp16.safetensors to the VAE folder, then selected the three options in the VAE/Text Encoder box and got it running. It is sloooooow though: right now it is taking about 3-5 minutes from the time I click Generate to the time it finishes. I will probably play around with the settings and see if I can get it to speed up a little.
I had the same problem with the flux.1 model downloaded from Stability Matrix's Model Browser, but I found another version of flux.1 dev that works for me. That version is linked in this thread: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981 I just downloaded the flux1-dev-bnb-nf4-v2.safetensors file, placed it in StabilityMatrix\Data\Models\StableDiffusion, and ran it with Stable Diffusion WebUI Forge.
Awesome! Will try that out and see how it goes. Thanks for the link
I got the same issue when using the SD3 model in Forge; the other models are all fine, including Flux NF4.

```
Loading Model: {'checkpoint_info': {'filename': 'C:\stable-diffusion\A1111UI\models\Stable-diffusion\SD3 Base\sd3_medium.safetensors', 'hash': '9b88d133'}, 'additional_modules': [], 'unet_storage_dtype': None}
StateDict Keys: {'unet': 491, 'vae': 244, 'ignore': 0}
Traceback (most recent call last):
  File "C:\stable-diffusion\webui_forge\webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\stable-diffusion\webui_forge\webui\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "C:\stable-diffusion\webui_forge\webui\modules\processing.py", line 789, in process_images
    p.sd_model, just_reloaded = forge_model_reload()
  File "C:\stable-diffusion\webui_forge\system\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion\webui_forge\webui\modules\sd_models.py", line 501, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
  File "C:\stable-diffusion\webui_forge\system\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion\webui_forge\webui\backend\loader.py", line 245, in forge_loader
    component = load_huggingface_component(estimated_config, component_name, lib_name, cls_name, local_path, component_sd)
  File "C:\stable-diffusion\webui_forge\webui\backend\loader.py", line 59, in load_huggingface_component
    assert isinstance(state_dict, dict) and len(state_dict) > 16, 'You do not have CLIP state dict!'
AssertionError: You do not have CLIP state dict!
You do not have CLIP state dict!
```
Has this been solved? I'm running into the same problem.
@kenny00968 yes, just use one of these models:
- flux1-dev-bnb-nf4-v2.safetensors Full flux-dev checkpoint with main model in NF4. <- Recommended
- flux1-dev-fp8.safetensors Full flux-dev checkpoint with main model in FP8.
> Yes, just use one of these models:
> - flux1-dev-bnb-nf4-v2.safetensors Full flux-dev checkpoint with main model in NF4. <- Recommended
> - flux1-dev-fp8.safetensors Full flux-dev checkpoint with main model in FP8.

I have the models, but running the SD3 model or a Flux model gives 'AssertionError: You do not have CLIP state dict! You do not have CLIP state dict!'. No image is shown, just this error.
I was having this issue today. It turns out you can't use the "official" Flux model from their Hugging Face page. You should use one of the mentioned models from lllyasviel's Hugging Face page.
The SD3 model doesn't work for me either. Does it work for you? I put the SD3 model's CLIP files into the Forge models folder and it still doesn't work, giving the same error, while other models work normally.
If you use the NF4 v2, you don't have to do anything.
If you use the others, like the Q4_0, you fill in the VAE/Text Encoder box with clip_l.safetensors, ae.safetensors, and t5xxl_fp8_e4m3fn.safetensors (ae, clip_l, and any T5, I guess).
If that dropdown has no such options, download them and put them in the clip and vae folders first.
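To make the folder targets concrete, here is a sketch of the default layout those files belong in. The `FORGE` path and exact folder names are assumptions based on a stock install (posts in this thread use both `models/text_encoder` and `models/text_encoders`, so check which one your build actually has):

```shell
# Assumed default layout under the Forge webui root; adjust FORGE if yours
# lives elsewhere. Filenames are the stock ones from this discussion.
FORGE=./stable-diffusion-webui-forge
mkdir -p "$FORGE/models/text_encoder" "$FORGE/models/VAE"
# clip_l.safetensors and a T5 (t5xxl_fp8_e4m3fn or t5xxl_fp16) go in:
echo "$FORGE/models/text_encoder"
# ae.safetensors (the Flux VAE) goes in:
echo "$FORGE/models/VAE"
```

After the files are in place, restart Forge (or refresh the dropdown) so they show up in the VAE/Text Encoder box.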
I'm assuming v2 has all those already baked in then
That error appears only if you use a GGUF model.
See here for the solution: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
But it's not faster and doesn't support LoRA at the moment.
I found the answer: the "official" FLUX model is different from the webui forge author's FLUX model (it may simply not support the "official" models; I tried putting the SD3 model and its required files in and got the same error). So when you use the "official" FLUX model, it fails with "AssertionError: You do not have CLIP state dict! You do not have CLIP state dict!". Hope this helps~
Wait, so you can only use an NF4 model, but you can't use LoRA with it?
> Same here 1060 3GB VRAM. ... or blue screen and PC resets.
Even the fp8 variants use way more memory. I've got 32 GB RAM and 12 GB VRAM, both maxed out, and I'm not sure how much data is shifted to the SSD page file. No surprise your PC gives up trying to load these.
I got that error until I figured out how to add all the extra files. If you're using the official models, you also need their VAE, called ae.safetensors, in your models/vae folder, and their text encoders clip_l.safetensors, t5xxl_fp8_e4m3fn.safetensors, and t5xxl_fp16.safetensors in your models/text_encoders folder. Then you select all of them in the VAE/Text Encoder dropdown in Forge. I've used both the official fp8 and fp32 on my 4090 with great results. The fp8 version is really no slower than SDXL, but the fp32 version runs out of GPU memory and does a little song and dance with it that takes a couple of minutes; on the plus side, it still finishes.
Came here from a Google search with the same issues. Thanks @Nabby109 for this explanation. I don't know why the Black Forest HF page doesn't tell anyone this!!
@kenny00968 yes, just use one of these models:
* [flux1-dev-bnb-nf4-v2.safetensors](https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4/blob/main/flux1-dev-bnb-nf4-v2.safetensors) Full flux-dev checkpoint with main model in NF4. <- Recommended
* [flux1-dev-fp8.safetensors](https://huggingface.co/lllyasviel/flux1_dev/blob/main/flux1-dev-fp8.safetensors) Full flux-dev checkpoint with main model in FP8.
Hello, why do you recommend the NF4? Also, which other plugins and LoRAs do you suggest using with the NF4 version? Thanks
> hello why you recommend the nf4? also which other plugins and loras you suggest to use with the nf4 version? thanks
NF4 is the next generation of the Flux model. It is engineered to be faster and more accurate than the initial version.
All you need to know, the files to download, and the paths to save them to are here: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
> I moved my clip_l.safetensors, and t5xxl_fp16.safetensors to the VAE folder, then selected the three options in the VAE/Text Encoder box and got it running. It is sloooooow though.
You can install xformers if you don't have it, and as I've read, some users reported faster rendering if you set Swap Method to Async and Swap Location to Shared. But when I tried it, I got a PC crash.
> Wait so, you can only use an NF4 model but you can't use Lora with it?
You can use LoRA. You need to change Diffusion in Low Bits to Automatic (fp16 LoRA).
> I moved my clip_l.safetensors, and t5xxl_fp16.safetensors to the VAE folder, then selected the three options in the VAE/Text Encoder box and got it running.
clip_l and t5 go in models/text_encoder, not in models/VAE.
> hello why you recommend the nf4? also which other plugins and loras you suggest to use with the nf4 version? thanks
I am now personally using and recommending the Q8 GGUF version, as well as the experimental quantized version of the T5 encoder. From what I have seen, it performs better than NF4.
> I am now personally using and recommending the Q8 GGUF version as well as the experimental Quantized version of the T5 encoder. From what I have seen it performs better than the NF4.
Can someone confirm this?
