### Error loading model: missing text_projection.weight
### Your question
Environment:
- OS: Windows 10
- Python version: 3.11.9 (per the log below)
- PyTorch version: 2.4.1+cu124 (per the log below)
- ComfyUI version: revision 2754 [1b808952], released 2024-10-10 (per the log below)
Description:
I encountered an error when loading the model in ComfyUI. The error message states that text_projection.weight is missing.
Steps to Reproduce:
- Download the following model files and place them in G:\ComfyUI_windows_portable\ComfyUI\models\clip: pytorch_model.bin, config.json, tokenizer.json
- Start ComfyUI.
- Attempt to load the model.
Expected result: The model should load without errors.
Actual result: An error is thrown indicating that text_projection.weight is missing.
Additional information:
- I have checked the files and they seem to be complete (see the check sketched below).
- The issue also appears after updating PyTorch.
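For reference, a minimal sketch of how the presence of the key can be verified (assumes pytorch_model.bin is a plain PyTorch state dict; adjust the path to your install):

```python
import torch

# List projection-related keys in the downloaded checkpoint.
# Path taken from the steps above; adjust to your install.
path = r"G:\ComfyUI_windows_portable\ComfyUI\models\clip\pytorch_model.bin"
sd = torch.load(path, map_location="cpu", weights_only=True)

print("text_projection.weight" in sd)              # False -> the key is genuinely absent
print(sorted(k for k in sd if "projection" in k))  # projection keys that do exist
```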
### Logs
```
G:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-10-12 16:29:23.932413
** Platform: Windows
** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
** Python executable: G:\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: G:\ComfyUI_windows_portable\ComfyUI
** Log path: G:\ComfyUI_windows_portable\comfyui.log
Prestartup times for custom nodes:
0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
0.6 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
Total VRAM 24564 MB, total RAM 65349 MB
pytorch version: 2.4.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: G:\ComfyUI_windows_portable\ComfyUI\web
G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
### Loading: ComfyUI-Impact-Pack (V7.5.2)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.7)
[Impact Pack] Wildcards loading done.
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 2754 [1b808952] *DETACHED | Released on '2024-10-10'
[rgthree] Loaded 42 fantastic nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.
Import times for custom nodes:
0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation
0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-portrait-master-zh-cn
0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
0.2 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
0.5 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
Starting server
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
FETCH DATA from: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
G:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
Requested to load Flux
Loading 1 new model
loaded completely 0.0 11350.048889160156 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00, 1.52it/s]
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
G:\ComfyUI_windows_portable\ComfyUI\nodes.py:1506: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Prompt executed in 25.75 seconds
```
Did you find an answer?
I have the same issue with Flux and SD3.5, but it doesn't seem to affect the output?
Did you find an answer?
No.
I have the same issue.
No answer found yet...
No action needed. The message is just informational.
OK, so it can be ignored.
How can it be "just informational"? If the weights are missing, the CLIP text encoder will not work properly.
text_projection is used in the CLIP model and does not exist structurally in the T5 text encoder.
FLUX uses both CLIP and T5, so the message appears during the process of loading T5.
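For example, this difference can be seen with Hugging Face transformers (a minimal sketch; the model IDs are illustrative, not the exact files from this report):

```python
from transformers import CLIPTextModelWithProjection, T5EncoderModel

# CLIP text towers carry a text_projection layer; T5 encoders have no such key.
clip = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-large-patch14")
t5 = T5EncoderModel.from_pretrained("t5-small")  # small stand-in for T5-XXL

print(any("text_projection" in name for name, _ in clip.named_parameters()))  # True
print(any("text_projection" in name for name, _ in t5.named_parameters()))    # False
```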
Exactly, it uses both, and CLIP will fail due to the missing weights. It's an issue with the CLIP-L text encoder. If it is replaced with another one like ViT-L/14, there is no error message.
There was a part I had misunderstood.
text_projection.weight is the key for the projection applied to the pooled output, and many diffusion models do not use this path.
For this reason, some diffusion models exclude unnecessary keys from the text encoder when releasing their weights.
This is why text_projection.weight is not included in some versions of CLIP-L.
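Roughly, the computation the key would feed looks like this (a sketch only; shapes assume CLIP-L, and the tensors are random stand-ins):

```python
import torch

# CLIP projects the pooled (EOS-token) hidden state into the shared
# image/text embedding space via text_projection.weight.
last_hidden = torch.randn(1, 77, 768)  # text encoder output, seq_len 77, CLIP-L width
eos_pos = 5                            # position of the EOS token (example value)
pooled = last_hidden[:, eos_pos]       # pooled output, shape (1, 768)

W = torch.randn(768, 768)              # stand-in for the missing text_projection.weight
projected_pooled = pooled @ W.T        # only computed if the pooled path is used

# Pipelines that consume only the per-token hidden states never touch W,
# which is why a CLIP-L release can omit the key without breaking them.
```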