ComfyUI
I'm getting this error: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
I don't know if this affects anything, but when I generate I get this:
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Loading 1 new model
C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
I'm getting the same errors
I have the same error when trying to merge models on comfy, using ModelMergeSimple and CheckpointSave.
[...]
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.transformer.text_projection.weight']
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
[...]
The model seems to merge and save successfully, and it can even generate images correctly in the same workflow. But when I inspect the resulting model with the stable-diffusion-webui-model-toolkit extension, it reports the unet and vae as broken and the clip as junk (it doesn't recognize it). I noticed model merge was broken because I could no longer use the resulting model to train a LECO with p1atdev's scripts:
Traceback (most recent call last):
File "D:\SDTraining\LECO\train_lora.py", line 343, in <module>
main(args)
File "D:\SDTraining\LECO\train_lora.py", line 330, in main
train(config, prompts)
File "D:\SDTraining\LECO\train_lora.py", line 57, in train
tokenizer, text_encoder, unet, noise_scheduler = model_util.load_models(
File "D:\SDTraining\LECO\model_util.py", line 114, in load_models
tokenizer, text_encoder, unet = load_checkpoint_model(
File "D:\SDTraining\LECO\model_util.py", line 83, in load_checkpoint_model
pipe = StableDiffusionPipeline.from_single_file(
File "D:\SDTraining\LECO\venv\lib\site-packages\diffusers\loaders.py", line 1922, in from_single_file
pipe = download_from_original_stable_diffusion_ckpt(
File "D:\SDTraining\LECO\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1534, in download_from_original_stable_diffusion_ckpt
text_model = convert_ldm_clip_checkpoint(
File "D:\SDTraining\LECO\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 802, in convert_ldm_clip_checkpoint
set_module_tensor_to_device(text_model, param_name, "cpu", value=param)
File "D:\SDTraining\LECO\venv\lib\site-packages\accelerate\utils\modeling.py", line 265, in set_module_tensor_to_device
new_module = getattr(module, split)
File "D:\SDTraining\LECO\venv\lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'CLIPTextModel' object has no attribute 'text_projection'
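For anyone who wants to check their own merged file, the quickest test is whether the saved checkpoint still contains keys for the text projection and logit scale, which is what diffusers' conversion trips over. A minimal sketch (a plain dict stands in for the state dict; in practice you would load the keys with safetensors, and the key names here are illustrative, not exact):

```python
# A plain dict stands in for a checkpoint's state dict; in practice you would
# load it with safetensors.torch.load_file("merged.safetensors") and only
# inspect the keys. Key names below are illustrative, not exact.
def clip_projection_keys(sd):
    """Return the keys that carry the CLIP text projection or logit scale."""
    return sorted(k for k in sd if "text_projection" in k or "logit_scale" in k)

intact = {
    "cond_stage_model.logit_scale": "...",
    "cond_stage_model.text_projection": "...",
    "model.diffusion_model.input_blocks.0.0.weight": "...",
}
merged_by_comfy = {
    # the two keys were popped at save time, so only other tensors remain
    "model.diffusion_model.input_blocks.0.0.weight": "...",
}

print(clip_projection_keys(intact))           # both keys present
print(clip_projection_keys(merged_by_comfy))  # empty list: conversion will fail
```

If the second check comes back empty on your merged file, you are hitting the same save-time key stripping described below.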
It only really happened after updating Comfy. I did a fresh install and it was fine before updating. However, I did not try it before reinstalling the nodes, so I'm not sure whether a custom node is causing this.
Yes, all of this started after updating ComfyUI. I still can't use model merge, and with multiple LoRAs I get bad generations and sometimes noised images.
I have the same problem: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Same here. Everything worked before the update.
I have the same issue: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight'] I get a black (blank) image at the end of the render.
I had the same errors using portable ComfyUI with the ipadapter-plus workflow. The issue is related to the two clip vision models for IPAdapter-Plus. The two models have the same name, "model.safetensor". I had put them in separate folders under another UI's /model/clip-vision, but that still did not work. I had to put the two folders in the comfyui/model/clip-vision folder, and then the errors were gone. One of my folders is named sdxl-something, the other sd1.5.
I'm not using IPAdapter. I had just started ComfyUI and generated one image with the default workflow, and got this error: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
I got the same message but the output seems fine.
I have seen this issue in a discussion group; their conclusion was that some parameter names were mistakenly changed in the last update.
I have the same issue.
got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
0%| | 0/20 [00:00<?, ?it/s]terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA
to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f245b4ced87 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f245b47f75f in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f245b59f8a8 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3:
I am doing a clean install. After installing the pytorch modules and then requirements.txt, I put my old models in the model directory and did a quick generation to make sure the default workflow was working. Then I installed ComfyUI Manager and started installing a bunch of my old custom nodes one by one. I was looking for errors on install and restart, but I didn't pay attention to warnings or errors in the image generation part until I had about 10 or 15 nodes installed, when I noticed the same issue in my output on the fresh load of any model (after the initial load everything is fine, no warning).
I looked at my old clip_vision directory and it has the models in separate directories as well, so I copied them over (SDXL and SD1.5), refreshed, and restarted, but the same warning/error message is still there.
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|██████████| 20/20 [00:00<00:00, 22.98it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 2.76 seconds
Not sure if that means anything, but I just thought I'd mention that your solution didn't solve my problem.
I will be doing another clean install later this week or next week (I'm testing out the Fedora Silverblue "immutable" system to see if it is viable, which it seems to be, except maybe for DaVinci Resolve Studio: it seems to work until you try to do anything, then it can't access the memory it recognizes in its own configuration. My guess is that the Studio version is messing up on licensing, able to activate against their license server but unable to fully initialize GPU access while in a container environment.) I will try to pay better attention to when this error starts to show up.
I can confirm that after a fresh install, with the only addition being models in the models/checkpoints directory so it can generate anything, the same message is still there.
Note: the 'clip missing' message only appears when I first load a model (first run, or changing to a new model); once the model is loaded it is silent until I use a new model.
DOH!!!! Okay, since we have the code and lots of documentation, I took a quick look:
Short answer: it's a logging message, so it can be ignored, unless you don't believe me (and why should you?) or you do believe me and are still curious!
Longer answer: the Comfy docs say it can auto-configure the model when you load it, and this message seems to come from part of that process (load_checkpoint_guess_config()), so it is doing some kind of comparison between the model and its 'database' of models, probably with the (partial?) purpose of doing the auto config.
I didn't look at the code in detail, but my guess is that this is either a notice of parameters the loaded model doesn't implement, or parameters that have no specific definition in ComfyUI, OR the model has those parameters but ComfyUI doesn't handle them?
So it just logs the information and we can make of it what we will, although it would be nice to know what we should/can do with that info?
Does it mean those parameters are:
- missing from the model, and we can't do anything?
- missing from ComfyUI support, and we still can't do anything?
- not set to any 'model default', so Comfy sets its own default?
- not set by ComfyUI, which uses the 'model default', if any?
- something else?
If you have some coding skills, you can look in the class/function I mentioned (it's in sd.py) and follow along to see what you can learn about this that might be useful in your day-to-day understanding of how SD models work.
I'm just doing a fresh install of my workstation and VSCode is not set up yet (and I don't remember all the git commands), so after I finish that I may start looking at the code ... it seems to be a potential source of some serious 'understanding' that might come in handy later ;-)
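My rough mental model of that check, as a sketch (pure Python; the function and variable names are illustrative, not ComfyUI's actual code): the loader compares the keys the CLIP model definition expects against the keys actually present in the checkpoint, and logs whatever is absent.

```python
def report_missing_clip_keys(expected_keys, state_dict):
    """Log-style check: list the keys the CLIP model expects but the
    checkpoint does not provide. The module keeps its initialized
    defaults for those entries; nothing is fixed or changed here."""
    missing = [k for k in expected_keys if k not in state_dict]
    if missing:
        print("clip missing:", missing)
    return missing

# Illustrative key sets, mirroring the warning in this thread.
expected = [
    "clip_l.logit_scale",
    "clip_l.transformer.text_projection.weight",
    "clip_l.transformer.text_model.embeddings.position_ids",
]
loaded = {"clip_l.transformer.text_model.embeddings.position_ids": "..."}
missing = report_missing_clip_keys(expected, loaded)
```

On this reading the message is purely informational at load time; the real damage described earlier in the thread happens on the save side, where those same keys get stripped.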
I have this message - but does not stop successful output!
I also have this problem, but I still get an image. What can I do to fix it, please?
got prompt
[rgthree] Using rgthree's optimized recursive execution.
Prompt executor has been patched by Job Iterator!
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Requested to load SDXLClipModel
I'm getting this message too, but if I use the correct VAE things work as normal.
so, no one knows why or where it's coming from?
Here is the solution: edit comfy\supported_models.py and make the pop_keys list empty:
def process_clip_state_dict_for_saving(self, state_dict):
    # pop_keys = ["clip_l.transformer.text_projection.weight", "clip_l.logit_scale"]
    pop_keys = []
    for p in pop_keys:
        if p in state_dict:
            state_dict.pop(p)
you're welcome.
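To illustrate what emptying pop_keys changes: with the original list, the two CLIP tensors are dropped from the state dict at save time, which is why the re-loaded checkpoint reports them missing; with an empty list, everything survives. A standalone sketch (not ComfyUI's actual save path, just the same loop in isolation):

```python
def process_for_saving(state_dict, pop_keys):
    # Mirrors the loop in supported_models.py: drop listed keys before saving.
    for p in pop_keys:
        if p in state_dict:
            state_dict.pop(p)
    return state_dict

sd = {
    "clip_l.logit_scale": 1.0,
    "clip_l.transformer.text_projection.weight": "tensor",
    "clip_l.transformer.text_model.final_layer_norm.weight": "tensor",
}

# Original behaviour: the two keys are removed, so the saved file triggers
# "clip missing" on the next load (and breaks diffusers' conversion).
stripped = process_for_saving(
    dict(sd),
    ["clip_l.transformer.text_projection.weight", "clip_l.logit_scale"],
)
# Patched behaviour: nothing is popped, all keys survive.
kept = process_for_saving(dict(sd), [])
```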
Seems this was fixed in https://github.com/comfyanonymous/ComfyUI/commit/93e876a3bed0cff640ba922f36786957ed68ef6e
Nope it does not seem like that: (today)
Loading: ComfyUI-Manager (V2.30)
ComfyUI Revision: 2167 [cd07340d] | Released on '2024-05-08'
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
93e876a is the commit right after yours (cd07340). You need to update again.