StableCascade

clip missing: ['clip_g.logit_scale'] Missing VAE keys ['encoder.mean', 'encoder.std'] clip missing: ['clip_g.logit_scale']

Open kenic123 opened this issue 1 year ago • 4 comments

It runs, but an error is reported when loading the model. Why is that?

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type STABLE_CASCADE
adm 0
clip missing: ['clip_g.logit_scale']
model_type STABLE_CASCADE
adm 0
Missing VAE keys ['encoder.mean', 'encoder.std']
clip missing: ['clip_g.logit_scale']
left over keys: dict_keys(['clip_l_vision.vision_model.embeddings.class_embedding', 'clip_l_vision.vision_model.embeddings.patch_embedding.weight', 'clip_l_vision.vision_model.embeddings.position_embedding.weight', 'clip_l_vision.vision_model.embeddings.position_ids', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.v_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.v_proj.weight', 'clip_l_vision.vision_model.encoder.layers.10.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.10.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.10.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.10.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.10.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.10.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.10.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.10.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.10.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.10.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.10.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.10.self_attn.out_proj.weight',

kenic123 · Feb 21 '24 03:02
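If it helps to read the log: "Missing VAE keys" and "clip missing" flag tensors the loader expected but did not find, while "left over keys" lists tensors the checkpoint bundles (here a clip_l_vision image encoder) that this loader does not use; all three appear to be warnings rather than the actual failure. To see which sub-models a checkpoint actually contains, counting its key prefixes is enough. A minimal sketch, assuming a local .safetensors file whose name below is a placeholder:

```python
# Count tensors per top-level key prefix to see which sub-models a
# checkpoint bundles (e.g. clip_g, clip_l_vision, vae, model).
from collections import Counter
from safetensors import safe_open

ckpt_path = "stable_cascade_stage_c.safetensors"  # placeholder path

with safe_open(ckpt_path, framework="pt") as f:
    prefixes = Counter(key.split(".")[0] for key in f.keys())

for prefix, count in sorted(prefixes.items()):
    print(f"{prefix}: {count} tensors")
```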

I'm seeing the same issue. Rebooted, ran updates. Running on a Macintosh.

DavidEBell · Feb 21 '24 15:02

I have something similar.

model_type STABLE_CASCADE
adm 0
Missing VAE keys ['encoder.mean', 'encoder.std']
clip missing: ['clip_g.logit_scale']
left over keys: dict_keys(['clip_l_vision.vision_model.embeddings.class_embedding', 'clip_l_vision.vision_model.embeddings.patch_embedding.weight', 'clip_l_vision.vision_model.embeddings.position_embedding.weight', 'clip_l_vision.vision_model.embeddings.position_ids', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.0.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.0.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.0.self_attn.v_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.1.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.1.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.1.self_attn.v_proj.weight',

(...)

'clip_l_vision.vision_model.encoder.layers.6.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.6.self_attn.v_proj.weight', 'clip_l_vision.vision_model.encoder.layers.7.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.7.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.7.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.7.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.7.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.7.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.7.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.7.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.7.self_attn.v_proj.weight', 'clip_l_vision.vision_model.encoder.layers.8.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.8.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.8.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.8.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.8.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.8.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.8.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.8.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.8.self_attn.v_proj.weight', 'clip_l_vision.vision_model.encoder.layers.9.layer_norm1.bias', 'clip_l_vision.vision_model.encoder.layers.9.layer_norm1.weight', 'clip_l_vision.vision_model.encoder.layers.9.layer_norm2.bias', 'clip_l_vision.vision_model.encoder.layers.9.layer_norm2.weight', 'clip_l_vision.vision_model.encoder.layers.9.mlp.fc1.bias', 'clip_l_vision.vision_model.encoder.layers.9.mlp.fc1.weight', 'clip_l_vision.vision_model.encoder.layers.9.mlp.fc2.bias', 'clip_l_vision.vision_model.encoder.layers.9.mlp.fc2.weight', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.k_proj.bias', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.k_proj.weight', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.out_proj.bias', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.out_proj.weight', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.q_proj.bias', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.q_proj.weight', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.v_proj.bias', 'clip_l_vision.vision_model.encoder.layers.9.self_attn.v_proj.weight', 'clip_l_vision.vision_model.post_layernorm.bias', 'clip_l_vision.vision_model.post_layernorm.weight', 'clip_l_vision.vision_model.pre_layrnorm.bias', 'clip_l_vision.vision_model.pre_layrnorm.weight', 'clip_l_vision.visual_projection.weight'])
model_type STABLE_CASCADE
adm 0
clip missing: ['clip_g.logit_scale']
Requested to load StableCascadeClipModel
Loading 1 new model
Requested to load StableCascade_C

chabonmental · Feb 24 '24 00:02
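Every leftover key in the dump above shares the clip_l_vision prefix, which suggests the checkpoint bundles a CLIP vision tower that the loader simply leaves unused, rather than anything being corrupt. For illustration only (a toy state dict, not ComfyUI's actual loader code), this is the kind of prefix split a loader performs when routing tensors to sub-models:

```python
# Split a combined state dict into the part under a given prefix (with the
# prefix stripped) and everything else, the way a checkpoint loader routes
# tensors to sub-models; anything left unrouted becomes "left over keys".
def split_by_prefix(state_dict, prefix):
    matched, rest = {}, {}
    for key, value in state_dict.items():
        if key.startswith(prefix + "."):
            matched[key[len(prefix) + 1:]] = value
        else:
            rest[key] = value
    return matched, rest

toy_sd = {
    "clip_l_vision.visual_projection.weight": "...",
    "clip_g.logit_scale": "...",
    "vae.encoder.mean": "...",
}
vision_sd, rest_sd = split_by_prefix(toy_sd, "clip_l_vision")
print(sorted(vision_sd))  # ['visual_projection.weight']
print(sorted(rest_sd))    # ['clip_g.logit_scale', 'vae.encoder.mean']
```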

Has this been fixed? If so, how? I have the same problem, please...

bkdsb · Mar 03 '24 22:03

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
'🔥 - 57 Nodes not included in prompt but is activated'
model_type STABLE_CASCADE
adm 0
clip missing: ['clip_g.logit_scale']
model_type STABLE_CASCADE
adm 0
clip missing: ['clip_g.logit_scale']
Requested to load StableCascadeClipModel
Loading 1 new model
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Belegante\Desktop\ComfyUI_windows_portable\ComfyUI\nodes.py", line 904, in encode
    output = clip_vision.encode_image(image)
             ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'encode_image'

Prompt executed in 17.34 seconds

bkdsb · Mar 03 '24 23:03
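In the traceback above the real failure is that clip_vision is None when nodes.py calls encode_image, i.e. no CLIP vision model reached the CLIPVisionEncode node; the "clip missing" warnings earlier in the log are separate and typically benign. A guard like the following would surface the cause directly (an illustrative sketch built around the single call shown in the traceback, not ComfyUI's actual fix):

```python
# Hypothetical wrapper around the failing call from the traceback: raise a
# descriptive error instead of an AttributeError when no CLIP vision model
# was loaded upstream.
def encode_image_checked(clip_vision, image):
    if clip_vision is None:
        raise RuntimeError(
            "No CLIP vision model is loaded. Load one explicitly (or use a "
            "checkpoint that bundles it) before running CLIPVisionEncode."
        )
    return clip_vision.encode_image(image)
```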