Could someone please help?
Your question
Could someone please help? After cloning kijai's ComfyUI-FramePackWrapper into custom_nodes with git, I get the error "clip missing: ['text_projection.weight']" when running the workflow. Additionally, when I upload an image with a text description, the generated video doesn't match what I expect.
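For reference, here is a minimal diagnostic sketch of my own (not part of the wrapper) that lists the tensors stored in a .safetensors checkpoint, so one can check whether text_projection.weight is actually absent from clip_l.safetensors; the path assumes the layout listed below.

```python
# Minimal sketch: list tensor names in a .safetensors file and check for
# text_projection.weight. Path assumes the layout described below.
from safetensors import safe_open

clip_path = "/apps/ComfyUI/models/text_encoders/clip_l.safetensors"

with safe_open(clip_path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

print(f"{len(keys)} tensors found")
print("text_projection.weight present:", "text_projection.weight" in keys)
```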
My environment:
Ubuntu 22.04
Python 3.10.6
PyTorch version: 2.6.0+cu124
Model directory setup (referenced from GitHub):
/apps/ComfyUI/models/clip_vision: Contains sigclip_vision_patch14_384.safetensors
/apps/ComfyUI/models/text_encoders: Contains llava_llama3_fp16.safetensors, llava_llama3_fp8_scaled.safetensors, and clip_l.safetensors
/apps/ComfyUI/models/vae: Contains hunyuan_video_vae_bf16.safetensors
/apps/ComfyUI/models/diffusers/lllyasviel/FramePackI2V_HY/: Contains 3 diffusion_pytorch_model-0000x-of-00003.safetensors files, diffusion_pytorch_model.safetensors.index.json, and config.json.
Workflow file: framepack_hv_example.json.
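To rule out path problems, here is a minimal sketch that checks whether every file listed above is where ComfyUI expects it. The paths are the ones from this report (adjust for your install); the three sharded diffusion weights are matched by a glob pattern rather than spelled out.

```python
# Minimal sketch: verify the model files listed above exist at the expected
# ComfyUI locations. Paths are copied from this report.
from pathlib import Path

expected = [
    "/apps/ComfyUI/models/clip_vision/sigclip_vision_patch14_384.safetensors",
    "/apps/ComfyUI/models/text_encoders/llava_llama3_fp16.safetensors",
    "/apps/ComfyUI/models/text_encoders/llava_llama3_fp8_scaled.safetensors",
    "/apps/ComfyUI/models/text_encoders/clip_l.safetensors",
    "/apps/ComfyUI/models/vae/hunyuan_video_vae_bf16.safetensors",
    "/apps/ComfyUI/models/diffusers/lllyasviel/FramePackI2V_HY/diffusion_pytorch_model.safetensors.index.json",
    "/apps/ComfyUI/models/diffusers/lllyasviel/FramePackI2V_HY/config.json",
]

for path in expected:
    print("OK      " if Path(path).is_file() else "MISSING ", path)

# The three sharded diffusion weights are matched by pattern instead of name.
shard_dir = Path("/apps/ComfyUI/models/diffusers/lllyasviel/FramePackI2V_HY")
shards = sorted(shard_dir.glob("diffusion_pytorch_model-*-of-00003.safetensors"))
print(f"diffusion shards found: {len(shards)} of 3 expected")
```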
Logs
Other
No response
Help me, please.
You can consult ChatGPT -- it is good at these things. I've used this one (free, no registration): https://lmarena.ai/ When the page loads, press OK to agree to the terms of service, then type your question (in your native language) in the box at the bottom, near the clip icon. ChatGPT did answer!
This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.