ComfyUI_Patches_ll
### Summary

Allows using Flux ControlNet together with the Flux Kontext ReferenceLatent.

---

### Description

Currently there's a shape mismatch in the **[forward_orig](https://github.com/comfyanonymous/ComfyUI/blob/03895dea7c4a6cc93fa362cd11ca450217d74b13/comfy/ldm/flux/model.py#L160)** method due to conditioning concatenation for Flux Kontext....
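The mismatch can be pictured without any ComfyUI internals: the Kontext ReferenceLatent appends extra reference tokens to the image token stream, while the ControlNet residual was computed for the original, shorter stream. The sketch below is a rough illustration using plain tuples for tensor shapes (all names and sizes are illustrative, not the repository's actual code):

```python
# Hypothetical illustration of the shape mismatch described above.
# Shapes are plain (batch, tokens, dim) tuples, not real tensors.

def add_controlnet_residual(img_shape, residual_shape):
    """Simulate adding a ControlNet residual to the image token stream.

    Raises ValueError on a token-count mismatch, mirroring the kind of
    error hit in forward_orig once Kontext lengthens the sequence.
    """
    if img_shape[1] != residual_shape[1]:
        raise ValueError(
            f"token mismatch: img has {img_shape[1]} tokens, "
            f"residual has {residual_shape[1]}"
        )
    return img_shape

batch, tokens, dim = 1, 4096, 3072
ref_tokens = 4096  # extra tokens appended by ReferenceLatent (assumed)

img = (batch, tokens + ref_tokens, dim)  # after Kontext concatenation
residual = (batch, tokens, dim)          # ControlNet saw only the base latent

try:
    add_controlnet_residual(img, residual)
except ValueError as e:
    print(e)  # token mismatch: img has 8192 tokens, residual has 4096
```

One plausible fix, under these assumptions, is to apply the residual only to the first `tokens` positions of the concatenated stream and leave the appended reference tokens untouched.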
Please support Nvidia's new Cosmos-Predict2 14B and 2B Text2Image models.
https://github.com/HiDream-ai/HiDream-I1
Do TeaCache and FirstBlockCache support GGUF models (https://github.com/city96/ComfyUI-GGUF) with only a small drop in image quality? If so, could anyone provide a ComfyUI workflow example?
WAN 2.1 I2V/T2V: I tested rel L1 thresh values of 0.12, 0.25, and 0.3, but none of them accelerated the process. However, setting FirstBlockCache to 0.1 can double...
Please support the Wan2.1 video model. Thanks!

LTX I2V
It doesn't work with LTX I2V, or am I doing something wrong?
Will other models be supported?
For example https://github.com/Isi-dev/ComfyUI-UniAnimate-W, this dancing one, or other dance plugins. It's awful that a dance clip of only a few seconds takes over half an hour. It would be much better if TeaCache and WaveSpeed supported them.
This node has no effect in a ComfyUI_PuLID_Flux_ll workflow: it's five times slower than not using it, and sometimes it even hangs.