comfyanonymous
Model: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/chrono_edit_14B_fp16.safetensors
Workflow:
It's only available in the git version of ComfyUI; it will be in the stable release next week.
Should be fixed now.
https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_patcher.py#L258 You can call .state_dict() on the MODEL object.
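For anyone who wants to do this from a custom node, here is a minimal sketch. The node name, category, and output format are hypothetical; the only thing taken from the linked file is that the MODEL input is a comfy.model_patcher.ModelPatcher with a .state_dict() method:

# Hypothetical debug node; only the MODEL -> ModelPatcher.state_dict()
# part comes from the linked source, the rest is illustrative.
class InspectModelStateDict:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"model": ("MODEL",)}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "inspect"
    CATEGORY = "utils/debug"

    def inspect(self, model):
        # state_dict() returns the weight dict; report key names and
        # tensor shapes instead of dumping the tensors themselves.
        sd = model.state_dict()
        report = "\n".join(f"{k}: {tuple(v.shape)}" for k, v in sd.items())
        return (report,)

NODE_CLASS_MAPPINGS = {"InspectModelStateDict": InspectModelStateDict}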
You can now use --supports-fp8-compute to make that function return True. I'm very curious if using that argument + selecting the fp8_e4m3fn_fast dtype in the Load Diffusion Model node actually...
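To try it, pass the flag when launching ComfyUI (the standard python main.py entry point is assumed here), then pick the dtype in the node's weight_dtype widget:

python main.py --supports-fp8-compute
# then set weight_dtype to fp8_e4m3fn_fast in the Load Diffusion Model node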
For people who do get a speedup: can you post your full ComfyUI log so I can see which PyTorch version, arch, ROCm version, etc. you are using so I can...
https://github.com/comfyanonymous/ComfyUI/commit/97755eed46ccb797cb14a692a4c2931ebf3ad60c This should enable it by default for gfx1201,
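If you're not sure which arch your card reports, a quick check on a ROCm build of PyTorch looks like this (gcnArchName is a ROCm-specific device property):

import torch

# On ROCm builds, the device properties expose the GPU arch string.
print(torch.cuda.get_device_properties(0).gcnArchName)  # e.g. "gfx1201"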
Use the regular Load LoRA node.
Have you tried updating the GGUF node and ComfyUI?
Stop using block swap; it's completely useless with native models.