ComfyUI
--fast issues with Flux
Expected Behavior
Performance boost without errors
Actual Behavior
- In a multi-GPU configuration, `supports_fp8_compute` returns `False`, so the fp8 optimizations are not applied, even though the main GPU actually supports them (I have a 4090 and a 3090).
- After forcing it to `True`, it works fine, but with LoRAs loaded the terminal prints errors for `single_blocks` 30 through 37 at every sampling step. The LoRAs themselves still work correctly.
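For context, a minimal sketch of why a mixed-GPU setup can fail this check (function names and thresholds are my assumptions, not ComfyUI's actual code): fp8 tensor-core compute requires CUDA compute capability 8.9 or newer, and an all-devices check fails as soon as one GPU is older.

```python
def supports_fp8_compute(capability: tuple[int, int]) -> bool:
    """Hypothetical check: FP8 (e4m3/e5m2) compute needs capability >= 8.9 (Ada/Hopper)."""
    return capability >= (8, 9)

def all_gpus_support_fp8(capabilities: list[tuple[int, int]]) -> bool:
    # A conservative multi-GPU check fails if ANY device lacks fp8 support,
    # even when the main sampling GPU does support it.
    return all(supports_fp8_compute(c) for c in capabilities)

# RTX 4090 is sm_89 (Ada); RTX 3090 is sm_86 (Ampere).
print(all_gpus_support_fp8([(8, 9), (8, 6)]))  # → False
```

This would explain the reported behavior: the 4090 alone passes, but the 3090 drags the combined check to `False`.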
Steps to Reproduce
Debug Logs
ERROR lora diffusion_model.single_blocks.30.linear1.weight "addmm_cuda" not implemented for 'Float8_e4m3fn'
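The error itself says PyTorch has no `addmm` CUDA kernel for `Float8_e4m3fn`, so when a LoRA patch runs a matmul against a weight still stored in fp8, the op fails. A hedged sketch of one possible fallback (the dtype sets and function are illustrative assumptions, not ComfyUI's implementation): upcast such weights to a supported dtype before the matmul.

```python
# Assumption: plain addmm kernels exist only for these dtypes; fp8 matmuls
# go through separate scaled-mm paths, not addmm.
ADDMM_OK = {"float16", "bfloat16", "float32"}

def matmul_dtype(weight_dtype: str) -> str:
    # Hypothetical fallback: a LoRA-patched weight left in Float8_e4m3fn
    # would be upcast to bfloat16 for the matmul instead of erroring.
    if weight_dtype not in ADDMM_OK:
        return "bfloat16"
    return weight_dtype

print(matmul_dtype("float8_e4m3fn"))  # → bfloat16
```

That the LoRAs still produce correct output suggests some path already recovers from the error per layer, but the per-step log spam remains.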
Other
No response