Slow Inference with Flux Model on ComfyUI using PyTorch 2.4.0
Your question
Environment
- ComfyUI: Latest version (installed via official website instructions)
- GPU: NVIDIA RTX 4080 (12GB VRAM)
- RAM: 32GB
- CUDA: 12.1
- PyTorch: 2.4.0 (issue occurs) / 2.3.1 (better performance)
Issue Description
I recently installed the latest version of ComfyUI following the official website instructions. Inference with Flux models is dramatically slower under PyTorch 2.4.0 than under PyTorch 2.3.1; switching the PyTorch version is the only change between the two runs.
Observed Behavior
- With PyTorch 2.4.0: Extremely slow inference (approximately 14-15 seconds per iteration)
- With PyTorch 2.3.1: Much faster inference (approximately 1.5-2 seconds per iteration)
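To put the two timings above on one number, here is a minimal sketch that computes the per-iteration slowdown factor from the midpoints of the quoted ranges (the midpoint values are my own reading of the ranges, not separate measurements):

```python
# Midpoints of the iteration times quoted above (assumed, not re-measured).
SECONDS_PER_IT_TORCH_240 = 14.5   # midpoint of 14-15 s/it on PyTorch 2.4.0
SECONDS_PER_IT_TORCH_231 = 1.75   # midpoint of 1.5-2 s/it on PyTorch 2.3.1

def slowdown_factor(slow: float, fast: float) -> float:
    """Return how many times slower `slow` is than `fast`."""
    return slow / fast

if __name__ == "__main__":
    factor = slowdown_factor(SECONDS_PER_IT_TORCH_240, SECONDS_PER_IT_TORCH_231)
    # Roughly an 8x regression per sampling iteration.
    print(f"PyTorch 2.4.0 is about {factor:.1f}x slower per iteration")
```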
Expected Behavior
I would expect Flux models to run at comparable speeds across PyTorch versions, or at least not show a roughly 8x per-iteration regression.
Additional Information
- The installation process was smooth and followed the official guidelines.
- No error messages are displayed; the issue is purely related to performance.
- This slowdown is specific to Flux models when using PyTorch 2.4.0.
- Switching to PyTorch 2.3.1 resolves the performance issue, but I'd prefer to use the latest PyTorch version if possible.
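Until the root cause is found, pinning the known-good version (e.g. `pip install torch==2.3.1` with the matching CUDA index URL) is the workaround I'm using. As a sketch, a launcher script could warn when the slow build is active; the helper names below are my own, not part of the ComfyUI API:

```python
# Hypothetical guard for a launcher script: warn before startup if the installed
# torch build is the one that showed slow Flux inference in my tests.
from importlib import metadata

KNOWN_SLOW = "2.4.0"   # ~8x slower per iteration in my tests
KNOWN_FAST = "2.3.1"   # version the slowdown disappeared on

def parse_version(v: str) -> tuple:
    """Turn '2.4.0' or '2.4.0+cu121' into a comparable (2, 4, 0) tuple."""
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

def is_known_slow(installed: str) -> bool:
    """True if `installed` matches the version with the Flux regression."""
    return parse_version(installed) == parse_version(KNOWN_SLOW)

def check_torch() -> None:
    try:
        installed = metadata.version("torch")
    except metadata.PackageNotFoundError:
        print("torch is not installed")
        return
    if is_known_slow(installed):
        print(f"warning: torch {installed} shows slow Flux inference; "
              f"consider pinning torch=={KNOWN_FAST}")
```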
Questions
- Is this a known issue with Flux models on ComfyUI when using PyTorch 2.4.0?
- What could be causing such a significant performance difference between PyTorch 2.4.0 and 2.3.1 for Flux models?
- Are there any workarounds or optimizations to improve Flux model performance with PyTorch 2.4.0?
- Is this likely to be resolved in future updates, or should I continue using PyTorch 2.3.1 for optimal performance?
Any insights into this performance discrepancy or suggestions for using Flux models with the latest PyTorch version would be greatly appreciated. Thank you for your time and assistance.
Logs
No response
Other
No response