onediff
[Feature Request] fp8 support
If the model can be loaded in FP8, it seems to significantly reduce the VRAM requirements; SDXL then needs only about 3 GB of VRAM to run.
https://github.com/comfyanonymous/ComfyUI/blob/18c151b3e3f6838fab4028e7a8ba526e30e610d3/comfy/model_management.py#L495
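The savings are easy to sanity-check with a back-of-the-envelope estimate: FP8 stores one byte per weight versus two for FP16, so weight storage roughly halves. A minimal sketch, assuming the commonly cited figure of ~2.6 billion parameters for the SDXL UNet (actual VRAM usage also includes activations, the VAE, and text encoders):

```python
# Rough weight-storage footprint at different precisions.
# Assumes ~2.6B parameters for the SDXL UNet (publicly cited figure);
# this counts weights only, not activations or other model components.

SDXL_UNET_PARAMS = 2.6e9

def weight_vram_gb(num_params: float, bytes_per_param: int) -> float:
    """Return the weight-storage footprint in GB (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

fp16_gb = weight_vram_gb(SDXL_UNET_PARAMS, 2)  # two bytes per FP16 weight
fp8_gb = weight_vram_gb(SDXL_UNET_PARAMS, 1)   # one byte per FP8 weight
print(f"fp16 weights: ~{fp16_gb:.1f} GB, fp8 weights: ~{fp8_gb:.1f} GB")
```

The ~2.6 GB figure for FP8 weights is consistent with the roughly 3 GB total that ComfyUI reports once the remaining components are accounted for.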
FP8 is not supported at the moment, but it is on the roadmap (it may take a few months and land in our new version).
Ok, thank you very much.