ComfyUI-IDM-VTON
It would be good to add support for an fp16 model to save memory.
https://huggingface.co/IDM-VTON-F16 The previous ~12 GB memory requirement could drop to roughly 5 GB with an fp16 model, which would let more people run it. What do you think?
hey! the link seems to be broken... the current implementation already loads the model in torch.float16; I will look into how to load it in 8-bit.
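For rough context on the numbers above, here is a minimal sketch of how weight precision maps to VRAM for the model weights alone (activations and other buffers add more). It assumes, hypothetically, that the ~12 GB figure corresponds to fp32 weights:

```python
def weight_vram_gb(num_params: float, bytes_per_param: int) -> float:
    """Estimate VRAM (GiB) needed just to hold the weights."""
    return num_params * bytes_per_param / 1024**3

# Back out a parameter count from a hypothetical 12 GiB fp32 footprint.
params = 12 * 1024**3 / 4

fp32 = weight_vram_gb(params, 4)  # 4 bytes per param
fp16 = weight_vram_gb(params, 2)  # 2 bytes per param
int8 = weight_vram_gb(params, 1)  # 1 byte per param

print(fp32, fp16, int8)  # 12.0 6.0 3.0
```

So fp16 halves the weight footprint relative to fp32, and 8-bit quantization halves it again, which is consistent with the kind of savings being discussed.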
Are you running on the MPS accelerator?
This VRAM usage info might be useful:
From: https://github.com/yisol/IDM-VTON/issues/47