onediff
CPU Model Offloading
Does oneflow support model offloading, like pipe.to('cpu'), while all of the graphs are loaded?
Can anyone please answer this? @doombeaker
Offloading like pipe.to('cpu') is not currently supported.
We are working on a feature like this (CPU offload), but it will not be ready soon.
Please tell us why you want this feature, and whether any other tool provides it.
I need this feature because I have an 8 GB GPU and cannot fit multiple models in GPU memory at once. I have to move models to the CPU in order to load any other model. This is easily doable with diffusers like this: pipe.enable_model_cpu_offload(). So I was wondering if that is possible with oneflow. Thanks
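For context, the diffusers call above works by keeping each pipeline component (text encoder, UNet, VAE) on the CPU and moving it to the GPU only for the duration of its own forward pass, so at most one component occupies GPU memory at a time. A minimal sketch of that mechanism in plain Python (the class and method names are illustrative, and device moves are simulated rather than real):

```python
class Model:
    """Toy stand-in for one pipeline component (e.g. UNet, text encoder)."""
    def __init__(self, name):
        self.name = name
        self.device = "cpu"  # every component starts offloaded to the CPU

    def to(self, device):
        self.device = device
        return self

    def forward(self, x):
        # A real module can only run while its weights are on the GPU.
        assert self.device == "cuda", f"{self.name} must be on the GPU to run"
        return x + 1


class OffloadPipeline:
    """Runs components sequentially, loading each onto the GPU just before
    its step and offloading it back to the CPU right after, so only one
    component's weights occupy GPU memory at any moment."""
    def __init__(self, *models):
        self.models = models

    def __call__(self, x):
        for m in self.models:
            m.to("cuda")       # load just this component onto the GPU
            x = m.forward(x)
            m.to("cpu")        # offload it before the next one loads
        return x


pipe = OffloadPipeline(Model("text_encoder"), Model("unet"), Model("vae"))
out = pipe(0)
print(out)                               # 3: each of the three steps ran once
print([m.device for m in pipe.models])   # all components back on the CPU
```

The trade-off is extra host-to-device transfer time on every call in exchange for a much smaller peak GPU footprint, which is why it helps on an 8 GB card.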
This thing is easily doable with diffusers like this: pipe.enable_model_cpu_offload().
This feature is not ready yet. We are thinking about it.
How much time could this take?
@strint, could you please answer?
I'll leave a comment as I need this mechanism too
This issue is too old to follow and will be closed. Feel free to reopen it and continue.