StreamingT2V
Warning message: Enabling CPU offloading option for models
File: model_init.py
pipe.enable_model_cpu_offload()
return pipe.to(device)
It seems that after enabling the CPU offloading option, the model is still sent to the CUDA device. This happens in a number of the model initializations. The correct version would appear to be:
pipe.enable_model_cpu_offload()
return pipe
For the cases where model offloading is not enabled, keep: return pipe.to(device)
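A minimal sketch of the suggested fix (the `init_pipeline` helper and the dummy pipeline class below are illustrative, not actual StreamingT2V code). The point is that `enable_model_cpu_offload()` manages device placement itself, so calling `.to(device)` afterwards moves the model back onto CUDA and defeats the offloading:

```python
class DummyPipe:
    """Stand-in for a diffusers pipeline, for illustration only."""
    def __init__(self):
        self.offload_enabled = False
        self.device = "cpu"

    def enable_model_cpu_offload(self):
        # In diffusers, this installs hooks that move submodules to the
        # GPU on demand; the pipeline itself should stay where it is.
        self.offload_enabled = True

    def to(self, device):
        self.device = device
        return self


def init_pipeline(pipe, device, enable_cpu_offload=False):
    if enable_cpu_offload:
        # Offloading path: do NOT follow up with pipe.to(device),
        # or the whole model lands back on CUDA.
        pipe.enable_model_cpu_offload()
        return pipe
    # No offloading: move the pipeline to the target device as before.
    return pipe.to(device)
```

With this split, the offloading branch leaves device placement to the offload hooks, while the non-offloading branch behaves exactly as the current code does.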
Thanks,