StreamingT2V
How to run inference on multiple GPUs
How can I run inference across multiple GPUs, such as RTX 4090s, since the model needs much more than 24 GB of VRAM?
Hi @AllenDun, thank you for your interest in our project.
There is currently no multi-GPU implementation. We are working on reducing the memory requirements.
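
In the meantime, a common workaround for diffusers-based video pipelines is sequential CPU offload, which keeps only the currently active submodule on the GPU and can bring peak VRAM under 24 GB at the cost of speed. The sketch below is a hypothetical illustration using the ModelScope text-to-video base model that StreamingT2V builds on; the pipeline class, checkpoint, and prompt are assumptions, not the project's actual API.

```python
# Minimal sketch, assuming a diffusers-style pipeline (not StreamingT2V's own API).
import torch
from diffusers import TextToVideoSDPipeline  # stand-in pipeline for illustration

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # ModelScope base model (assumed checkpoint)
    torch_dtype=torch.float16,           # half precision roughly halves weight memory
)

# Moves each submodule to the GPU only while it executes, then back to CPU RAM,
# trading inference speed for a much lower peak VRAM footprint.
pipe.enable_model_cpu_offload()

video_frames = pipe("A cat surfing a wave", num_frames=16).frames
```

Note that this reduces memory on a single GPU rather than parallelizing across several; true multi-GPU inference would require the model-sharding support the maintainers mention above.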