StreamingT2V
StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
The issue with rm persists even after I modified the code related to it. I hope a compatible version will be released soon.
How can I run inference on multiple GPUs, such as RTX 4090s, since it needs much more than 24 GB of VRAM?
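A minimal sketch of one possible workaround, not the StreamingT2V API itself: naive model parallelism, where the pipeline stages are placed on different GPUs and activations are moved between them. The stage classes and names below are hypothetical placeholders standing in for the real modules.

```python
# Hypothetical sketch: split a two-stage pipeline across two GPUs.
# StageA/StageB are placeholders, NOT the actual StreamingT2V modules.
import torch
import torch.nn as nn

class StageA(nn.Module):          # placeholder for e.g. the base text-to-video stage
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(64, 64)
    def forward(self, x):
        return self.net(x)

class StageB(nn.Module):          # placeholder for e.g. the enhancement/upscaling stage
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(64, 64)
    def forward(self, x):
        return self.net(x)

def run_split_inference(x: torch.Tensor) -> torch.Tensor:
    assert torch.cuda.device_count() >= 2, "needs at least two GPUs"
    stage_a = StageA().to("cuda:0").eval()   # first stage lives on GPU 0
    stage_b = StageB().to("cuda:1").eval()   # second stage lives on GPU 1
    with torch.no_grad():
        h = stage_a(x.to("cuda:0"))          # run stage A on GPU 0
        out = stage_b(h.to("cuda:1"))        # move activations to GPU 1 for stage B
    return out.cpu()

if __name__ == "__main__":
    print(run_split_inference(torch.randn(1, 64)).shape)
```

This only helps if each individual stage fits on a single 24 GB card; whether the actual StreamingT2V stages do is something I have not verified.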
I just saw the VRAM requirements for this. It's clearly listed in the README, but somehow I missed it. Sadly, I downloaded a ton of things and waited for a while. I...