DiT
[DiT video]
Dear authors,
Do you have a video-version DiT implementation? I believe one would be hugely valuable for the community!
Thanks!
Incorporating elements similar to Sora into this architecture should be feasible: https://openai.com/research/video-generation-models-as-world-simulators. This would involve adding the time dimension to the patches (Sora's "spacetime patches"). But incorporating text prompts would probably also be a big step.
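For reference, here's the kind of thing I mean (just a minimal sketch: the `SpacetimePatchEmbed` name, channel counts, and patch sizes are all placeholders I made up, not anything from the DiT repo):

```python
import torch
import torch.nn as nn

class SpacetimePatchEmbed(nn.Module):
    """Cut a (latent) video into non-overlapping spacetime patches."""
    def __init__(self, in_channels=4, embed_dim=1152, patch=(2, 2, 2)):
        super().__init__()
        # Conv3d with stride == kernel size performs non-overlapping patchification
        self.proj = nn.Conv3d(in_channels, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):  # x: (B, C, T, H, W), e.g. a latent video
        x = self.proj(x)                     # (B, D, T', H', W')
        return x.flatten(2).transpose(1, 2)  # (B, T'*H'*W', D) token sequence

latents = torch.randn(1, 4, 16, 32, 32)  # 16 latent frames of 32x32
tokens = SpacetimePatchEmbed()(latents)  # (1, 8*16*16, 1152)
```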
I'm currently working on implementing these ideas in my free time and would welcome collaboration. If you're interested in exploring the journey of creating an open-source Sora with me, please join the discussion on Discord: https://discord.gg/2WZMXfnq
Hi, the link has expired. Could you share the Discord invite again?
Yes, I'd be interested in collaborating too. I've already set up a ViViT (Video Vision Transformer) architecture with this DiT as a reference.
If you look at Sora, they also reference the ViViT paper.
To incorporate text, you would add a cross-attention layer to both the spatial and temporal DiT blocks (see the sketch below).
In ViViT they use factorized attention (a spatial block followed by a temporal block) along with 3D conv embeddings. That's pretty standard for ViViT, and I don't believe Sora is any different.
They use CLIP for text embeddings and prompts.
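Roughly, one block could look like this (just a sketch: `FactorizedBlock`, the dimensions, and the norm placement are my own choices, and I'm leaving out DiT's adaLN timestep conditioning for brevity):

```python
import torch
import torch.nn as nn

class FactorizedBlock(nn.Module):
    """Factorized spatio-temporal attention plus text cross-attention."""
    def __init__(self, dim=1152, heads=16):
        super().__init__()
        self.spatial_attn  = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn    = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(4))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, text):
        # x: (B, T, N, D) video tokens; text: (B, L, D) projected text tokens
        B, T, N, D = x.shape
        # spatial attention: each frame's N patches attend to each other
        s = x.reshape(B * T, N, D)
        h = self.norms[0](s)
        s = s + self.spatial_attn(h, h, h, need_weights=False)[0]
        # temporal attention: each patch location attends across the T frames
        t = s.reshape(B, T, N, D).transpose(1, 2).reshape(B * N, T, D)
        h = self.norms[1](t)
        t = t + self.temporal_attn(h, h, h, need_weights=False)[0]
        x = t.reshape(B, N, T, D).transpose(1, 2).reshape(B, T * N, D)
        # cross-attention: video tokens query the text tokens
        x = x + self.cross_attn(self.norms[2](x), text, text, need_weights=False)[0]
        x = x + self.mlp(self.norms[3](x))
        return x.reshape(B, T, N, D)

x = torch.randn(2, 8, 256, 1152)  # 2 clips, 8 frames, 16x16 = 256 patches
text = torch.randn(2, 77, 1152)   # e.g. CLIP text tokens projected to dim
out = FactorizedBlock()(x, text)  # (2, 8, 256, 1152)
```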
@FeSens I recommend you look at this
https://arxiv.org/abs/2103.15691
and Google's JAX implementation.
However, the compute required for this, even in latent space, is pretty large. Also, I think you will need to reuse the positional encodings from a pre-trained ViT, because it's very hard to find well-annotated video data. So you will want to start from pre-trained weights and train on less data.
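For example (a sketch, assuming timm is available; the learned temporal embedding and the shape handling are my own guesses), you could lift the spatial positional embedding from a pre-trained image ViT and only learn the temporal part from scratch:

```python
import torch
import torch.nn as nn
import timm  # assumed available for loading pre-trained ViT weights

# Borrow the spatial positional embedding from a pre-trained image ViT.
vit = timm.create_model("vit_base_patch16_224", pretrained=True)
pos = vit.pos_embed.detach()  # (1, 197, 768): class token + 14x14 patches
spatial_pos = pos[:, 1:, :]   # drop the class token -> (1, 196, 768)

# Learn only the temporal embedding from scratch (one vector per frame).
T, D = 8, spatial_pos.shape[-1]
temporal_pos = nn.Parameter(torch.zeros(1, T, 1, D))

def add_positions(x):
    # x: (B, T, N, D) video tokens with N == 196 patches per frame
    return x + spatial_pos.unsqueeze(1) + temporal_pos

tokens = torch.randn(2, T, 196, D)
tokens = add_positions(tokens)  # broadcasts over batch, frames, patches
```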