
Better Co-training Images and Video Method In imagen-video Paper

lxj616 opened this issue 2 years ago · 0 comments

As requested in the README, I'm opening an issue about "how this is done" — you must have seen the paper already:

> We follow Ho et al. (2022b) in jointly training all the models in the Imagen Video pipeline on images and videos. During training, individual images are treated as single-frame videos. We achieve this by packing individual independent images into a sequence of the same length as a video, and bypass the temporal convolution residual blocks by masking out their computation path. Similarly, we disable cross-frame temporal attention by applying masking to the temporal attention maps. This strategy allows us to train our video models on image-text datasets that are significantly larger and more diverse than available video-text datasets.
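The first mechanism — bypassing the temporal convolution residual blocks — can be sketched roughly as below. This is a minimal illustration, not the actual implementation in this repo or in Imagen Video: the block name, the `enable_time` flag, and the `(3, 1, 1)` kernel are all assumptions, but the idea matches the quote: when the "video" is really a pack of independent images, skip the temporal path so no information crosses frames.

```python
import torch
import torch.nn as nn

class TemporalResBlock(nn.Module):
    """Hypothetical temporal residual block that can be bypassed for image batches."""
    def __init__(self, dim):
        super().__init__()
        # convolution over the frame axis only: kernel 3 in time, 1 in space
        self.temporal_conv = nn.Conv3d(dim, dim, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x, enable_time=True):
        # x: (batch, channels, frames, height, width)
        if not enable_time:
            # images packed as "frames": skip the temporal path entirely,
            # so each packed image stays independent of the others
            return x
        return x + self.temporal_conv(x)

# hypothetical usage: pack 8 independent images into one "video" of 8 frames
images = torch.randn(2, 64, 8, 32, 32)   # (b, c, f=8 packed images, h, w)
block = TemporalResBlock(64)
out = block(images, enable_time=False)   # identity: no cross-frame mixing
```

Returning the input unchanged is equivalent to "masking out the computation path" of the residual branch; the spatial layers elsewhere in the network still process each packed image normally.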

https://imagen.research.google/video/paper.pdf
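The second mechanism — disabling cross-frame temporal attention via a mask on the attention maps — could look something like this. Again a sketch under assumptions (toy single-head attention without learned projections, and a hypothetical layout where spatial positions are folded into the batch dimension so attention runs over frames): with an identity mask, each frame attends only to itself, so packed independent images never exchange information.

```python
import torch

def temporal_attention(x, enable_time=True):
    """Toy self-attention over the frame axis, maskable to identity for images.
    x: (batch * height * width, frames, dim)
    """
    q = k = v = x  # no projections, purely for illustration
    scores = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)
    if not enable_time:
        f = x.shape[1]
        # mask out all cross-frame entries: after softmax, each frame
        # attends only to itself with weight 1
        mask = torch.eye(f, dtype=torch.bool, device=x.device)
        scores = scores.masked_fill(~mask, float('-inf'))
    attn = scores.softmax(dim=-1)
    return attn @ v

x = torch.randn(4, 8, 16)                      # 8 "frames" of dim 16
out = temporal_attention(x, enable_time=False)  # identity attention
```

With the diagonal mask, the attention output reduces to the input, which is exactly "disabling cross-frame temporal attention" while keeping the same code path and tensor shapes for images and videos.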

lxj616 · Oct 14 '22 03:10