StoryDiffusion
About some details of paper
Thanks for open-sourcing such a great model. I have a couple of questions about some details of the paper:
- Regarding the training stage of the transition video generation model: did you freeze the Motion Modeling Module taken from AnimateDiff, or fine-tune (SFT) this module together with the Semantic Space Motion Predictor (the Transformer block part)?
- Regarding the WebVid-10M training dataset: as far as I know, almost all videos in this dataset carry similar watermarks in similar positions, and to my limited knowledge, watermarks with such similar features can hurt the model's capability. If you applied any preprocessing, could you share it?
Cheers
Crickets for some reason; I'd like to know as well.
Hi,
Thank you for your interest in the work.
1. We train the motion predictor together with the motion module taken from AnimateDiff; both modules are trainable.
2. Please refer to this script for watermark removal on the WebVid dataset: https://github.com/RoundofThree/python-scripts/blob/1f9455ce9f5832883e1002e73934afa4099a097e/watermark_removal/watermark_remover.py#L188
Regards, Zhou Daquan
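To make the training setup in point 1 concrete, here is a minimal sketch (assuming PyTorch) of training both modules jointly. The `MotionModule` and `MotionPredictor` classes below are hypothetical stand-ins for the AnimateDiff motion module and the semantic-space motion predictor, not the actual StoryDiffusion implementation; the point is only that neither parameter set is frozen and both feed the same optimizer.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the AnimateDiff temporal motion module.
class MotionModule(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, x):
        out, _ = self.temporal_attn(x, x, x)
        return out

# Hypothetical stand-in for the semantic-space motion predictor (Transformer blocks).
class MotionPredictor(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):
        return self.blocks(tokens)

motion_module = MotionModule()
predictor = MotionPredictor()

# Per the authors' reply, nothing is frozen: both modules stay trainable.
for p in motion_module.parameters():
    p.requires_grad = True
for p in predictor.parameters():
    p.requires_grad = True

# Both parameter sets go into one optimizer for joint training.
optimizer = torch.optim.AdamW(
    list(motion_module.parameters()) + list(predictor.parameters()), lr=1e-4
)
```

The hyperparameters (dimension, heads, learning rate) are placeholders, not values from the paper.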
thanks.
@zhoudaquan Hi. Thank you for your great work! I am trying to reproduce your code. Could you please help me clarify some details:
- Do you use consistent self-attention for video training?
- On page 6 there is a figure of the architecture, which says you compress an image (2xHxWx3) into a semantic space of shape 2xNxC. What is N: 257 (the CLIP output) or 1 (a linear projection)?
- What is the sequence length for the motion transformer? If it is FxN, what is N?

Looking forward to your answer.
We encode a single image as N token vectors that represent different pieces of semantic information, and then perform the prediction. Thus, each intermediate frame also corresponds to N tokens in the semantic space.