
About some details of the paper

YahooKID opened this issue

Thanks for open-sourcing such a great model. I have a couple of questions about some details of the paper:

  1. In the training stage of the transition video generation model, did you freeze the Motion Modeling Module taken from AnimateDiff, or fine-tune (SFT) it together with the Semantic Space Motion Predictor (the transformer block part)?
  2. Regarding the WebVid-10M training dataset: as far as I know, almost all videos in this dataset carry similar watermarks in similar positions, and to my limited knowledge such repeated watermark features could hurt the model's capability. If you applied any preprocessing, could you share it?

Cheers

YahooKID · May 11 '24

Crickets for some reason; I'd like to know as well.

311-code · May 17 '24


Hi,

Thank you for your interest in the work.

  1. We train the motion predictor together with the motion module taken from AnimateDiff; both modules are trainable.
  2. Please refer to this repo for watermark removal on the WebVid dataset: https://github.com/RoundofThree/python-scripts/blob/1f9455ce9f5832883e1002e73934afa4099a097e/watermark_removal/watermark_remover.py#L188
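Since WebVid watermarks sit at a roughly fixed position in every clip, a single static mask can be reused across frames. The linked script does proper inpainting; below is only a toy numpy-only sketch (my assumption, not the authors' pipeline) that fills an assumed watermark box with the median color of a surrounding border ring, just to illustrate the fixed-mask idea:

```python
import numpy as np

# Toy fixed-position watermark removal: fill the masked box with the
# median color of a `pad`-pixel border ring around it. A real pipeline
# would use inpainting (e.g. the script linked above) instead.
def remove_watermark(frame: np.ndarray, mask: np.ndarray, pad: int = 8) -> np.ndarray:
    """frame: HxWx3 uint8 image; mask: HxW uint8, 255 where the watermark is."""
    out = frame.copy()
    ys, xs = np.where(mask > 0)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    # Boolean ring of border pixels around the watermark box.
    ring = np.ones(mask.shape, dtype=bool)
    ring[y0:y1, x0:x1] = False          # exclude the watermark box itself
    ring[:max(y0 - pad, 0)] = False     # keep only a pad-wide frame around it
    ring[y1 + pad:] = False
    ring[:, :max(x0 - pad, 0)] = False
    ring[:, x1 + pad:] = False
    fill = np.median(frame[ring], axis=0).astype(frame.dtype)
    out[mask > 0] = fill
    return out

# Example with an assumed watermark region near the frame center.
frame = np.random.randint(0, 256, (180, 320, 3), dtype=np.uint8)
mask = np.zeros((180, 320), dtype=np.uint8)
mask[70:110, 100:220] = 255  # assumed watermark box
clean = remove_watermark(frame, mask)
```

Because the mask is static, the same `mask` array can be applied to every frame of every WebVid clip without per-frame detection.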

Regards, Zhou Daquan

zhoudaquan · May 21 '24


Thanks.

YahooKID · May 22 '24

@zhoudaquan Hi, thank you for your great work! I am trying to reproduce your code. Could you please help me clarify some details:

Do you use consistent self-attention for video training? On page 6 there is a figure of the architecture, which says you compress an image (2×H×W×3) into a semantic space of shape 2×N×C. What is N: 257 (the CLIP output length) or 1 (a linear projection)? And what is the sequence length of the motion transformer? If it is F×N, what is N there?

Looking forward to your answer.

armored-guitar · May 22 '24

We encode a single image as N token vectors to represent different semantic information, and then perform prediction. Thus, each intermediate frame corresponds to N tokens in the semantic space.
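The tensor shapes implied by this answer can be sketched as follows. All sizes here are assumptions for illustration (N=257 would match a CLIP ViT token count, C=1024 is a guessed channel dim), and linear interpolation stands in for the learned motion transformer, which this toy code does not implement:

```python
import numpy as np

# Assumed sizes: N tokens per image, C-dim semantic space, F intermediate frames.
N, C, F = 257, 1024, 16

start_tokens = np.random.randn(N, C)   # first condition image -> N semantic tokens
end_tokens   = np.random.randn(N, C)   # last condition image  -> N semantic tokens

# Predict F intermediate frames, each corresponding to N tokens.
# Stand-in predictor: plain linear interpolation at interior weights.
alphas = np.linspace(0.0, 1.0, F + 2)[1:-1]
frames = np.stack([(1 - a) * start_tokens + a * end_tokens for a in alphas])

print(frames.shape)  # (16, 257, 1024): F frames x N tokens x C channels
```

Under this reading, the motion transformer's input sequence would be the start and end token sets, and its output F×N tokens, one N-token set per intermediate frame.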

Z-YuPeng · May 30 '24