Yujie Wei
> Hi guys, the code is released.
>
> Hello, thanks for releasing the code of DreamVideo, but I am not sure about the following questions:
> 1. ...
@XiaominLi1997 We have released the model weights and provided some examples; please refer to the latest code.
Thanks for your interest. Yes, we feed a single image to the video diffusion model (i.e., the video only has 1 frame) for subject learning. For textual inversion, we freeze...
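The single-image trick described above can be sketched as follows: expanding an image batch along a new temporal axis yields a one-frame video batch. This is a minimal illustration; the axis layout `(B, C, F, H, W)` and the function name are assumptions, not DreamVideo's actual code.

```python
import numpy as np

def image_to_one_frame_video(image: np.ndarray) -> np.ndarray:
    """Treat a single image as a video with exactly one frame.

    image: (B, C, H, W) -> video: (B, C, F=1, H, W), a common layout
    for video diffusion models. The axis order here is an assumption
    for illustration only.
    """
    return np.expand_dims(image, axis=2)

# A dummy 64x64 RGB image batch of size 1 becomes a 1-frame video batch.
img = np.zeros((1, 3, 64, 64), dtype=np.float32)
vid = image_to_one_frame_video(img)
print(vid.shape)  # (1, 3, 1, 64, 64)
```

The same video-model code path can then be reused unchanged for subject learning, since a single image is just the degenerate case of a clip with one frame.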
Thank you for your interest in DreamVideo. We are in the process of organizing the code and will make the code and models public as soon as possible.
The code is released.
> The code is released.
>
> And when will the pretrained models be released?

Hi, the models are released.
Hi, thanks for your interest. We currently do not provide the dataset in our repository because it primarily consists of publicly available datasets. We will let you know here if...
Hello,
1. open_clip_pytorch_model.bin and v2-1_512-ema-pruned.ckpt are the same files used by other projects in VGen, such as TF-T2V and InstructVideo. You can download them from https://modelscope.cn/models/iic/tf-t2v/file/view/master/open_clip_pytorch_model.bin and https://modelscope.cn/models/iic/tf-t2v/file/view/master/v2-1_512-ema-pruned.ckpt.
2. Please check carefully that your environment follows VGen's installation instructions at https://github.com/ali-vilab/VGen?tab=readme-ov-file#installation and that the model weights are loaded correctly.
Hello, thank you for your interest in our work. You can refer to the detailed description of the metric calculations in our paper. If we update the related code in...