Shiwei Zhang

73 comments of Shiwei Zhang

Thank you for your interest in our work. We plan to open-source the training code within this month, along with our other works. Moreover, reproducing it is not very complicated....

I deleted your previous comment. On one hand, we need time to integrate our code and models because there are many differences between our internal code and the open-source code....

Hello, thanks for your attention and suggestions. We do not load the optimizer state when resuming, as we usually find that it only takes a few optimization steps to...
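The resume strategy described above (restore the model weights, start the optimizer fresh) can be sketched roughly as below. This is a minimal illustration assuming a PyTorch checkpoint dict with `"model"` and `"optimizer"` keys; the function name and checkpoint layout are hypothetical, not VGen's actual code.

```python
import torch
import torch.nn as nn

def resume_without_optimizer(model, ckpt_path, lr=1e-4):
    # Restore only the model weights from the checkpoint; ignore any
    # saved optimizer state and create a fresh optimizer instead.
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    return model, optimizer

# Toy round-trip: save a checkpoint, then resume into a new model instance.
trained = nn.Linear(4, 2)
torch.save({"model": trained.state_dict(), "optimizer": {}}, "ckpt.pt")
resumed, opt = resume_without_optimizer(nn.Linear(4, 2), "ckpt.pt")
assert torch.equal(resumed.weight, trained.weight)  # weights restored
assert len(opt.state) == 0                          # optimizer state is fresh
```

Since the optimizer's moment estimates are rebuilt within a few optimization steps anyway, skipping the optimizer state keeps checkpoints smaller and resuming simpler.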

Hello, thank you for your interest. This open-source model does not currently support other resolutions; future releases of our work will support different resolutions.

Hi, you can train the model with `python train.py --cfg configs/t2v_train.yaml`, but you will need to adapt your dataset to the expected format first.

Please refer to the [toy dataset](https://github.com/ali-vilab/VGen/blob/main/data/vid_list.txt) and the [example config](https://github.com/ali-vilab/VGen/blob/main/configs/t2v_train.yaml#L9).
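Preparing a custom dataset list before training might look like the sketch below. The line format here (`video_path|caption`) is purely an assumption for illustration; check `data/vid_list.txt` in the repo for the actual layout expected by the config.

```python
from pathlib import Path

def load_video_list(list_path):
    # Parse a dataset list file into (video_path, caption) pairs.
    # ASSUMED format: one "video_path|caption" entry per line.
    pairs = []
    for line in Path(list_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        video_path, caption = line.split("|", 1)
        pairs.append((video_path.strip(), caption.strip()))
    return pairs

# Usage: write a toy list file and parse it back.
Path("vid_list.txt").write_text("videos/cat.mp4|a cat playing\n")
pairs = load_video_list("vid_list.txt")
assert pairs == [("videos/cat.mp4", "a cat playing")]
```

Once your list file matches the format referenced in the training config, pointing the config's data entry at it should be enough to start `train.py`.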

Hello, thank you for your interest in our work. We have open-sourced the single-stage I2VGen-XL model here, which fully retains the content of the input images. The...

Hi, thanks for your interest in our work. You can refer to [this link](https://github.com/ali-vilab/i2vgen-xl/issues/26) for now to address your concerns. I haven't done a thorough validation yet, but based on...

Hi, you can refer to [this issue](https://github.com/ali-vilab/i2vgen-xl/issues/31).

[ModelScope text-to-video technical report](https://arxiv.org/abs/2308.06571)