Shiwei Zhang
Hi, thank you for your interest in our approach. We are currently working on the V2 version, which will support 16:9 video output and higher resolutions. We will release further...
I can think of two possible reasons: 1) Your GPU memory may be insufficient; the current model inference requires about 28 GB of GPU memory, so please check your machine. 2)...
Thank you for your interest in our method. We are currently optimizing the V2 version, and the watermark-free version of V1 is already available (https://www.modelscope.cn/models/damo/VideoComposer/files). The UI interaction has also...
You need to download the model files we provide first, as described in the instructions.
Hello, our model supports both videos and single frames as input. When the input is a single frame, the value of F is set to 1. As long as the dimensions within each...
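As a hedged illustration of the point above (using NumPy and illustrative variable names, not the repo's actual code), a single image is simply a video whose frame axis F has length 1, so both inputs share one tensor layout:

```python
import numpy as np

# Assumed layout for video diffusion inputs: [batch, frames, channels, height, width].
# The shape convention and names here are illustrative, not VideoComposer's own code.
B, C, H, W = 2, 3, 256, 256

video = np.zeros((B, 16, C, H, W))   # a 16-frame clip
single = np.zeros((B, 1, C, H, W))   # a single static image: F = 1

# Both go through the same pipeline; only the length of the frame axis differs.
print(video.shape[1], single.shape[1])
```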
Thank you for your interest in our work. Unfortunately, we currently have no plans to release the training code.
Thank you for your interest in our work. We did not open-source the corresponding code this time. However, you can try to construct the relevant data yourself and directly replace...
Hi, you can refer to this [line](https://github.com/damo-vilab/videocomposer/blob/main/tools/videocomposer/unet_sd.py#L1234C43-L1234C43), where you will find that we do not share weights between different conditions.
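To make the "no weight sharing" point concrete, here is a minimal sketch (plain NumPy, with hypothetical condition names, not the actual module code): each condition gets its own independently initialized parameters, so updating one leaves the others unchanged.

```python
import numpy as np

# Hedged illustration: one independent parameter block per condition.
# The condition names and shapes are assumptions for this sketch.
rng = np.random.default_rng(0)
conditions = ["sketch", "depth", "motion"]
encoders = {c: rng.standard_normal((4, 4)) for c in conditions}

# Independent arrays: modifying one encoder's weights does not touch the others.
encoders["sketch"] += 1.0
assert not np.shares_memory(encoders["sketch"], encoders["depth"])
```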
Hi, it seems that you are using a single sketch instead of a single static image as the condition for video generation, which results in a different outcome from what...
Hello, here we employ offset noise to enhance video quality; it is a common technique we use during training.
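For readers unfamiliar with the trick, here is a hedged sketch of offset noise in NumPy (the repo itself uses PyTorch; the 0.1 strength and the tensor shape below are illustrative assumptions): a shared per-channel random offset is added on top of the usual per-element Gaussian noise, shifting each channel's mean.

```python
import numpy as np

def offset_noise(shape, strength=0.1, rng=None):
    """Illustrative offset noise: per-element noise plus a per-(batch, channel)
    offset broadcast over the frame/spatial axes. Not the repo's actual code."""
    if rng is None:
        rng = np.random.default_rng()
    b, c = shape[0], shape[1]
    noise = rng.standard_normal(shape)
    # One random scalar per (batch, channel), broadcast over remaining axes.
    offset = rng.standard_normal((b, c) + (1,) * (len(shape) - 2))
    return noise + strength * offset

# Example: noise for a [batch, channels, frames, height, width] latent.
eps = offset_noise((2, 4, 8, 32, 32), strength=0.1)
print(eps.shape)
```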