Shiwei Zhang
Hi, could you share the log after changing the [code](https://github.com/ali-vilab/i2vgen-xl/blob/main/utils/video_op.py#L205C19-L205C19)? ``` cmd = f'ffmpeg -y -f image2 -loglevel quiet -framerate {save_fps} -i {frame_dir}/%04d.png -vcodec libx264 -crf 17 -pix_fmt yuv420p...
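For reference, a minimal sketch of running that command with logging enabled, presumably by dropping `-loglevel quiet` so ffmpeg's errors become visible; the output path and the `save_fps`/`frame_dir` values below are placeholders, not the ones from `video_op.py`:

```python
import subprocess

save_fps = 8                # placeholder: use the value from your config
frame_dir = "/tmp/frames"   # placeholder: directory holding %04d.png frames

# Same command shape as in video_op.py, but without -loglevel quiet,
# so ffmpeg's error output is captured instead of suppressed.
cmd = (
    f"ffmpeg -y -f image2 -framerate {save_fps} "
    f"-i {frame_dir}/%04d.png -vcodec libx264 -crf 17 "
    f"-pix_fmt yuv420p /tmp/out.mp4"
)
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.stderr)  # this is the log to paste back into the issue
```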
Hi, you can try: ``` sudo yum install ffmpeg ffmpeg-devel -y ``` and then test it with: ``` ffmpeg -h ```
Thank you for your valuable suggestions. Stable Video Diffusion is a powerful model; we also intend to compare our method against SVD, and we will make these...
Currently, we have only developed and validated on the A100. We are now preparing machines with V100 and A10 GPUs, and we hope to eventually be compatible with all three...
The technical report for I2VGen-XL is currently being written and will cover more details. As for the outpainting task, there are no plans for that extension at the moment. However, implementing this...
Both the code and the model support it; please refer to this [line](https://github.com/damo-vilab/videocomposer/blob/main/tools/videocomposer/unet_sd.py#L1581). You can pass the masked video tensor as an argument.
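Illustratively, a minimal sketch of building such a masked video tensor in PyTorch; the shapes and the keyword name in the commented call are assumptions, so check the linked line in `unet_sd.py` for the exact argument:

```python
import torch

# Assumed layout: batch x channels x frames x height x width.
video = torch.randn(1, 3, 16, 256, 256)   # source clip
mask = torch.zeros(1, 1, 16, 256, 256)    # 1 = region to keep
mask[..., 64:192, 64:192] = 1.0           # keep a center crop

# Zero out the masked-away region to form the masked-video condition.
masked_video = video * mask

# Hypothetical call; the real keyword is defined at unet_sd.py#L1581.
# out = unet(x_t, t, y=text_emb, masked=torch.cat([masked_video, mask], dim=1))
```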
Thank you for your interest. Unfortunately, we do not currently have plans to release the training code publicly. However, you can refer to our technical paper for the...
Thank you for your attention. Unfortunately, we currently have no plans to release the code for this part of the training. However, we will soon be releasing more video...
We did not release our data this time. However, you can use the motion vectors as a reference to construct similar movements and generate videos with corresponding text input.
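As a rough sketch of what constructing such a motion reference could look like: a synthetic per-frame motion-vector field (here, uniform rightward motion) built in PyTorch. The tensor layout and value scale are assumptions for illustration, not the exact format our pipeline expects:

```python
import torch

frames, height, width = 16, 256, 256

# Motion vectors as per-pixel (dx, dy) displacements, one map per frame.
motion = torch.zeros(frames, 2, height, width)
motion[:, 0] = 4.0  # dx: constant rightward motion of 4 px per frame
motion[:, 1] = 0.0  # dy: no vertical motion

# This tensor would then serve as the motion condition, alongside the
# text prompt, when sampling the video.
```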
Thank you for your interest in our work. We didn't include the data with the code release, but you can try it with your own data. Recently, we have also released...