Jay Z. Wu
Hi folks, thank you for your great efforts in integrating Tune-A-Video into diffusers. We have made some updates to [our implementation](https://github.com/showlab/Tune-A-Video), resulting in improved consistency. We hope that these changes...
Hi @Abhinay1997, have you been able to solve the problem? If not, could you share your code with me so that I can assist you further?
Thank you for the comment. Currently, Tune-A-Video only generates short video clips. The creation of long-form videos with complex dynamics is a challenging task, and we are actively exploring this...
Hi folks, we're excited to announce that we've made updates to our codebase that have resulted in improved video consistency. These enhancements would be beneficial for generating longer videos (e.g.,...
Just wondering what kind of custom models you are using?
Hi @HyeonHo99, thank you for your interest in our work. Below are some comments regarding your questions: 1. Here we set the number of frames to 24 that the code...
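For reference, a minimal sketch of where that frame count lives; the key names (`n_sample_frames`, etc.) and the example path below are assumptions for illustration and may not match the repository's current config files:

```python
# Hypothetical sketch of the training-data settings; the key names are
# assumptions about the Tune-A-Video YAML configs, not guaranteed to match.
train_data = {
    "video_path": "data/man-skiing.mp4",  # source clip to tune on
    "n_sample_frames": 24,                # frames sampled from the input video
    "width": 512,
    "height": 512,
}
print(train_data["n_sample_frames"])  # -> 24
```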
We have recently expanded the evaluation benchmark in the Tune-A-Video paper and released a new benchmark for text-guided video editing, namely the LOVEU-TGVE dataset. Please follow [these instructions](https://sites.google.com/view/loveucvpr23/track4?authuser=0#h.qxkmfzt41fbc) to download the...
Are you using the default parameter settings? You could try lowering the learning rate and check whether the loss behaves normally.
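For example, a minimal sketch of that debugging step (the optimizer choice and the learning-rate values here are assumptions for illustration, not the repository's exact settings):

```python
import torch

# Stand-in module for the trainable attention parameters.
model = torch.nn.Linear(8, 8)

# If the loss diverges or plateaus with the default settings, try reducing
# the learning rate by an order of magnitude (e.g. 3e-5 -> 3e-6) and
# re-check the loss curve before changing anything else.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)
```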
Thanks @minhhieu50050 for your interest in our work! Our code is based on diffusers 0.11.1; you can find the latest version of diffusers [here](https://github.com/huggingface/diffusers).
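As a quick sanity check, you can confirm which diffusers version is installed (a minimal sketch using the standard `__version__` attribute):

```python
import diffusers

# The Tune-A-Video code targets diffusers 0.11.1; newer releases may
# include breaking API changes.
print(diffusers.__version__)
```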
Please refer to https://github.com/showlab/Tune-A-Video/issues/56#issuecomment-1514221869.