You can try upgrading peft to 0.14.0, along with transformers==4.48.3 and torch==1.13.1.
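If it helps, here is a minimal sketch for confirming which versions are actually picked up in your environment (assuming all three packages import cleanly):

```python
# Quick sanity check that the suggested versions are the ones in use.
import peft
import torch
import transformers

print("peft:", peft.__version__)                   # expected: 0.14.0
print("transformers:", transformers.__version__)   # expected: 4.48.3
print("torch:", torch.__version__)                 # expected: 1.13.1
```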
We followed the settings provided in the [official GitHub repo](https://github.com/THUDM/CogVideo?tab=readme-ov-file#diffusers), using the default parameter values defined in [`inference/cli_demo.py`](https://github.com/THUDM/CogVideo/blob/main/inference/cli_demo.py) as a reference, modifying only the following arguments: `height=480`, `width=720`, `num_frames=49`,...
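For reference, a minimal sketch of how those arguments map onto the diffusers CogVideoX pipeline; the model id and prompt below are placeholders, and every parameter not listed above is left at its default:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Placeholder model id for illustration; pick the checkpoint you are evaluating.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe.to("cuda")

video = pipe(
    prompt="a prompt from the evaluation set",  # placeholder prompt
    height=480,
    width=720,
    num_frames=49,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```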
Sorry for the inconvenience! Here's the link to the sampled video output: https://drive.google.com/file/d/1FSAccPXyJR_uw5ldkQJAMzIVphLRuh39/view?usp=drive_link
Could you provide the package list of your conda environment (e.g., the output of `pip list`)? @Cuogeihong
We reproduced the environment from the pip list you provided and still got the same results as shown on the leaderboard. Could you please share the exact command you used, as well as...
Hi, thanks for your question. First, we used YOLO-World to detect three categories of object patches from all video frames. To remove redundancy, we computed SSIM between patches from the...
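To illustrate the redundancy-removal step, here is a minimal sketch of SSIM-based patch deduplication; the similarity threshold and helper name are assumptions for illustration, not the exact values used in the pipeline:

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical threshold for illustration; the pipeline's actual value may differ.
SSIM_THRESHOLD = 0.9  # patches above this similarity are treated as duplicates

def deduplicate_patches(patches: list[np.ndarray]) -> list[np.ndarray]:
    """Keep a patch only if it is not too similar (by SSIM) to any kept patch.

    Assumes all patches are uint8 RGB arrays resized to a common size.
    """
    kept: list[np.ndarray] = []
    for patch in patches:
        is_duplicate = any(
            structural_similarity(patch, ref, channel_axis=-1) > SSIM_THRESHOLD
            for ref in kept
        )
        if not is_duplicate:
            kept.append(patch)
    return kept
```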
Thanks for your feedback. In [sampled_videos](https://github.com/Vchitect/VBench/tree/master/sampled_videos), we maintain a table that lists all the video models we have sampled and the Google Drive links where the results are stored. We...