FanJ
When I tried to extract this tar.gz file, it threw an error:

```
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
```
...
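For what it's worth, this error usually means the file is not actually gzip-compressed (for example, an interrupted download or an HTML error page saved under a `.tar.gz` name). A quick way to check is the `file` command. A minimal self-contained sketch (`demo.tar.gz` is a stand-in; run `file` on your real download):

```shell
# Build a small known-good tarball, inspect its type, then extract it.
# If `file` on your real download reports "HTML document" or "data"
# instead of "gzip compressed data", the download itself is broken.
mkdir -p demo && echo "hello" > demo/sample.txt
tar -czf demo.tar.gz demo
file demo.tar.gz
tar -xzf demo.tar.gz -C .
```

Re-downloading the archive (or checking its size against the source) is usually the fix when `file` shows it is not gzip data.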
Thanks for your reply! I also want to confirm: you used the SAT version for evaluation, right?
> yes

Thanks a lot!
> > Could you provide the details of the model checkpoint and sampling setting?
>
> Model weights were downloaded from https://huggingface.co/THUDM/CogVideoX-2b/tree/main, and the inference code is inference/cli_demo.py from the CogVideoX-2b repo...
> See https://github.com/PKU-YuanGroup/Open-Sora-Plan/blob/main/scripts/text_condition/gpu/sample_t2v.sh

Thank you! If I want to sample a video of size 93×1280×720, can I just modify num_frames, height, and width while keeping the other parameters unchanged?
> Emm, num_frames needs to be changed to 161, with export_to_video(video, output_path, fps=16).

Also, have you tried whether five-second generation works normally? I want to make sure that the setting for...
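If it helps, the relationship between num_frames and clip length implied here can be sanity-checked in a few lines (fps=16 comes from the comment above; treating duration as num_frames / fps is my assumption):

```python
# Sanity-check the frame count vs. clip duration mentioned in the thread:
# 161 frames exported at fps=16 gives roughly a 10-second video.
fps = 16
num_frames = 161
duration_s = num_frames / fps
print(f"{num_frames} frames at {fps} fps -> {duration_s:.4f} s")

# By contrast, 93 frames at the same fps is well under 6 seconds:
print(f"93 frames at {fps} fps -> {93 / fps:.4f} s")
```

So if the goal is a ~10-second clip at 16 fps, only changing height and width while keeping num_frames=93 would not be enough.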
prompt: A focused individual sits at a sleek, modern desk in a dimly lit room, illuminated by the soft glow of a high-resolution computer screen. They wear a cozy, oversized...
And each 10-second video generation takes 40 minutes. Is that a reasonable duration in your experience? Looking forward to your reply!
I also tried adding `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` in `inference.sh`, but got this error:

```
##############################
Sampling setting
##############################
Sampler: VPSDEDPMPP2MSampler
Discretization: ZeroSNRDDPMDiscretization
Guider: DynamicCFG
Sampling with VPSDEDPMPP2MSampler for 51 steps:  98%|█████████▊| 50/51 [44:23...
```