LeoXing1996
Hey @YAY-3M-TA3, when performing inference with mps, what precision did you use? `torch.float16` or `torch.float32`?
Per https://huggingface.co/docs/diffusers/optimization/mps, you might also try enabling attention slicing (`pipe.enable_attention_slicing()`) for your inference.
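For reference, a minimal sketch of the dtype choice being asked about: `torch.float16` on MPS has been unreliable in some PyTorch/diffusers versions, so falling back to `torch.float32` there is a common workaround. The `pick_dtype` helper below is hypothetical, not part of PIA; with diffusers you would pass the resulting dtype as `torch_dtype=` to `from_pretrained` and then call `pipe.enable_attention_slicing()`.

```python
import torch

def pick_dtype(device: str) -> torch.dtype:
    # Hypothetical helper: half precision is generally safe on CUDA,
    # but fp16 on Apple's MPS backend (and on CPU) has produced
    # black/NaN outputs in some versions, so prefer fp32 there.
    return torch.float16 if device == "cuda" else torch.float32

# Select the backend actually available on this machine.
device = "mps" if torch.backends.mps.is_available() else "cpu"
dtype = pick_dtype(device)
print(f"device={device}, dtype={dtype}")
```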
@zekicinar Thank you for your interest in our work, and I apologize for the delayed response. Could you please specify the commands you executed when generating the video, such as...
@anthonyyuan Thank you for your interest in this project. This error appears to be caused by a mismatched `transformers` version. We recommend using `transformers==4.25.1` for PIA. You can check your...