Video-ChatGPT
Why is inference so slow even though I am using an A100 80G?
I downloaded all the files and data locally, but GPU memory usage did not increase when the model was loaded, and inference was very slow, especially while the initialize_model function was running. Also, I don't currently have a complete single_video_inference.py file.
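One thing worth checking in this situation is whether the model weights actually ended up on the GPU; if they stay on the CPU, memory usage on the card won't grow and inference will crawl. Below is a minimal sketch using plain PyTorch calls (not the official single_video_inference.py). The `initialize_model` name and its return order in the commented usage are taken from the question and are assumptions, not the confirmed API.

```python
# Minimal diagnostic sketch, assuming a standard PyTorch nn.Module.
import torch


def check_gpu_placement(model):
    # Collect the device types of all parameters; if only "cpu" shows up,
    # the weights were never moved onto the A100.
    devices = {p.device.type for p in model.parameters()}
    print("parameter devices:", devices)
    if torch.cuda.is_available():
        print("GPU memory allocated (GB):",
              torch.cuda.memory_allocated() / 1024 ** 3)
    return "cuda" in devices


# Hypothetical usage (return values of initialize_model are assumed):
# model, *rest = initialize_model(model_name, projection_path)
# if not check_gpu_placement(model):
#     # Move weights to the GPU in half precision to cut memory and speed up inference.
#     model = model.to("cuda", dtype=torch.float16)
```

If the check reports only CPU parameters, adding an explicit `.to("cuda")` (or fixing the device argument passed during loading) is usually what restores normal inference speed.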