Can I run gradio_app.py on my local RTX 4090 machine?
I run gradio_app.py on Ubuntu with an RTX 4090. When I upload an image and generate a video, it raises the error below:
RuntimeError: CUDA out of memory. Tried to allocate 1.07 GiB (GPU 0; 23.65 GiB total capacity; 18.79 GiB already allocated; 671.44 MiB free; 20.70 GiB reserved in total by PyTorch)
How can I run it successfully on my local machine?
Hi, thanks for your attention to our work. You can try the solution here.
I can run inference when I set model = model.half().to(gpu). What I mean is that I still don't know how to run the Gradio app successfully locally; I just figured out that there is no pretrained model for it yet, it just uses the pipeline.
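For reference, the fp16 pattern I mean looks roughly like the sketch below, with a toy module standing in for the real pipeline in this repo:

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0")

# Toy stand-in for the real model built by gradio_app.py.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
model = model.half().to(device)  # fp16 weights/activations roughly halve VRAM usage
model.eval()

# Inputs must be cast to the same dtype as the model.
x = torch.randn(1, 3, 512, 512, device=device, dtype=torch.float16)
with torch.no_grad():
    out = model(x)
print(out.dtype, out.shape)
```

Casting to fp16 is often enough to squeeze a ~20 GiB fp32 workload onto a 24 GiB card.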
Hi, thanks for your attention to our work. You can try the solution here.
By the way, can we use Accelerate and DeepSpeed ZeRO-3 if we want to save GPU memory for inference? Is there any solution?
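For context, something like Hugging Face Accelerate's `cpu_offload` is the kind of trade-off I have in mind (slower inference in exchange for lower VRAM), independent of DeepSpeed ZeRO-3. A rough sketch, with a toy module again standing in for the actual pipeline:

```python
import torch
import torch.nn as nn
from accelerate import cpu_offload

device = torch.device("cuda:0")

# Toy stand-in for the real video pipeline; fp16 and CPU offload combined.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
).half().eval()

# Weights stay in CPU RAM; each submodule is copied to the GPU only while it executes.
cpu_offload(model, execution_device=device)

x = torch.randn(1, 3, 512, 512, dtype=torch.float16, device=device)
with torch.no_grad():
    out = model(x)
print(out.shape)
```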
Hi, thanks for your attention to our work. You can try the solution here.
I think this issue can be closed. I just figured out how to run Gradio on my local RTX 4090 machine with cog predict, thanks.
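For anyone hitting the same problem, the Cog route is just the standard `cog predict` CLI; the `image` input name below is an assumption, so check this repo's predict.py for the actual input signature:

```bash
# Builds the container if needed and runs a single prediction.
cog predict -i image=@input.png
```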