torch.cuda.OutOfMemoryError: CUDA out of memory.
Hello, I've installed the webui with SadTalker, but when I process an image with it, it shows an error:
The file may be malicious, so the program is not going to read it. You can skip this check with --disable-safe-unpickle commandline argument.
Traceback (most recent call last):
  File "C:\Program Files\ai\stable-diffusion-webui\Matalen\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Program Files\ai\stable-diffusion-webui\Matalen\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "C:\Program Files\ai\stable-diffusion-webui\Matalen\lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Program Files\ai\stable-diffusion-webui\Matalen\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Program Files\ai\stable-diffusion-webui\Matalen\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Program Files\ai\stable-diffusion-webui\Matalen\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Program Files\ai\stable-diffusion-webui\modules\call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "C:\Program Files\ai\stable-diffusion-webui/extensions/SadTalker\src\gradio_demo.py", line 79, in test
    self.animate_from_coeff = AnimateFromCoeff(self.free_view_checkpoint, self.mapping_checkpoint,
  File "C:\Program Files\ai\stable-diffusion-webui/extensions/SadTalker\src\facerender\animate.py", line 61, in __init__
    self.load_cpk_facevid2vid(free_view_checkpoint, kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator)
  File "C:\Program Files\ai\stable-diffusion-webui/extensions/SadTalker\src\facerender\animate.py", line 88, in load_cpk_facevid2vid
    generator.load_state_dict(checkpoint['generator'])
TypeError: 'NoneType' object is not subscriptable
I can't figure out how to add the --disable-safe-unpickle line to webui-user.bat. Sorry, I'm really new to all of this. Can someone help me through this, please?
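For reference: webui command-line flags go on the COMMANDLINE_ARGS line of webui-user.bat, which you can open in Notepad. A minimal sketch, assuming an otherwise stock file:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--disable-safe-unpickle

    call webui.bat

Keep in mind that --disable-safe-unpickle turns off the check that produced the warning above, so only use it with checkpoint files you trust.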
Okay, I figured it out: I didn't need to unzip the checkpoints. Now I've got another issue:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.38 GiB already allocated; 0 bytes free; 5.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What can I do, please?
It means you need more GPU memory. You could try setting batch_size to 1 when you run the inference.
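If you want to see the numbers from the error message for yourself, here is a small Python sketch (run it in the webui's Python environment; it assumes your card is CUDA device 0):

    import torch

    # Mirror the figures printed in the OutOfMemoryError message:
    # total capacity vs. what PyTorch has allocated and reserved.
    props = torch.cuda.get_device_properties(0)
    print(f"total capacity: {props.total_memory / 2**30:.2f} GiB")
    print(f"allocated:      {torch.cuda.memory_allocated(0) / 2**30:.2f} GiB")
    print(f"reserved:       {torch.cuda.memory_reserved(0) / 2**30:.2f} GiB")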
How can I do that, please? I can't write in cmd.
Should I edit webui-user.bat in Notepad?
You could find batch_size = 2 in src/gradio_demo.py. Just change batch_size = 2 to batch_size = 1.
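A minimal sketch of that edit (the exact position of the line in src/gradio_demo.py may differ between versions):

    # src/gradio_demo.py
    batch_size = 1  # was 2; smaller batches need less VRAM at once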
Maybe another way: set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 in webui-user.bat, and reduce the number of video frames to below 150 (fewer frames generally means a shorter driving audio clip).
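Concretely, that line goes in webui-user.bat before webui.bat is called (a sketch; leave the rest of your file as it is):

    rem Cap the allocator's split size to reduce VRAM fragmentation
    set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

The value format is option:value with no spaces; multiple options would be separated by commas.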
Thanks for the answers! :) I tried Winfredy's advice, but the same issue occurred. I'll try adding max_split_size_mb. However, where do I reduce the video frames, please?
Hello, have you solved your problem? I am currently facing the same issue.
Hello, I came to the conclusion that my GPU is not powerful enough to run Stable Diffusion on its own, so I'm using the Hugging Face Space instead. At least I can make it work with it now.
Okay, thank you.
Hi guys, I have installed Stable Diffusion 1.5 and it's working fine, and I have also added SadTalker. SadTalker is installed properly, but I get one issue:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have set batch_size to 1, added set PYTORCH_CUDA_ALLOC_CONF= max_split_size_mb: 60, and installed NVIDIA CUDA, but none of the above works. Please, guys, I need help.
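One thing worth double-checking there: in a .bat file, everything after the = becomes part of the variable's value, and PyTorch matches the option name literally, so stray spaces can stop it from being recognized. Assuming the spaces above are not just a transcription artifact, the line should read:

    set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:60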
This works for me with 4 GB of GPU VRAM. Add this to webui-user.bat:
set COMMANDLINE_ARGS=--lowvram
@leovoon Yeah, using --medvram or --lowvram is useful for me, but I wanted to maximize my GPU usage; even with --medvram the speed is good enough. Using --xformers also helps, but it reduces the creativity.
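For reference, those flags combine on the single COMMANDLINE_ARGS line in webui-user.bat; pick whichever combination suits your card, e.g.:

    set COMMANDLINE_ARGS=--medvram --xformers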
Got the same error with the following info:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 2.00 GiB total capacity; 1021.38 MiB already allocated; 56.43 MiB free; 1.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
@toniedeng I already set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 in webui-user.bat but still got the above error. Any other suggestions? My GPU is an NVIDIA GeForce MX110; is it able to run this? My ARGS setting is --lowvram.
Thanks
Found that to run SadTalker locally an 8 GB GPU is needed, or 4 GB at minimum, so my NVIDIA GeForce MX110 can't do it. I suggest using Colab instead, but be sure to fork the notebook and change "Winfredy" to your own username so you can work with your own photos:
https://colab.research.google.com/github/Winfredy/SadTalker/blob/main/quick_demo.ipynb