
OutOfMemoryError

Open · AugustLigh opened this issue 7 months ago · 1 comment

The repository says the model was successfully launched on a GeForce RTX 3050 Ti Laptop GPU with 4 GB of VRAM. I have an RTX 4050 with 6 GB of VRAM and got this error:

```
python app.py --use_float16
All required model files exist.
2025-05-06 05:02:05.584729: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-05-06 05:02:06.253567: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Loads checkpoint by local backend from path: ./models/dwpose/dw-ll_ucoco_384.pth
cuda start
An error occurred while trying to fetch models/sd-vae: Error no file named diffusion_pytorch_model.safetensors found in directory models/sd-vae.
Defaulting to unsafe serialization. Pass allow_pickle=False to raise an error instead.
load unet model from ./models/musetalkV15/unet.pth
Traceback (most recent call last):
  File "/home/avgust/MuseTalk/app.py", line 392, in <module>
    vae, unet, pe = load_all_model(
  File "/home/avgust/MuseTalk/musetalk/utils/utils.py", line 25, in load_all_model
    unet = UNet(
  File "/home/avgust/MuseTalk/musetalk/models/unet.py", line 48, in __init__
    self.model.to(self.device)
  File "/home/avgust/miniconda3/envs/MuseTalk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/home/avgust/miniconda3/envs/MuseTalk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/avgust/miniconda3/envs/MuseTalk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/avgust/miniconda3/envs/MuseTalk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 7 more times]
  File "/home/avgust/miniconda3/envs/MuseTalk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/avgust/miniconda3/envs/MuseTalk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 5.65 GiB total capacity; 5.42 GiB already allocated; 44.75 MiB free; 5.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

I don't know what the problem could be. I'm using Ubuntu 25.04.

I also tried `sh inference.sh v1.5 normal` and `sh inference.sh v1.0 normal`; the result is the same.

AugustLigh avatar May 06 '25 01:05 AugustLigh

Hi @AugustLigh, can you check if your GPU has full available memory while running the program? I mean, make sure no other programs are using the GPU memory so that it has the entire memory available for this task. Our GPU has 16GB of shared memory, but it needs to be fully available for the program to run smoothly.


zzzweakman avatar May 10 '25 05:05 zzzweakman