OutOfMemoryError
The repository says the model was launched successfully on a GeForce RTX 3050 Ti Laptop GPU with 4GB of VRAM. I have an RTX 4050 with 6GB of VRAM and got an error:
python app.py --use_float16
All required model files exist.
2025-05-06 05:02:05.584729: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-05-06 05:02:06.253567: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Loads checkpoint by local backend from path: ./models/dwpose/dw-ll_ucoco_384.pth
cuda start
An error occurred while trying to fetch models/sd-vae: Error no file named diffusion_pytorch_model.safetensors found in directory models/sd-vae.
Defaulting to unsafe serialization. Pass allow_pickle=False to raise an error instead.
load unet model from ./models/musetalkV15/unet.pth
Traceback (most recent call last):
File "/home/avgust/MuseTalk/app.py", line 392, in
I don't know what the problem could be. I'm using Ubuntu 25.04.
I also tried "sh inference.sh v1.5 normal" and "sh inference.sh v1.0 normal"; the result is the same.
Hi @AugustLigh, can you check whether your GPU memory is fully available while the program is running? That is, make sure no other processes are using GPU memory, so the entire capacity is free for this task. Our GPU has 16GB of shared memory, but it needs to be fully available for the program to run smoothly.
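As a quick way to verify this, the sketch below prints the free and total VRAM reported by the driver before any model is loaded. It is only a minimal example using standard PyTorch calls (torch.cuda.mem_get_info); it is not part of MuseTalk and assumes PyTorch is installed in the same environment you use for app.py:

```python
# check_vram.py - hypothetical helper, not part of the MuseTalk repository.
# Prints free vs. total VRAM so you can see how much memory other processes hold.
import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()  # current CUDA device
    print(f"GPU:        {torch.cuda.get_device_name(0)}")
    print(f"Free VRAM:  {free_bytes / 1024**3:.2f} GiB")
    print(f"Total VRAM: {total_bytes / 1024**3:.2f} GiB")
else:
    print("CUDA is not available in this environment.")
```

Running `nvidia-smi` gives the same picture and additionally lists which processes (for example the desktop compositor or a browser) are currently holding VRAM.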