
No GPU being used

Open: jueljust opened this issue 1 year ago • 3 comments

```
/home/gucci/miniconda3/lib/python3.11/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
/home/gucci/miniconda3/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
/home/gucci/miniconda3/lib/python3.11/site-packages/transformers/models/encodec/modeling_encodec.py:120: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer("padding_total", torch.tensor(kernel_size - stride, dtype=torch.int64), persistent=False)
```

NVIDIA-SMI 535.104.12, Driver Version 535.104.12, CUDA Version 12.2, PyTorch 2.1.0, Python 3.11, Ubuntu 20.04.6

`torch.cuda.is_available()` returns `True`,

but no process shows up in `nvidia-smi`, and inference is very slow: it takes more than 300 seconds to generate a 4-second WAV. It looks like there is no GPU acceleration.
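
As a sanity check, independent of Bark, a small PyTorch-only sketch like the one below can confirm the GPU is actually usable: it runs a matmul on `cuda` and pauses so you can watch `nvidia-smi` while the process is alive. If this allocates GPU memory, the driver/PyTorch side is fine and the problem is in how Bark picks its device.

```python
import torch

# Confirm the installed PyTorch build can see the card
print(torch.__version__, torch.version.cuda)   # version string and the CUDA toolkit the wheel was built with
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))

# Do some real work on the GPU and keep the process alive,
# so it shows up under "Processes" in nvidia-smi
x = torch.randn(4096, 4096, device="cuda")
y = x @ x
torch.cuda.synchronize()
print(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1e6:.1f} MB")
input("Check nvidia-smi in another terminal, then press Enter...")
```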

jueljust commented on Jun 18 '24

```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.12             Driver Version: 535.104.12   CUDA Version: 12.2      |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC  |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M.  |
|                                         |                      |               MIG M.  |
|=========================================+======================+=======================|
|   0  Tesla P4                       On  | 00000000:01:00.0 Off |                    0  |
| N/A   47C    P8               7W /  75W |      0MiB /  7680MiB |      0%      Default  |
|                                         |                      |                  N/A  |
+-----------------------------------------+----------------------+-----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory  |
|        ID   ID                                                              Usage       |
|=========================================================================================|
|  No running processes found                                                             |
+---------------------------------------------------------------------------------------+
```

jueljust commented on Jun 18 '24

Make sure you have the NVIDIA CUDA drivers installed. Then install the required PyTorch version from here directly into the Bark folder, similar to this:

`pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 --target c:\AI\Bark-Voice\ --upgrade`
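
After that, it is worth double-checking which `torch` actually gets imported and whether it is a CUDA build. A quick sketch (the `c:\AI\Bark-Voice\` path is just the example from above):

```python
import torch

# Confirm the copy of torch being imported is the one installed into the Bark folder,
# and that it is a CUDA build (CPU-only wheels report torch.version.cuda as None).
print(torch.__file__)      # should point inside c:\AI\Bark-Voice\ if the --target install is the one in use
print(torch.__version__)   # e.g. "2.x.y+cu124" for a CUDA 12.4 wheel, "2.x.y+cpu" for a CPU-only one
print(torch.version.cuda)
```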

aristides86 commented on Aug 11 '24

I had to explicitly tell the model to use the GPU with `device_map`, and move the processor output to the same device with `.to(model.device)`:

import torch
from transformers import AutoProcessor, BarkModel

model_name = 'suno/bark'
wav_processor = AutoProcessor.from_pretrained(model_name)
# device_map='cuda' loads the model weights directly onto the GPU
wav_model = BarkModel.from_pretrained(model_name, device_map='cuda', torch_dtype=torch.float32)
# the processor output has to be moved to the same device as the model
inputs = wav_processor(sentence, voice_preset="v2/en_speaker_6").to(wav_model.device)
audio_array = wav_model.generate(**inputs).cpu().numpy().squeeze().tolist()
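
To actually listen to the result, a minimal follow-up sketch for dumping `audio_array` to disk (assuming the output rate is exposed via the model's `generation_config`, as in the transformers Bark examples; `bark_out.wav` is an arbitrary name):

```python
import numpy as np
import scipy.io.wavfile

sample_rate = wav_model.generation_config.sample_rate   # Bark generates 24 kHz audio
scipy.io.wavfile.write("bark_out.wav", rate=sample_rate,
                       data=np.asarray(audio_array, dtype=np.float32))
```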

iv2985 commented on Oct 18 '24