Enable MPS (if available) by default on Mac
MPS significantly speeds up inference. This PR would enable it by default when it is available, allowing inference without having to enable MPS manually.
Hi @fakerybakery, I'd suggest setting this up as a local environment variable, since not every user is on a Mac and this probably shouldn't be the default for them.
Just do:
export SUNO_ENABLE_MPS=true
in the terminal where you start your Python. You can also add it to your bash_profile.
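For reference, the one-off export and the persistent variant look like this (note: on recent macOS the default shell is zsh, so ~/.zshrc would be the file to edit instead of ~/.bash_profile):

```shell
# One-off: set the flag in the shell session where you will launch Python
export SUNO_ENABLE_MPS=true

# Persist it for future bash sessions (use ~/.zshrc if your shell is zsh)
echo 'export SUNO_ENABLE_MPS=true' >> ~/.bash_profile
```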
Hello!
I set this in the Terminal:
export SUNO_ENABLE_MPS=true
That did not help and my code still runs on CPU.
Don't we instead need to add a:
device = "mps"
with some XXX.to(device)
here and there?
Thank you !!
O.
I have the following set, and it goes to the GPU:
import os
# These flags are read at import time, so set them before importing bark
# (which imports torch)
os.environ["SUNO_ENABLE_MPS"] = "True"
os.environ["SUNO_USE_SMALL_MODELS"] = "False"
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["SUNO_OFFLOAD_CPU"] = "False"
import numpy as np
from scipy.io import wavfile
from scipy.io.wavfile import write as write_wav
from bark.generation import (
generate_text_semantic,
preload_models,
)
from bark.api import semantic_to_waveform
from bark import generate_audio, SAMPLE_RATE
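A quick way to sanity-check whether PyTorch itself can see the MPS backend, independently of Bark, is the snippet below; the try/except is only there so it degrades gracefully on machines where torch is not installed:

```python
import os

# Must be set before torch/bark are imported, since Bark reads it at import time
os.environ["SUNO_ENABLE_MPS"] = "True"

try:
    import torch
    # is_built() reports whether MPS support was compiled into this torch build;
    # is_available() reports whether this machine can actually use it.
    mps_ok = torch.backends.mps.is_built() and torch.backends.mps.is_available()
except ImportError:
    mps_ok = False  # torch not installed in this environment

print("MPS usable:", mps_ok)
```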
Dear @QueryType ,
THANK YOU!! Super to answer me so quickly!! And... happy new year!
I am still running on CPU:
—— I am using an M1 Max, macOS Ventura 13.6.3, python 3.11.7 (yes, not 3.10), torch 2.1.2, torchaudio 2.1.2, transformers 4.36.2, optimum 1.16.1 and suno-bark @ git+https://github.com/suno-ai/bark.git.
—— I still run on CPU when using bark's generate_audio, or bark.api's semantic_to_waveform and text_to_semantic.
I did not change my code and just added:
os.environ["SUNO_ENABLE_MPS"] = "True"
os.environ["SUNO_USE_SMALL_MODELS"] = "False"
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["SUNO_OFFLOAD_CPU"] = "False"
No error message, just a warning saying I am running on CPU.
.to("mps")
does not help even though "SUNO_ENABLE_MPS" is indeed set.
—— I can however get the GPU running by moving the processor directly with XXX.to("mps").
After a little while, I then get the error:
The operator 'aten::weight_norm_interface' is not currently implemented for the MPS device.
Even with "PYTORCH_ENABLE_MPS_FALLBACK=1".
I saw the issue was apparently not really clarified in that forum.
We need to set these environment variables before importing torch or any library that imports it. I hope you have that. So make sure to import os first and set them immediately, as I show above. Happy new year!!
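The point about ordering can be illustrated without Bark or torch at all: a module-level check captures the environment at import time, so setting the variable afterwards changes nothing. A minimal simulation (read_flag_at_import is a made-up stand-in for Bark's module-level code):

```python
import os

def read_flag_at_import():
    # Stand-in for Bark's module-level environment check
    return os.environ.get("SUNO_ENABLE_MPS", "False") == "True"

# Wrong order: the "import" happens first, the variable is set afterwards
os.environ.pop("SUNO_ENABLE_MPS", None)
flag_wrong_order = read_flag_at_import()   # False: the flag was not set yet
os.environ["SUNO_ENABLE_MPS"] = "True"

# Right order: the variable is already set when the module-level code runs
flag_right_order = read_flag_at_import()   # True

print(flag_wrong_order, flag_right_order)  # False True
```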
Dear @QueryType ,
I wish you a happy new year! Yes indeed, you were right and it now works.
My mistake: I had not set the environment variables BEFORE importing PyTorch.
Thank you very much for the great help!
O.