piper
Does piper support AMD GPU acceleration with rocm?
I can see that ONNX Runtime supports AMD ROCm; please read:
https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/install-onnx.html
But how can I get it integrated into Piper?
@eliranwong
From what I know, you need to modify and rebuild Piper, setting the execution providers to your chosen ones.
You also need to provide an onnxruntime build that includes those providers.
Best, Musharraf
I appreciate your reply and help. May I ask for more information about:
- how to set the execution providers when rebuilding Piper
- how to provide an onnxruntime build with the chosen providers
Do you mean I need to manually edit this line:
https://github.com/rhasspy/piper/blob/078bf8a17e24ebb18332710354c0797872dcef6a/src/python_run/piper/voice.py#L53
@eliranwong
- Since Piper doesn't provide a command-line option to set the onnxruntime EP to ROCm, you need to modify Piper's C++ code to set it manually in the source (see the sketch of the provider mechanism after this list).
- Find an onnxruntime.so library built with the ROCm EP; if pre-built binaries are not provided by Microsoft, you need to build it yourself.
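For context, the provider selection Piper would need to make is visible directly in the onnxruntime Python API: providers is a priority-ordered list, and the runtime falls back down the list when an EP cannot be loaded. A minimal sketch ("model.onnx" is a placeholder path):

import onnxruntime

# Providers are tried in the order given, so onnxruntime falls back
# to CPU if the ROCm EP is not available in this build
session = onnxruntime.InferenceSession(
    "model.onnx",
    providers=["ROCMExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # the providers actually attached to this session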
So far, below is the easiest way that I found:
- Install ONNX Runtime with the ROCm Execution Provider (reference: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu#22-onnx-runtime-with-rocm-execution-provider)
# Prerequisites
pip install -U pip
pip install cmake onnx
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Build ONNX Runtime from source with the ROCm EP
git clone --recursive https://github.com/ROCmSoftwarePlatform/onnxruntime.git
cd onnxruntime
git checkout rocm6.0_internal_testing
./build.sh --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
# Install the wheel produced by the build
pip install build/Linux/Release/dist/*
- Manually edit this line:
https://github.com/rhasspy/piper/blob/078bf8a17e24ebb18332710354c0797872dcef6a/src/python_run/piper/voice.py#L53
to:
providers=["ROCMExecutionProvider"]
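Before relying on this edit, it may be worth a quick check that the self-built wheel actually exposes the ROCm EP (a minimal check using the standard onnxruntime API):

import onnxruntime

# Should include 'ROCMExecutionProvider' if the wheel was built and installed correctly
print(onnxruntime.get_available_providers())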
I am open to a better solution.
I would appreciate it if the author of Piper could support this directly, so that I don't need to manually edit the line.
Many thanks.
I read that Piper currently supports a --cuda argument. I would suggest adding a --rocm argument to make Piper better.
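For illustration only, a hypothetical sketch of how such flags might map to onnxruntime provider lists (this is not Piper's actual code, nor the pull request below):

import argparse

# Hypothetical flag-to-provider mapping; flag names mirror the suggestion above
parser = argparse.ArgumentParser()
parser.add_argument("--cuda", action="store_true")
parser.add_argument("--rocm", action="store_true")
parser.add_argument("--migraphx", action="store_true")
args = parser.parse_args()

if args.migraphx:
    providers = ["MIGraphXExecutionProvider"]
elif args.rocm:
    providers = ["ROCMExecutionProvider"]
elif args.cuda:
    providers = ["CUDAExecutionProvider"]
else:
    providers = ["CPUExecutionProvider"]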
Update: Created a pull request to add --migraphx and --rocm options to support AMD / ROCm-enabled GPUs.
If the pull request is merged, AMD GPU users can run Piper with either 'piper --migraphx' or 'piper --rocm'.
Until the pull request is merged, AMD GPU users can still work around the issue with the following setup:
To support ROCm-enabled GPUs via 'ROCMExecutionProvider' or 'MIGraphXExecutionProvider':
- Install piper-tts
pip install piper-tts
- Uninstall onnxruntime
pip uninstall onnxruntime
- Install onnxruntime-rocm
pip3 install https://repo.radeon.com/rocm/manylinux/rocm-rel-6.0.2/onnxruntime_rocm-inference-1.17.0-cp310-cp310-linux_x86_64.whl --no-cache-dir
Remarks: Wheel files that support different ROCm versions are available at: https://repo.radeon.com/rocm/manylinux
To verify:
python3
>>> import onnxruntime
>>> onnxruntime.get_available_providers()
['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
Workaround:
Manually edit the 'load' function in the file ../site-packages/piper/voice.py:
From:
providers=["CPUExecutionProvider"]
if not use_cuda
else ["CUDAExecutionProvider"],
To:
providers=["MIGraphXExecutionProvider"],
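As a slightly more defensive variant of this edit (a sketch, not part of the pull request), the provider can be chosen based on what the installed onnxruntime actually exposes, so the same file still works on CPU-only machines:

import onnxruntime

# Prefer MIGraphX when the installed onnxruntime exposes it; otherwise stay on CPU
available = onnxruntime.get_available_providers()
providers = (
    ["MIGraphXExecutionProvider"]
    if "MIGraphXExecutionProvider" in available
    else ["CPUExecutionProvider"]
)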
Does this also work for training a new voice model (via the python3 -m piper_train --accelerator 'gpu' flag)?
The PR works for inference. I haven't touched the training part.
I also wrote a tutorial on working with an iGPU:
https://discuss.linuxcontainers.org/t/run-offline-tts-with-amd-gpu-acceleration-in-an-incus-container/20273
Does this also work for training a new voice model (via the python3 -m piper_train --accelerator 'gpu' flag)?
I think it won't work for training, as Piper is using PyTorch 1, and PyTorch only supports ROCm starting with PyTorch 2 (and only ROCm versions between 5.2 and 6.0).
I am looking for an alternative on my end. I was digging into Bark from suno.ai, but I am not sure how yet. Edit: Actually, Coqui.ai (https://docs.coqui.ai/en/dev/index.html) seems to be a good candidate.
PyTorch only supports ROCm starting with PyTorch 2 (and only ROCm versions between 5.2 and 6.0)
Minor correction: PyTorch now supports up to ROCm 6.1.3, the official AMD package. You may read https://github.com/eliranwong/MultiAMDGPU_AIDev_Ubuntu#install-pytorch
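For anyone checking their setup, one way to confirm that the installed PyTorch is a ROCm build and can see the GPU (torch.version.hip is only set on ROCm builds; ROCm devices are reported through the CUDA-named API):

import torch

print(torch.__version__)           # typically ends with +rocmX.Y on a ROCm wheel
print(torch.version.hip)           # HIP version string on ROCm builds, None otherwise
print(torch.cuda.is_available())   # True when a ROCm GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))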
Has anyone figured out how to train a model with an AMD GPU? I get this error, which might be related:
self._accelerator_flag = self._choose_gpu_accelerator_backend()
  File "/home/ishaan/Music/training/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 518, in _choose_gpu_accelerator_backend
    raise MisconfigurationException("No supported gpu backend found!")
pytorch_lightning.utilities.exceptions.MisconfigurationException: No supported gpu backend found!