
Does piper support AMD GPU acceleration with ROCm?

Open eliranwong opened this issue 1 year ago • 17 comments

Does piper support AMD GPU acceleration with ROCm?

eliranwong avatar Apr 29 '24 11:04 eliranwong

I can see that ONNX Runtime supports AMD ROCm; please read:

https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/install-onnx.html

But how can it be integrated into Piper?

eliranwong avatar Apr 29 '24 11:04 eliranwong

@eliranwong

From what I know, you need to modify and rebuild Piper, setting the execution providers to the ones you want. You also need an onnxruntime build that includes those providers.

Best Musharraf

mush42 avatar Apr 29 '24 15:04 mush42

Appreciate your reply and help. May I ask for more information about:

  1. how to set the execution providers when rebuilding Piper
  2. how to obtain an onnxruntime build that includes those providers

eliranwong avatar Apr 29 '24 18:04 eliranwong

Do you mean I need to manually edit this line:

https://github.com/rhasspy/piper/blob/078bf8a17e24ebb18332710354c0797872dcef6a/src/python_run/piper/voice.py#L53

eliranwong avatar Apr 29 '24 19:04 eliranwong

@eliranwong

  1. Since Piper doesn't provide a command-line option to set the onnxruntime EP to ROCm, you need to modify Piper's C++ code to set it manually in the source.
  2. Find an onnxruntime.so library built with the ROCm EP; if Microsoft doesn't provide pre-built binaries, you need to build it yourself.
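For the Python frontend, the idea behind choosing an EP can be sketched like this (a minimal, hypothetical illustration: `select_providers` is not part of Piper, and the `available` list stands in for whatever `onnxruntime.get_available_providers()` returns on your machine):

```python
def select_providers(available, preferred):
    """Keep the preferred execution providers that the installed
    onnxruntime build actually supports, always ending with the CPU
    provider as a fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# Simulated provider list from a ROCm-enabled onnxruntime build.
available = ["ROCMExecutionProvider", "CPUExecutionProvider"]
print(select_providers(available, ["ROCMExecutionProvider"]))
# → ['ROCMExecutionProvider', 'CPUExecutionProvider']
```

The resulting list would then be passed as the `providers` argument when the onnxruntime inference session is created; onnxruntime tries the entries in order, so listing the CPU provider last keeps things working on machines without an AMD-enabled build.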

mush42 avatar Apr 29 '24 22:04 mush42

So far, below is the easiest way that I found:

  1. Install ONNX Runtime with the ROCm Execution Provider (reference: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu#22-onnx-runtime-with-rocm-execution-provider)
# pre-requisites
pip install -U pip
pip install cmake onnx
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install ONNXRuntime from source
git clone --recursive https://github.com/ROCmSoftwarePlatform/onnxruntime.git
cd onnxruntime
git checkout rocm6.0_internal_testing

./build.sh --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
pip install build/Linux/Release/dist/*
  2. Manually edit this line:

https://github.com/rhasspy/piper/blob/078bf8a17e24ebb18332710354c0797872dcef6a/src/python_run/piper/voice.py#L53

to:

providers=["ROCMExecutionProvider"]

I am open to better solutions.

I would appreciate it if the author of Piper could support this directly, so that I don't need to manually edit the line.

Many thanks.

eliranwong avatar Apr 30 '24 09:04 eliranwong

I read that Piper currently supports a --cuda argument. I would suggest adding a --rocm argument to make Piper better.

eliranwong avatar Apr 30 '24 09:04 eliranwong

Update: Created a pull request to add --migraphx and --rocm options to support AMD / ROCm-enabled GPUs.

If the pull request is merged, AMD GPU users can run Piper with either 'piper --migraphx' or 'piper --rocm'.

Until the pull request is merged, AMD GPU users can still work around the issue with the following setup:

To support ROCm-enabled GPUs via 'ROCMExecutionProvider' or 'MIGraphXExecutionProvider':

  1. Install piper-tts

pip install piper-tts

  2. Uninstall onnxruntime

pip uninstall onnxruntime

  3. Install onnxruntime-rocm

pip3 install https://repo.radeon.com/rocm/manylinux/rocm-rel-6.0.2/onnxruntime_rocm-inference-1.17.0-cp310-cp310-linux_x86_64.whl --no-cache-dir

Remarks: Wheel files that support different ROCm versions are available at: https://repo.radeon.com/rocm/manylinux

To verify:

$ python3
>>> import onnxruntime
>>> onnxruntime.get_available_providers()

Output:

['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']

Workaround:

Manually edit the 'load' function in the file ../site-packages/piper/voice.py:

From:

providers=["CPUExecutionProvider"]
if not use_cuda
else ["CUDAExecutionProvider"],

To:

providers=["MIGraphXExecutionProvider"],
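A slightly more defensive variant of that edit is possible (a sketch only, not the shipped Piper code; `pick_amd_providers` is a hypothetical helper, and `available` stands in for the output of `onnxruntime.get_available_providers()`): prefer MIGraphX, then ROCm, and fall back to the CPU provider when neither is present, so the edited file keeps working on machines without an AMD-enabled onnxruntime.

```python
def pick_amd_providers(available):
    """Prefer the AMD execution providers in order, keeping the CPU
    provider as a fallback for non-AMD onnxruntime builds."""
    for ep in ("MIGraphXExecutionProvider", "ROCMExecutionProvider"):
        if ep in available:
            return [ep, "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# Simulated provider list matching the verification output above.
available = ["MIGraphXExecutionProvider", "ROCMExecutionProvider",
             "CPUExecutionProvider"]
print(pick_amd_providers(available))
# → ['MIGraphXExecutionProvider', 'CPUExecutionProvider']
```

The returned list would replace the hard-coded `providers=[...]` argument in the `load` function.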

eliranwong avatar May 28 '24 08:05 eliranwong

Does this also work for training a new voice model (via the python3 -m piper_train --accelerator 'gpu' flag)?

SephGER avatar Jun 05 '24 11:06 SephGER

Does this also work for training a new voice model (via the python3 -m piper_train --accelerator 'gpu' flag)?

The PR works for inference. I haven't touched the training part.

eliranwong avatar Jun 25 '24 23:06 eliranwong

I also wrote a tutorial to work with iGPU:

https://discuss.linuxcontainers.org/t/run-offline-tts-with-amd-gpu-acceleration-in-an-incus-container/20273

eliranwong avatar Jun 25 '24 23:06 eliranwong

Does this also work for training a new voice model (via the python3 -m piper_train --accelerator 'gpu' flag)?

I think it won't work for training, as Piper uses PyTorch 1, and PyTorch only supports ROCm starting with PyTorch 2 (and only ROCm versions between 5.2 and 6.0).

I am looking for an alternative on my end. I was digging into Bark from suno.ai but am not sure how yet. Edit: actually, Coqui.ai (https://docs.coqui.ai/en/dev/index.html) seems to be a good candidate.

SonnyAD avatar Jun 26 '24 15:06 SonnyAD

PyTorch only supports ROCm starting PyTorch2 (and only ROCm between 5.2 and 6.0)

Minor correction: PyTorch now supports up to ROCm 6.1.3 via the official AMD packages. You may read https://github.com/eliranwong/MultiAMDGPU_AIDev_Ubuntu#install-pytorch

eliranwong avatar Jun 26 '24 22:06 eliranwong

Has anyone figured out how to train a model with an AMD GPU? I get this error, which might be related:

self._accelerator_flag = self._choose_gpu_accelerator_backend()
  File "/home/ishaan/Music/training/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 518, in _choose_gpu_accelerator_backend
    raise MisconfigurationException("No supported gpu backend found!")
pytorch_lightning.utilities.exceptions.MisconfigurationException: No supported gpu backend found!

Digitalpeer1 avatar Jan 23 '25 19:01 Digitalpeer1