piper
Accelerate with AMD GPUs
This pull request extends piper's support to AMD GPUs (ROCm-enabled GPUs) via the 'ROCMExecutionProvider' or 'MIGraphXExecutionProvider'.
Original issue is recorded at: https://github.com/rhasspy/piper/issues/483
To support ROCm-enabled GPUs via 'ROCMExecutionProvider' or 'MIGraphXExecutionProvider':
- Install piper-tts
pip install piper-tts
- Uninstall onnxruntime
pip uninstall onnxruntime
- Install onnxruntime-rocm
pip3 install https://repo.radeon.com/rocm/manylinux/rocm-rel-6.0.2/onnxruntime_rocm-inference-1.17.0-cp310-cp310-linux_x86_64.whl
Remarks: Wheel files that support different ROCm versions are available at: https://repo.radeon.com/rocm/manylinux
To verify, start a Python session and check the available providers:
python3
>>> import onnxruntime
>>> onnxruntime.get_available_providers()
['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
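Once both AMD providers are listed, they can be handed to an onnxruntime session directly. The sketch below is illustrative, not piper's actual code: the provider names are the real ones reported above, but the helper function and the model path are hypothetical.

```python
# Sketch: build a provider list for onnxruntime.InferenceSession, preferring
# the AMD GPU providers and always keeping CPU as the final fallback.
# choose_providers() and the model path are illustrative, not piper's code.

PREFERRED = [
    "MIGraphXExecutionProvider",
    "ROCMExecutionProvider",
    "CPUExecutionProvider",
]

def choose_providers(available):
    """Keep the preferred providers that are available; guarantee a CPU fallback."""
    chosen = [p for p in PREFERRED if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# Usage (requires onnxruntime-rocm and a piper voice model):
# import onnxruntime
# providers = choose_providers(onnxruntime.get_available_providers())
# session = onnxruntime.InferenceSession("voice.onnx", providers=providers)
```

onnxruntime tries the providers in list order, so putting CPU last means the session still loads on machines without a working ROCm install.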
To accelerate via the MIGraphXExecutionProvider:
piper --migraphx
To accelerate via the ROCMExecutionProvider:
piper --rocm
Remarks: Tested on Ubuntu 22.04.4 + Kernel 6.6.32 + ROCm 6.0.2
Setup notes are available at: https://github.com/eliranwong/MultiAMDGPU_AIDev_Ubuntu/tree/main
This seems like a legitimate pull request. Having it merged would be appreciated.
Can you show benchmarks using this solution?
crickets
It appears that the developer does not care much about AMD users.
I'd understand the lack of interest in AMD if he's a sole maintainer who uses Nvidia, but I don't get why he'd ignore such a well-prepared PR to improve the capabilities of piper! I think I'll use your fork @eliranwong for now.
Has anyone got training to work with AMD GPU yet?
I got it working in AMD rocm docker environment but my old RTX2080S outperforms my new 7900xtx GPU. Trying to figure out how to get it working better with AMD GPU.
I'm unsure how to get this working as I'm kind of a noob. I was following along with the NetworkChuck tutorial (https://blog.networkchuck.com/posts/how-to-clone-a-voice/) and got everything to work except the GPU acceleration as I have a 7900xtx.
Where in the above tutorial would I go through the steps in this pull request?
Development has moved: https://github.com/OHF-Voice/piper1-gpl
What do you think about just enabling all of the GPU providers if --cuda is passed?
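A rough sketch of what that suggestion could look like; the flag handling and function name are hypothetical, not piper's actual code. The idea is to pass every GPU-capable provider and let ONNX Runtime use the first one that initializes.

```python
# Hypothetical sketch of the suggestion above: when GPU acceleration is
# requested, hand onnxruntime every GPU-capable provider and let it pick
# the first one that loads; CPU stays as the final fallback.

GPU_PROVIDERS = [
    "CUDAExecutionProvider",
    "MIGraphXExecutionProvider",
    "ROCMExecutionProvider",
]

def providers_for(use_gpu):
    """Provider list for onnxruntime.InferenceSession(..., providers=...)."""
    if use_gpu:
        return GPU_PROVIDERS + ["CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```

Since onnxruntime silently skips providers it cannot initialize, a single flag like this would work unchanged on both Nvidia and AMD machines.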
maybe change --cuda to --gpu for future-proof integration?
Would it be possible to have separate, ready-to-use Docker images of wyoming-piper for CPU/CUDA/ROCm?