
[Help]: Mac mini M4 can't run it

Open · weigeloveu opened this issue 9 months ago · 3 comments

(maskgct) www@mini Amphion % python -m models.tts.maskgct.gradio_demo
./models/tts/maskgct/g2p/sources/g2p_chinese_model/poly_bert_model.onnx
/Users/www/miniconda3/envs/maskgct/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:118: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CoreMLExecutionProvider, AzureExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Traceback (most recent call last):
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/www/Amphion/models/tts/maskgct/gradio_demo.py", line 23, in <module>
    from models.tts.maskgct.g2p.g2p_generation import g2p, chn_eng_g2p
  File "/Users/www/Amphion/models/tts/maskgct/g2p/g2p_generation.py", line 10, in <module>
    from models.tts.maskgct.g2p.utils.g2p import phonemizer_g2p
  File "/Users/www/Amphion/models/tts/maskgct/g2p/utils/g2p.py", line 17, in <module>
    phonemizer_zh = EspeakBackend(
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/site-packages/phonemizer/backend/espeak/espeak.py", line 45, in __init__
    super().__init__(
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/site-packages/phonemizer/backend/espeak/base.py", line 39, in __init__
    super().__init__(
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/site-packages/phonemizer/backend/base.py", line 77, in __init__
    raise RuntimeError(  # pragma: nocover
RuntimeError: espeak not installed on your system
(maskgct) www@mini Amphion % 

weigeloveu commented on Mar 11 '25, 11:03

Solved by adding the espeak-ng paths to the environment:

export PHONEMIZER_ESPEAK_LIBRARY="/usr/local/Cellar/espeak-ng/<version>/lib/libespeak-ng.dylib"
export ESPEAK_DATA_PATH="/usr/local/Cellar/espeak-ng/<version>/share/espeak-ng-data"
export DYLD_LIBRARY_PATH="/usr/local/Cellar/pcaudiolib/<version>/lib"  # make sure the pcaudiolib dependency is installed
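
If you would rather not hard-code the versioned Cellar directories, a small sketch along these lines should also work, assuming both espeak-ng and pcaudiolib were installed via Homebrew (brew --prefix resolves the install location for you; paths will differ between Intel and Apple Silicon Homebrew):

# Sketch: resolve Homebrew prefixes instead of hard-coding version numbers.
# Assumes `brew install espeak-ng pcaudiolib` has already been run.
ESPEAK_PREFIX="$(brew --prefix espeak-ng)"
PCAUDIO_PREFIX="$(brew --prefix pcaudiolib)"
export PHONEMIZER_ESPEAK_LIBRARY="$ESPEAK_PREFIX/lib/libespeak-ng.dylib"
export ESPEAK_DATA_PATH="$ESPEAK_PREFIX/share/espeak-ng-data"
export DYLD_LIBRARY_PATH="$PCAUDIO_PREFIX/lib:${DYLD_LIBRARY_PATH:-}"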

But now there is a new error that I don't know how to resolve. Can anyone help, please?

(maskgct) www@mini Amphion % python -m models.tts.maskgct.gradio_demo
./models/tts/maskgct/g2p/sources/g2p_chinese_model/poly_bert_model.onnx
/Users/www/miniconda3/envs/maskgct/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:118: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CoreMLExecutionProvider, AzureExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Start loading: facebook/w2v-bert-2.0
Traceback (most recent call last):
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/www/miniconda3/envs/maskgct/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/www/Amphion/models/tts/maskgct/gradio_demo.py", line 31, in <module>
    device = torch.device("mps" if torch.cuda.is_available() else "CPU")
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: CPU

weigeloveu commented on Mar 11 '25, 12:03

Replace "CPU" with "cpu".

romain130492 commented on Apr 22 '25, 03:04

The error points to this line in gradio_demo.py:

device = torch.device("mps" if torch.cuda.is_available() else "CPU")  # <-- error here

Change the else "CPU" part to else "cpu", i.e.:

device = torch.device("mps" if torch.cuda.is_available() else "cpu")
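
For reference, a fuller device-selection sketch is below. Note that torch.cuda.is_available() is always False on Apple Silicon, so the original condition can never actually select "mps"; the torch.backends.mps.is_available() check is an addition of mine, not what gradio_demo.py currently does:

import torch

# Prefer Apple's Metal (MPS) backend when available, then CUDA, then CPU.
# torch.cuda.is_available() is always False on Apple Silicon, so the original
# '"mps" if torch.cuda.is_available()' condition never picks MPS there.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

print(device)  # should print "mps" on an M-series Mac with a recent PyTorch build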

BeRiriJ commented on May 29 '25, 13:05