FunASR
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): No module named 'torch.distributed.tensor'
I want to use FunASR to run the UniASR streaming model [iic/speech_UniASR_asr_2pass-zh-cn-16k-audio_and_video-vocab3445-online] (see the official UniASR post on ModelScope).
❓ Questions and Help
Before asking:
- search the issues. (no related issues found)
- search the docs. (checked, but did not find a solution)
According to the model card, only funasr 0.8.8 and modelscope 1.10.0 are supposed to work with this model, but even with those versions it still fails. I tried to resolve the dependency version conflicts and looked up the historical package versions from that period, yet it still raises: RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): No module named 'torch.distributed.tensor'
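For reference, a minimal diagnostic sketch (my own check, not from the FunASR docs; it assumes the root cause is simply that the installed transformers release expects torch 2.x features that torch 1.13 does not ship) confirms that torch.distributed.tensor is missing from this environment:

# Minimal diagnostic sketch: print the installed versions and check whether
# torch.distributed.tensor exists, without importing transformers itself.
import importlib.util
from importlib.metadata import version

print("torch:", version("torch"))                # 1.13.0 in this environment
print("transformers:", version("transformers"))  # 4.51.3 in this environment

# torch.distributed.tensor only exists in newer torch 2.x releases, so on
# torch 1.13 this lookup returns None, matching the ImportError above.
spec = importlib.util.find_spec("torch.distributed.tensor")
print("torch.distributed.tensor available:", spec is not None)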
What is your question?
Code
Official inference test code:
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
# from modelscope.pipelines.asr_inference import ASRInferencePipeline
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_UniASR_asr_2pass-zh-cn-16k-audio_and_video-vocab3445-online')
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
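As a workaround sketch (my own assumption, not from the FunASR or ModelScope docs): fail fast before building the pipeline if the installed torch/transformers pair looks incompatible. The concrete bound used below (transformers >= 4.40 needing torch 2.x) is a guess based on the error message, so the actual fix is probably to pin transformers to a release contemporary with funasr 0.8.8, or to upgrade torch.

# Hedged guard sketch: abort early with a clear message instead of the lazy
# "Failed to import transformers.modeling_utils" error.  The version bound
# below is an assumption, not an official requirement.
from importlib.metadata import version
from packaging.version import Version

torch_ver = Version(version("torch"))
tf_ver = Version(version("transformers"))

if torch_ver < Version("2.0") and tf_ver >= Version("4.40"):
    raise RuntimeError(
        f"transformers {tf_ver} expects torch>=2.x (torch.distributed.tensor), "
        f"but torch {torch_ver} is installed; pin an older transformers or "
        "upgrade torch."
    )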
What have you tried?
What's your environment?
- OS (e.g., Linux):
- FunASR Version (e.g., 1.0.0): 0.8.8
- ModelScope Version (e.g., 1.11.0): 1.10.0
- PyTorch Version (e.g., 2.0.0): 1.13.0
- How you installed funasr (pip, source): pip install funasr==0.8.8
- Python version: 3.9.21
- GPU (e.g., V100M32): NVIDIA 4090
- CUDA/cuDNN version (e.g., cuda11.7): Build cuda_12.5.r12.5/compiler.34177558_0
- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1): none, local deployment
- Any other relevant information:
absl-py==2.2.2 accelerate==0.27.0 addict==2.4.0 aiohappyeyeballs==2.6.1 aiohttp==3.11.18 aiosignal==1.3.2 aliyun-python-sdk-core==2.16.0 aliyun-python-sdk-kms==2.16.5 async-timeout==5.0.1 attrs==25.3.0 audioread==3.0.1 certifi==2025.1.31 cffi==1.17.1 charset-normalizer==3.4.1 click==8.1.8 contourpy==1.3.0 crcmod==1.7 cryptography==44.0.2 cycler==0.12.1 datasets==3.5.0 decorator==5.2.1 diffusers==0.33.1 dill==0.3.6 editdistance==0.8.1 einops==0.8.1 filelock==3.18.0 fonttools==4.57.0 frozenlist==1.6.0 fsspec==2024.2.0 funasr==0.8.8 gast==0.6.0 grpcio==1.71.0 hdbscan==0.8.40 huggingface-hub==0.30.2 humanfriendly==10.0 idna==3.10 importlib_metadata==8.6.1 importlib_resources==6.5.2 jaconv==0.4.0 jamo==0.4.1 jieba==0.42.1 Jinja2==3.1.6 jmespath==0.10.0 joblib==1.4.2 kaldiio==2.18.1 kiwisolver==1.4.7 lazy_loader==0.4 librosa==0.11.0 llvmlite==0.43.0 Markdown==3.8 MarkupSafe==3.0.2 matplotlib==3.9.4 modelscope==1.10.0 mpmath==1.3.0 ms-swift==1.3.0 msgpack==1.1.0 multidict==6.4.3 multiprocess==0.70.14 networkx==3.2.1 nltk==3.9.1 numba==0.60.0 numpy==1.26.4 nvidia-cublas-cu11==11.10.3.66 nvidia-cublas-cu12==12.6.4.1 nvidia-cuda-cupti-cu12==12.6.80 nvidia-cuda-nvrtc-cu11==11.7.99 nvidia-cuda-nvrtc-cu12==12.6.77 nvidia-cuda-runtime-cu11==11.7.99 nvidia-cuda-runtime-cu12==12.6.77 nvidia-cudnn-cu11==8.5.0.96 nvidia-cudnn-cu12==9.5.1.17 nvidia-cufft-cu12==11.3.0.4 nvidia-cufile-cu12==1.11.1.6 nvidia-curand-cu12==10.3.7.77 nvidia-cusolver-cu12==11.7.1.2 nvidia-cusparse-cu12==12.5.4.2 nvidia-cusparselt-cu12==0.6.3 nvidia-nccl-cu12==2.26.2 nvidia-nvjitlink-cu12==12.6.85 nvidia-nvtx-cu12==12.6.77 oss2==2.19.1 packaging==25.0 pandas==1.3.5 peft==0.15.2 pillow==11.2.1 platformdirs==4.3.7 pooch==1.8.2 propcache==0.3.1 protobuf==6.30.2 psutil==7.0.0 pyarrow==19.0.1 pyarrow-hotfix==0.6 pycparser==2.22 pycryptodome==3.22.0 pyparsing==3.2.3 python-dateutil==2.9.0.post0 pytorch-wpe==0.0.1 pytz==2025.2 PyYAML==6.0.2 regex==2024.11.6 requests==2.32.3 rouge==1.0.1 safetensors==0.5.3 scikit-learn==1.6.1 scipy==1.11.4 sentencepiece==0.2.0 simplejson==3.20.1 six==1.17.0 sortedcontainers==2.4.0 soundfile==0.13.1 soxr==0.5.0.post1 sympy==1.13.3 tensorboard==2.19.0 tensorboard-data-server==0.7.2 threadpoolctl==3.6.0 tokenizers==0.21.1 tomli==2.2.1 torch==1.13.0 torch-complex==0.4.4 torchaudio==0.13.0 tqdm==4.67.1 transformers==4.51.3 transformers-stream-generator==0.0.5 triton==3.3.0 typing_extensions==4.13.2 tzdata==2025.2 umap==0.1.1 urllib3==2.4.0 Werkzeug==3.1.3 xxhash==3.5.0 yapf==0.43.0 yarl==1.20.0 zipp==3.21.0