
Bug report - [Onnx Runtime 1.16 incompatible]

Open sorgfresser opened this issue 1 year ago • 2 comments

🐛 Bug

Onnxruntime version 1.16 was released yesterday. If I use it to load silero-vad with onnx=True, I get

ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

Oddly enough, it works if I downgrade to 1.15, even though the message says this requirement has existed since ORT 1.9.
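A common workaround (a sketch, not the project's official fix) is to pass the `providers` argument explicitly when constructing the session, mirroring the `force_onnx_cpu` flag that `OnnxWrapper` in utils_vad.py already accepts. The helper `select_providers` below is hypothetical, not part of silero-vad:

```python
# Sketch of a workaround: ONNX Runtime >= 1.16 insists on an explicit
# `providers` list when constructing an InferenceSession.
# `select_providers` is a hypothetical helper, not part of silero-vad.

def select_providers(force_onnx_cpu, available):
    """Build an explicit execution-provider list for InferenceSession."""
    if force_onnx_cpu or "CUDAExecutionProvider" not in available:
        return ["CPUExecutionProvider"]
    return ["CUDAExecutionProvider", "CPUExecutionProvider"]

# With onnxruntime installed, the failing call in utils_vad.py becomes:
#
#   import onnxruntime
#   opts = onnxruntime.SessionOptions()
#   providers = select_providers(force_onnx_cpu,
#                                onnxruntime.get_available_providers())
#   session = onnxruntime.InferenceSession(path, sess_options=opts,
#                                          providers=providers)
```

Passing the list explicitly silences the ValueError regardless of which providers the ORT build happens to enable.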

To Reproduce

Steps to reproduce the behavior:

pip install onnxruntime==1.16.0

    model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                                  model="silero_vad",
                                  onnx=True,
                                  force_reload=False)

Full stack trace:

  File "/home/simon/PycharmProjects/ttsdata/src/vad.py", line 127, in yield_audio
    model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/torch/hub.py", line 558, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/torch/hub.py", line 587, in _load_local
    model = entry(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/.cache/torch/hub/snakers4_silero-vad_master/hubconf.py", line 44, in silero_vad
    model = OnnxWrapper(os.path.join(model_dir, 'silero_vad.onnx'), force_onnx_cpu)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/.cache/torch/hub/snakers4_silero-vad_master/utils_vad.py", line 24, in __init__
    self.session = onnxruntime.InferenceSession(path, sess_options=opts)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 432, in __init__
    raise e
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 451, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

Expected behavior

The model loads without error, as it does with onnxruntime 1.15.

Environment

Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Manjaro Linux (x86_64)
GCC version: (GCC) 13.2.1 20230801
Clang version: 16.0.6
CMake version: version 3.27.5
Libc version: glibc-2.38

Python version: 3.11.5 (main, Aug 28 2023, 20:02:58) [GCC 13.2.1 20230801] (64-bit runtime)
Python platform: Linux-6.1.51-1-MANJARO-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.2
/usr/lib/libcudnn_adv_infer.so.8.9.2
/usr/lib/libcudnn_adv_train.so.8.9.2
/usr/lib/libcudnn_cnn_infer.so.8.9.2
/usr/lib/libcudnn_cnn_train.so.8.9.2
/usr/lib/libcudnn_ops_infer.so.8.9.2
/usr/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD FX(tm)-8350 Eight-Core Processor
CPU family: 21
Model: 2
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling: 69%
CPU max MHz: 4000.0000
CPU min MHz: 1400.0000
BogoMIPS: 8002.06
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
Virtualization: AMD-V
L1d cache: 128 KiB (8 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 8 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7

Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.9
[pip3] pytorch-metric-learning==2.3.0
[pip3] torch==2.0.1
[pip3] torch-audiomentations==0.11.0
[pip3] torch-pitch-shift==1.2.4
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==1.1.2
[pip3] triton==2.0.0
[conda] Could not collect

sorgfresser avatar Sep 21 '23 19:09 sorgfresser

To be solved with a V5 release, most likely just by exporting the model with the latest ONNX compatibility level.

snakers4 avatar Dec 05 '23 08:12 snakers4

This is probably related to a bug in onnxruntime 1.16.0 which they fixed in 1.16.1. I'm using VAD with 1.16.1 without an issue.
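If the regression is indeed specific to 1.16.0, pinning the dependency avoids it until an upgrade, e.g. in a requirements.txt fragment (version bounds are illustrative):

```
onnxruntime>=1.16.1
```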

ozancaglayan avatar Dec 20 '23 09:12 ozancaglayan

The new VAD version was released just now - https://github.com/snakers4/silero-vad/issues/2#issuecomment-2195433115

Can you please re-run your tests, and if the issue persists, please open a new issue.

The VAD was exported with the latest ONNX opset

snakers4 avatar Jun 27 '24 18:06 snakers4