Using WeNet 3.0.0, I fine-tuned the AISHELL Paraformer and obtained a .pt model. When exporting it to ONNX, I hit the following error:
```
/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/torch/_jit_internal.py:730: FutureWarning: ignore(True) has been deprecated. TorchScript will now drop the function call on compilation. Use torch.jit.unused now. {}
  warnings.warn(
Traceback (most recent call last):
  File "wenet/bin/export_onnx_cpu.py", line 30, in <module>
    import onnx
  File "/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/onnx/__init__.py", line 11, in <module>
    from onnx.external_data_helper import load_external_data_for_model, write_external_data_tensors, convert_model_to_external_data
  File "/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/onnx/external_data_helper.py", line 14, in <module>
    from .onnx_pb import TensorProto, ModelProto
  File "/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/onnx/onnx_pb.py", line 8, in <module>
    from .onnx_ml_pb2 import *  # noqa
  File "/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/onnx/onnx_ml_pb2.py", line 33, in <module>
    _descriptor.EnumValueDescriptor(
  File "/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 789, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 - Downgrade the protobuf package to 3.20.x or lower.
 - Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
protobuf version: 4.25.2
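For reference, a minimal sketch of the env-var workaround suggested by the error message: the variable has to be set before onnx (or any protobuf-generated module) is first imported, either in the shell or at the very top of the script. The other option is downgrading protobuf to 3.20.x or lower. This is only an illustration, not WeNet code:

```python
import os

# protobuf picks its implementation when it is first imported, so this must
# run before the first `import onnx` / `import google.protobuf` in the process.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

import onnx  # noqa: E402  (imported after the env var on purpose)
print(onnx.__version__)
```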
After setting PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python, a different error appears:
```
/home/ubuntu/anaconda3/envs/wenet3/lib/python3.8/site-packages/torch/_jit_internal.py:730: FutureWarning: ignore(True) has been deprecated. TorchScript will now drop the function call on compilation. Use torch.jit.unused now. {}
  warnings.warn(
Traceback (most recent call last):
  File "wenet/bin/export_onnx_cpu.py", line 470, in <module>
    main()
  File "wenet/bin/export_onnx_cpu.py", line 426, in main
    model, configs = init_model(args, configs)
  File "/asr/ubuntu/wenet/boiling/paraformer/wenet/utils/init_model.py", line 141, in init_model
    predictor = WENET_PREDICTOR_CLASSES[predictor_type](
TypeError: __init__() got an unexpected keyword argument 'upsample_type'
```
So my question: does the current code support converting the latest Paraformer .pt model to ONNX?
Does the predictor section of your train.yaml contain the upsample field?
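If it does, one quick way to see which keyword arguments the predictor class in your installed wenet actually accepts (and which config keys, such as upsample_type, it would reject) is an introspection snippet like the sketch below. filter_predictor_conf is a hypothetical debugging helper, not a WeNet API; the proper fix is updating wenet itself (see the PR linked below).

```python
import inspect

def filter_predictor_conf(predictor_cls, predictor_conf: dict) -> dict:
    """Drop config keys that predictor_cls.__init__ does not accept.

    Hypothetical helper for debugging 'unexpected keyword argument' errors.
    """
    accepted = set(inspect.signature(predictor_cls.__init__).parameters) - {"self"}
    dropped = sorted(k for k in predictor_conf if k not in accepted)
    if dropped:
        print(f"predictor args not accepted by {predictor_cls.__name__}: {dropped}")
    return {k: v for k, v in predictor_conf.items() if k in accepted}

# Usage sketch (assumes the predictor config lives under configs['predictor_conf'],
# mirroring the predictor block of train.yaml):
# predictor_conf = filter_predictor_conf(
#     WENET_PREDICTOR_CLASSES[predictor_type], configs['predictor_conf'])
```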
Same question here; exporting to a TorchScript model fails with the same error.
Use this: https://github.com/wenet-e2e/wenet/pull/2389