TurboTransformers
ONNXRT cannot be applied to ALBERT
```
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:738: UserWarning: ONNX export failed on ATen operator einsum because torch.onnx.symbolic_opset9.einsum does not exist
  .format(op_name, opset_version, op_name))
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/workspace/benchmark/benchmark_helper.py", line 89, in generate_onnx_model
    torch.onnx.export(model=model, args=(input_ids, ), f=outf)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/__init__.py", line 168, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 69, in export
    use_external_data_format=use_external_data_format)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 488, in _export
    fixed_batch_size=fixed_batch_size)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 351, in _model_to_graph
    fixed_batch_size=fixed_batch_size, params_dict=params_dict)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 154, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/__init__.py", line 199, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 739, in _run_symbolic_function
    op_fn = sym_registry.get_registered_op(op_name, '', opset_version)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/symbolic_registry.py", line 109, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator einsum to ONNX opset version 9 is not supported. Support for this operator was added in version 12, try exporting with this version.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "cpu_benchmark.py", line 173, in
```
How can I fix this? Should I implement einsum by hand, or update ONNX?
I want to compare Turbo with ONNXRT on ALBERT. However, I found that some PyTorch ops cannot be exported to ONNX.
This issue was resolved in the latest PyTorch. Please make sure to use ONNX opset 12 when exporting: https://github.com/pytorch/pytorch/issues/26893
I have noticed this issue and left a comment on the PyTorch issue.