
ONNXRT cannot be applied to ALBERT

Open feifeibear opened this issue 4 years ago • 4 comments

```
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:738: UserWarning: ONNX export failed on ATen operator einsum because torch.onnx.symbolic_opset9.einsum does not exist
  .format(op_name, opset_version, op_name))
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/workspace/benchmark/benchmark_helper.py", line 89, in generate_onnx_model
    torch.onnx.export(model=model, args=(input_ids, ), f=outf)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/__init__.py", line 168, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 69, in export
    use_external_data_format=use_external_data_format)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 488, in _export
    fixed_batch_size=fixed_batch_size)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 351, in _model_to_graph
    fixed_batch_size=fixed_batch_size, params_dict=params_dict)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 154, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/__init__.py", line 199, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 739, in _run_symbolic_function
    op_fn = sym_registry.get_registered_op(op_name, '', opset_version)
  File "/opt/conda/lib/python3.7/site-packages/torch/onnx/symbolic_registry.py", line 109, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator einsum to ONNX opset version 9 is not supported. Support for this operator was added in version 12, try exporting with this version.
"""
```

The above exception was the direct cause of the following exception:

```
Traceback (most recent call last):
  File "cpu_benchmark.py", line 173, in <module>
    main()
  File "cpu_benchmark.py", line 164, in main
    benchmark_helper.onnxruntime_benchmark_creator('CPU')(**kwargs)
  File "/workspace/benchmark/benchmark_helper.py", line 106, in impl
    backend))
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 261, in apply
    return self.apply_async(func, args, kwds).get()
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
RuntimeError: Exporting the operator einsum to ONNX opset version 9 is not supported. Support for this operator was added in version 12, try exporting with this version.
```

feifeibear avatar Jul 10 '20 03:07 feifeibear

How should this be fixed? Implement the einsum by hand, or update ONNX?
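For reference, the "implement it by hand" route usually means rewriting the einsum as explicit matmul/transpose calls, which older ONNX opsets can export. A minimal sketch (the subscript pattern here is illustrative, not necessarily the exact one ALBERT uses):

```python
import torch

q = torch.randn(2, 4, 8)  # (batch, seq, dim)
k = torch.randn(2, 4, 8)

# einsum form: contract over the last dimension
scores_einsum = torch.einsum('bid,bjd->bij', q, k)

# equivalent hand-written form using matmul, exportable at opset 9
scores_matmul = torch.matmul(q, k.transpose(-1, -2))

assert torch.allclose(scores_einsum, scores_matmul, atol=1e-6)
```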

lsy641 avatar Jul 21 '20 02:07 lsy641

I want to compare Turbo with ONNXRT on ALBERT. However, I found that some PyTorch ops are not supported by ONNX export.

feifeibear avatar Jul 22 '20 08:07 feifeibear

The issue was resolved in the latest PyTorch. Please make sure to use ONNX opset 12 when exporting: https://github.com/pytorch/pytorch/issues/26893

yufenglee avatar Jul 23 '20 23:07 yufenglee

I have noticed this issue and left a comment on the PyTorch issue.

feifeibear avatar Jul 24 '20 05:07 feifeibear