tensorflow-onnx
Support RaggedGather operator when rank > 1
Describe the bug Currently, tf2onnx only supports the RaggedGather op with ragged rank == 1, but some NLP models use ragged tensors with rank > 1.
System information
- OS Platform and Distribution: Ubuntu 20.04
- Tensorflow Version: 2.7.0
- Python version: 3.8.8
To Reproduce Add the test code below to tensorflow-onnx/blob/master/tests/test_backend.py.
# for rank == 2
@check_tf_min_version("1.14", "ragged needs tf 1.14")
@check_opset_min_version(11, "CumSum")
def test_ragged_gather_rank2(self):
    splits_val = np.array([0, 3, 3, 5, 9, 10], dtype=np.int32)
    dense_vals_val = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], dtype=np.float32)
    indices_val = np.array([1, 2, 0, 1, 1, 2], dtype=np.int32)

    def func(splits, rt_dense_values, indices):
        x = tf.RaggedTensor.from_row_splits(
            tf.RaggedTensor.from_row_splits(
                values=rt_dense_values,
                row_splits=splits),
            row_splits=[0, 1, 1, 5])
        g = tf.gather(x, indices)
        rt_nested_splits = tf.identity(g.row_splits, name=_TFOUTPUT)
        rt_dense_values = tf.identity(g.flat_values, name=_TFOUTPUT1)
        return rt_nested_splits, rt_dense_values

    self._run_test_case(func, [_OUTPUT, _OUTPUT1],
                        {_INPUT: splits_val, _INPUT1: dense_vals_val, _INPUT2: indices_val})
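To show what a rank-2 RaggedGather must compute, here is a pure-NumPy sketch using the same splits and values as the test above. The function name `ragged_gather_rank2` is illustrative only, not a tf2onnx or TensorFlow API: a rank-2 ragged tensor is encoded by two nested splits vectors, and gathering on the outer dimension requires rebuilding both of them, which is what the current rank == 1 conversion cannot do.

```python
import numpy as np

# Hypothetical sketch (not tf2onnx code): gather outer rows of a ragged
# tensor with ragged rank 2.  outer_splits partitions the inner rows,
# inner_splits partitions the flat values.
def ragged_gather_rank2(outer_splits, inner_splits, flat_values, indices):
    new_outer = [0]   # rebuilt outer row_splits
    new_inner = [0]   # rebuilt inner row_splits
    new_values = []   # rebuilt flat values
    for i in indices:
        # inner rows belonging to outer row i
        start, stop = outer_splits[i], outer_splits[i + 1]
        new_outer.append(new_outer[-1] + (stop - start))
        for j in range(start, stop):
            # flat values belonging to inner row j
            vs, ve = inner_splits[j], inner_splits[j + 1]
            new_inner.append(new_inner[-1] + (ve - vs))
            new_values.extend(flat_values[vs:ve])
    return np.array(new_outer), np.array(new_inner), np.array(new_values)

outer = np.array([0, 1, 1, 5])           # 3 outer rows over 5 inner rows
inner = np.array([0, 3, 3, 5, 9, 10])    # 5 inner rows over 10 flat values
values = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], dtype=np.float32)

o, i_, v = ragged_gather_rank2(outer, inner, values, [1, 2, 0, 1, 1, 2])
# o  -> [0, 0, 4, 5, 5, 5, 9]   (new outer splits)
# v  -> 17 gathered flat values
```

This mirrors what `tf.gather(x, indices)` produces for the nested RaggedTensor in the test: both levels of `row_splits` change, so the converter would need to emit ONNX ops that recompute each nesting level, not just one.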
and run command:
python tensorflow-onnx/tests/test_backend.py BackendTests.test_ragged_gather_rank2 --opset 14
Error message
File "/tensorflow-onnx/tf2onnx/tfonnx.py", line 292, in tensorflow_onnx_mapping
  func(g, node, **kwargs, initialized_tables=initialized_tables, dequantize=dequantize)
File "/tensorflow-onnx/tf2onnx/onnx_opset/tensor.py", line 2663, in version_11
  utils.make_sure(inp_ragged_rank == 1 and out_ragged_rank == 1 and len(params_nested_splits) == 1, err_msg)
File "//tensorflow-onnx/tf2onnx/utils.py", line 264, in make_sure
  raise ValueError("make_sure failure: " + error_msg % args)
ValueError: make_sure failure: RaggedGather conversion only supports ragged rank of 1
A request has been posted to the ONNX community: https://github.com/onnx/onnx/issues/4057.
Closed: this issue was fixed by changing the original model and is being tracked in the ONNX community.