onnx-tensorflow
GatherAndScatterMixin: TypeError: slice indices must be integers or None or have an __index__ method
Describe the bug
I am trying to convert an ONNX model exported from the mmdetection framework, but I am getting this error:
2021-07-24 13:24:50.300861: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-24 13:24:52.419083: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-24 13:24:52.463692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:18:00.0 name: GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2021-07-24 13:24:52.464971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:86:00.0 name: GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2021-07-24 13:24:52.465850: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 2 with properties:
pciBusID: 0000:3b:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2021-07-24 13:24:52.466728: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 3 with properties:
pciBusID: 0000:af:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2021-07-24 13:24:52.466754: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-24 13:24:52.469293: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-24 13:24:52.469352: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-24 13:24:52.470295: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-24 13:24:52.470559: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-24 13:24:52.473078: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-24 13:24:52.473615: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-24 13:24:52.473874: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2021-07-24 13:24:52.473890: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1766] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-07-24 13:24:52.474307: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-24 13:24:52.476749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-24 13:24:52.476822: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]
WARNING:tensorflow:From /opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
Traceback (most recent call last):
File "onnx_to_tf.py", line 31, in <module>
tf_rep.export_graph("output_path") # export the model
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/backend_rep.py", line 115, in export_graph
signatures=self.tf_module.__call__.get_concrete_function(
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 1367, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 1273, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 763, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3050, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3279, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3971, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/backend_tf_module.py:98 __call__ *
output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node,
/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/backend.py:328 _onnx_node_to_tensorflow_op *
return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/handlers/handler.py:59 handle *
return ver_handle(node, **kwargs)
/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/handlers/backend/unsqueeze.py:32 version_11 *
return cls._common(node, **kwargs)
/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/handlers/backend/scatter_nd.py:23 _common *
indices = cls.process_neg_idx(data, indices)
/opt/miniconda3/envs/model_exporter/lib/python3.8/site-packages/onnx_tf/handlers/backend/gather_and_scatter_mixin.py:77 process_neg_idx *
max_i = tf.cast(data_shape[:indices_shape[-1]], indices.dtype)
TypeError: slice indices must be integers or None or have an __index__ method
I did a little debugging to check the values of some of the variables that reach `process_neg_idx`:
data: Tensor("onnx_tf_prefix_ConstantOfShape_2607:0", shape=(4741, 1), dtype=float32)
indices: Tensor("onnx_tf_prefix_Concat_2639:0", shape=(None, None, 2), dtype=int64)
indices_shape: Tensor("Shape_648:0", shape=(3,), dtype=int64)
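From these values, the mismatch appears to be that `data` has a fully static shape while `indices` does not, so `indices_shape` comes from `tf.shape()` and is a symbolic tensor during tracing; using that tensor as a Python slice bound on a static shape object is what triggers the `TypeError`. A minimal sketch of the pattern (illustrative code and shapes, not the library's actual implementation):

```python
import tensorflow as tf


@tf.function(input_signature=[
    tf.TensorSpec(shape=[None, None, 2], dtype=tf.int64)  # like `indices` above
])
def failing(indices):
    data_shape = tf.TensorShape([4741, 1])                # static shape of `data`
    indices_shape = tf.shape(indices, out_type=tf.int64)  # symbolic tensor, shape (3,)
    # Slicing a TensorShape with a symbolic scalar tensor raises the TypeError.
    return tf.cast(data_shape[:indices_shape[-1]], indices.dtype)


@tf.function(input_signature=[
    tf.TensorSpec(shape=[None, None, 2], dtype=tf.int64)
])
def working(indices):
    data_shape = tf.constant([4741, 1], dtype=tf.int64)   # keep the shape as a tensor
    indices_shape = tf.shape(indices, out_type=tf.int64)
    return data_shape[:indices_shape[-1]]                 # tensor sliced by a tensor: OK


try:
    failing.get_concrete_function()
except TypeError as e:
    print(e)  # ... slice indices must be integers or None or have an __index__ method

print(working.get_concrete_function())  # traces fine
```

Keeping both shapes as tensors (or using the statically known last dimension of `indices`, which is 2 here) avoids the Python-level slicing.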
To Reproduce
I am just running the CLI conversion command:
onnx-tf convert -i input.onnx -o tf_model
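For reference, the equivalent Python API call fails with the same traceback; this is essentially what the `onnx_to_tf.py` script in the traceback does (the file names below are the ones from my command):

```python
import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("input.onnx")  # same model as in the CLI command
tf_rep = prepare(onnx_model)          # build the TensorFlow representation
tf_rep.export_graph("tf_model")       # fails during graph tracing with the TypeError above
```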
ONNX model file
https://drive.google.com/file/d/1Isp4_kGe3KJdikT2Q5i5LrcuxSdxlTnP/view?usp=sharing
Python, ONNX, ONNX-TF, Tensorflow version
This section can be obtained by running get_version.py from the util folder.
- Python version: 3.8
- ONNX version: 1.9.0
- ONNX-TF version: 1.8.0
- Tensorflow version: 2.5.0
@mmeendez8 The fix is merged, so please verify whether your issue is resolved. Thanks!
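One way to pick up the merged fix before it lands in a release (assuming it is only on the master branch and not yet in the 1.8.0 package) is to install onnx-tf directly from GitHub:
pip install git+https://github.com/onnx/onnx-tensorflow.git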