No importer registered for op: NonZero
When trying to import a model from an ONNX file, I'm getting:
ERROR: TensorRT/parsers/onnx/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: NonZero
How can NonZero be replaced or worked around?
Same problem here when trying to run inference using:
sudo docker run --gpus '"device=0"' --rm -p8000:8000 --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --net inference_network --network-alias=trt_server -v/home/fperez/dev/models/tensorrt:/models nvcr.io/nvidia/tensorrt:20.03-py3 trtexec --onnx=/models/bert-onnx/test/model.onnx --device=0 --verbose
Error:
[W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/07/2020-09:32:17] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 16) [Constant] for ONNX node:
[04/07/2020-09:32:17] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 218 for ONNX tensor: 218
[04/07/2020-09:32:17] [V] [TRT] ModelImporter.cpp:180: [ConstantOfShape] outputs: [218 -> (-1)],
[04/07/2020-09:32:17] [V] [TRT] ModelImporter.cpp:107: Parsing node: [NonZero]
[04/07/2020-09:32:17] [V] [TRT] ModelImporter.cpp:123: Searching for input: 218
[04/07/2020-09:32:17] [V] [TRT] ModelImporter.cpp:129: [NonZero] inputs: [218 -> (-1)],
While parsing node number 15 [NonZero -> "219"]:
--- Begin node ---
input: "218"
output: "219"
op_type: "NonZero"
--- End node ---
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: NonZero
[04/07/2020-09:32:17] [E] Failed to parse onnx file
[04/07/2020-09:32:17] [E] Parsing model failed
[04/07/2020-09:32:17] [E] Engine creation failed
[04/07/2020-09:32:17] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /opt/tensorrt/bin/trtexec --onnx=/models/bert-onnx/test/model.onnx --device=0 --verbose
In the onnx graph:
...
%216 : Long() = onnx::Gather[axis=0](%214, %215) # /Users/fperez/dev/transformers/src/transformers/modeling_bert.py:175:0
%217 : Tensor = onnx::Unsqueeze[axes=[0]](%216)
%218 : Tensor = onnx::ConstantOfShape[value={1}](%217)
%219 : Tensor = onnx::NonZero(%218)
%220 : Tensor = onnx::Transpose[perm=[1, 0]](%219)
%221 : Tensor = onnx::Squeeze[axes=[1]](%220)
...
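Worth noting: the subgraph above applies NonZero to a tensor of all ones, so it just enumerates the indices 0..n-1. A numpy mirror of the chain (n here is an illustrative stand-in for the dynamic dimension) shows it collapses to an arange, which suggests the pattern could in principle be rewritten as an ONNX Range node before export:

```python
import numpy as np

n = 7                                      # illustrative value for the dynamic dim (%216)
shape = np.array([n])                      # %217: Unsqueeze output
ones = np.ones(shape, dtype=np.int64)      # %218: ConstantOfShape(value={1})
nz = np.stack(np.nonzero(ones))            # %219: NonZero -> shape (1, n)
idx = np.transpose(nz, (1, 0))             # %220: Transpose(perm=[1, 0]) -> (n, 1)
idx = np.squeeze(idx, axis=1)              # %221: Squeeze(axes=[1]) -> (n,)

# On an all-ones input, the whole chain is just a Range/arange.
assert np.array_equal(idx, np.arange(n))
```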
Are there plans to support NonZero?
+1, same problem here. Need NonZero support.
Really need this operation! Any progress with this issue?
Bump. Right now, no model from the TF Object Detection API can be loaded because of this node.
Really need this operation!
This is a problem for me as well.
This occurs in TensorFlow when using tf.where with only the condition argument specified (i.e. x=None and y=None). Even though Where is a supported op, tf2onnx emits a NonZero op when tf.where is called with only the condition argument.
Here is a reproducible example for building an onnx model:
import tensorflow as tf
import tf2onnx

with tf.Session() as sess:
    # Build model (tf.where expects a boolean condition)
    x = tf.convert_to_tensor([True, False, False, True], name='input')
    x = tf.where(x, name='output')  # problematic layer

    # Create graph def
    graph_def = tf.get_default_graph().as_graph_def()
    output_graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ["output"])

# Convert to onnx and export
with tf.Graph().as_default() as graph:
    tf.import_graph_def(output_graph_def, name="")
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(graph, opset=11, input_names=["input:0"], output_names=["output:0"])
    model_proto = onnx_graph.make_model("sample")
    with open("sample.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
The error can then be reproduced with trtexec or onnx2trt (e.g. trtexec --onnx=sample.onnx).
Really need this operation, too!
Same issue. I wonder, are there any practical reasons this op should not be included in TensorRT?
Same issue. The ONNX model was generated under the following environment: TensorRT 7.2.1, onnx 1.6, opset 11, PyTorch 1.5, torchvision 0.6.
Any update would be appreciated.
We currently do not support the NonZero operator, which is why you are seeing this error. We have plans to support this in a future release.
Hi @kevinch-nv,
I need this operator as well, but since TensorRT requires statically shaped outputs, how will you implement it?

@kevinch-nv Is there update with this?
What about x > 0 or x < 0?
Is there an alternative way to achieve the same result as this op?
Stuck with the same problem...
Hi everyone, I am facing the same problem with my model. I am in the situation described by @gabrielibagon: the tf.where operators are converted to NonZero operators by ONNX. Have you found any way to work around the issue?
On my side, I have tried something with the ONNX API in my Python code: I iterate over the nodes of the graph, and if a node is of type NonZero, I change it to the Where operator, which is supported by TensorRT:
for node in onnxModel.graph.node:
    if "NonZero" in node.op_type:
        node.op_type = "Where"
Then, when I try to re-export to TensorRT, I get a new error: "Invalid Node: <node_name>" "vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)"
I assume the format of the input is a problem for the Where operator. It's only one possibility to explore; maybe an intermediate operator like Equal could be added to produce a valid input for the condition field.
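For what it's worth, renaming the node type alone probably cannot work, because the two ops differ in both arity and semantics: ONNX NonZero takes one input and returns data-dependent index coordinates, while ONNX Where takes three inputs (condition, X, Y) and does an element-wise select. The renamed node ends up as a Where with a single input, which matches the range_check error above. A small numpy sketch of both semantics makes the mismatch concrete:

```python
import numpy as np

x = np.array([1, 0, 0, 1])

# ONNX NonZero: ONE input; returns the coordinates of non-zero elements,
# shaped (rank, num_nonzero) -- the output shape depends on the data.
nonzero_out = np.stack(np.nonzero(x))                      # [[0 3]]

# ONNX Where: THREE inputs (condition, X, Y); element-wise selection with
# the same shape as the inputs -- the output shape is data-independent.
where_out = np.where(x.astype(bool), x, np.zeros_like(x))  # [1 0 0 1]
```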
Another way is to develop a plugin, but in the examples I have seen on the web, the IPluginCreator object is implemented in C++ and the Python API is then used to create a plugin from it. @kevinch-nv Do you know if it is possible to do everything with the Python API? Something like this:
import tensorrt as trt
import numpy as np

# Extends the IPluginCreator class
class NonZeroCreator(trt.IPluginCreator):
    # Constructor
    def __init__(self, tensorrt_version=7, name="", plugin_version=1, field_names=[], plugin_namespace=""):
        self._tensorrt_version = tensorrt_version
        self._name = name
        self._plugin_version = plugin_version
        self._field_names = field_names
        self._plugin_namespace = plugin_namespace

# Return a NonZero plugin
def get_nonzero_plugin():
    # Instantiate a NonZeroCreator
    custom_plugin_creator = NonZeroCreator(name="NonZero", field_names=["X", "Y"])
    input_x_field = trt.PluginField("X", np.array([], dtype=np.float32), trt.PluginFieldType.FLOAT32)
    output_y_field = trt.PluginField("Y", np.array([], dtype=np.float32), trt.PluginFieldType.FLOAT32)
    custom_field_collection = trt.PluginFieldCollection([input_x_field, output_y_field])
    plugin = custom_plugin_creator.create_plugin(name="NonZero", field_collection=custom_field_collection)
    return plugin

custom_plugin = get_nonzero_plugin()
If you have any other suggestions, feel free to answer and share what you have tried.
Thanks in advance for your help.
+1 Same problem. Is there any workaround?
+1 Waiting for a workaround... Or do we have no choice but to change the network architecture?
+1 Looking forward to this feature.
I believe that trying to get TensorRT to use a plugin that implements NonZero exactly as described in the ONNX specification (here) is impossible with current TensorRT, because the shape of the output tensor is dependent on the input data (specifically how many non-zero values are passed in) and I believe none of the plugin base classes support this kind of dynamism right now.
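If someone does go the plugin route once the API allows it, the usual trick for ops with data-dependent output shapes is to fix the output size at a worst-case maximum and return the true count separately. This is only a sketch of that contract in numpy (nonzero_padded, max_count, and pad_value are hypothetical names, not TensorRT API):

```python
import numpy as np

def nonzero_padded(x, max_count, pad_value=-1):
    """Static-shape variant of NonZero: always returns a (rank, max_count)
    index tensor plus the true count, padding unused columns with pad_value."""
    idx = np.stack(np.nonzero(x))                 # (rank, k), k is data-dependent
    k = min(idx.shape[1], max_count)
    out = np.full((x.ndim, max_count), pad_value, dtype=np.int64)
    out[:, :k] = idx[:, :k]
    return out, k

indices, count = nonzero_padded(np.array([1, 0, 0, 1]), max_count=4)
# indices == [[0, 3, -1, -1]], count == 2
```

Downstream layers then have to mask out the padded columns themselves, but every tensor shape in the engine stays fixed.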
Is there any way to replace or work around NonZero? Help!
Any updates to NonZero operation?
Can the NonZero problem be solved soon?
Need this as well.
A workaround that worked for me was to change the source code of the model I was exporting. In my case, tf.where was causing the ONNX NonZero operation. This function can be called in two ways: with one argument or with three. The one-argument version results in an ONNX NonZero operation, but the three-argument version does not. Depending on your framework and model, this might entail substantial refactoring.
https://www.tensorflow.org/api_docs/python/tf/where
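To illustrate the refactor with a numpy analogue (the values here are illustrative): when downstream code only needs the masked values in place, the index-gathering one-argument pattern can often be replaced by a same-shape three-argument select, which avoids NonZero entirely:

```python
import numpy as np

x = np.array([3.0, -1.0, 2.0, -5.0])
mask = x > 0

# One-argument pattern (tf.where(mask) -> ONNX NonZero):
# gather by data-dependent indices, so the output shape varies with the data.
picked = x[np.nonzero(mask)[0]]                  # [3. 2.]

# Three-argument pattern (tf.where(mask, x, zeros) -> ONNX Where):
# element-wise select with a fixed output shape, which TensorRT can handle.
selected = np.where(mask, x, np.zeros_like(x))   # [3. 0. 2. 0.]
```

Whether the two forms are interchangeable depends on what consumes the result; they are only equivalent up to the zero-filled positions.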
I need this too. Trying to make a workaround for TensorRT to accept uint8 input.
I need this too...
+1 , need this operation too
@kevinch-nv Could you please release support for the NonZero operation soon?
+1, would be really nice to have! :)