bonnet
Issue in cnn_use_pb_tensorRT.py
After running the cnn_freeze script and getting the /tmp/frozen_model folder, I want to run the cnn_use_pb_tensorRT.py script, but I get this error message:
```
[TensorRT] ERROR: UFFParser: Validator error: test_model/model/decoder/upsample/unpool3/inv-res-3/inverted_residual/conv/out/LeakyRelu: Unsupported operation _LeakyRelu
[TensorRT] ERROR: Failed to parse UFF model stream
  File "/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils/__init__.py", line 255, in uff_to_trt_engine
    assert(parser.parse_buffer(stream, 0, network, model_datatype))
Traceback (most recent call last):
  File "/home/pedram/Desktop/bonnet-master/cnn_use_pb_tensorRT.py", line 251, in <module>
    DATA_TYPE)  # .HALF for fp16 in jetson!
  File "/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils/__init__.py", line 263, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 255 in statement assert(parser.parse_buffer(stream, 0, network, model_datatype))
```
Any ideas what is going wrong here?
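For what it's worth, one way to confirm which op types the frozen graph actually contains (and therefore which ones the UFF parser may reject) is to load the GraphDef and print its op set. This is only a diagnostic sketch; the frozen.pb filename is an assumption, so adjust it to whatever the freeze script actually wrote:

```python
# Diagnostic sketch: list the distinct op types in a frozen TensorFlow graph
# so unsupported ops such as LeakyRelu stand out before UFF conversion.
import tensorflow as tf

GRAPH_PATH = "/tmp/frozen_model/frozen.pb"  # assumed filename

graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())

print(sorted({node.op for node in graph_def.node}))
```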
P.S.: I am using:
- Ubuntu 16.04
- GPU: Nvidia 1050ti
- Nvidia driver version: 384.130
- CUDA: 9.0
- cuDNN: 7
- Python: 2.7
- TensorFlow version: 1.13.0rc
- TensorRT version: 5.0.2.6
According to this link, under your configuration you should probably be using CUDA 10.0.130 and TF 1.12.0-rc2. Unfortunately, as a single developer, I can't keep up with every possible combination of versions; that's why the Docker image is provided. I would downgrade to TF 1.12 and then try again. Let me know if this works!
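Once downgraded, a quick sanity check (just a sketch) before re-running the conversion:

```python
# Sanity-check sketch: confirm the installed versions match the recommended
# combination (TF 1.12.x, TensorRT 5.x) before re-running cnn_use_pb_tensorRT.py.
import tensorflow as tf
import tensorrt

print("TensorFlow:", tf.__version__)  # expecting 1.12.x after the downgrade
print("TensorRT:", tensorrt.__version__)
```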
Consider that TensorRT development is always a few steps behind TensorFlow in terms of operator support. So if the gain in speed is needed, some loss of operator/compatibility support is to be expected. I'm still waiting on quite a few operator implementations in TensorRT before more complicated models can be supported.
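If downgrading alone doesn't resolve it, one common workaround for unsupported activations is to express the leaky ReLU using primitives the UFF parser does handle, such as Mul and Maximum, and then re-freeze the graph. This is a sketch that assumes you can edit the model definition; the alpha value is a placeholder, not necessarily what bonnet uses:

```python
# Sketch: replace tf.nn.leaky_relu (exported as the unsupported _LeakyRelu op)
# with an equivalent composition of Mul and Maximum. For alpha in (0, 1),
# max(alpha * x, x) == leaky_relu(x). Requires re-freezing the graph afterwards.
import tensorflow as tf

def leaky_relu_uff_friendly(x, alpha=0.2):  # alpha is a placeholder value
    return tf.maximum(alpha * x, x)
```

Where exactly to swap this in depends on bonnet's architecture definition, so treat the function above as illustrative rather than a drop-in patch.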