
ConstantOfShape parse error

Open jlamperez opened this issue 2 years ago • 1 comment

Hi,

I am trying to build an engine file with the following command:

trtexec --onnx=model.onnx --saveEngine=model.plan

But I am not able to parse it; I get this error:

[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:742: input: "onnx::ConstantOfShape_205"
output: "onnx::Add_206"
name: "ConstantOfShape_27"
op_type: "ConstantOfShape"
attribute {
  name: "value"
  t {
    dims: 1
    data_type: 7
    name: ""
    raw_data: "\000\000\000\000\000\000\000\000"
  }
  type: TENSOR
}
domain: "ConstantOfShape_27"

[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:743: --- End node ---
[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:745: ERROR: parsers/onnx/ModelImporter.cpp:199 In function parseGraph:
[6] Invalid Node - ConstantOfShape_27

Here is the complete error log:

Error
trtexec --onnx=model.onnx --saveEngine=model.plan
&&&& RUNNING TensorRT.trtexec [TensorRT v8500] # trtexec --onnx=model.onnx --saveEngine=model.plan
[01/06/2023-21:11:12] [I] === Model Options ===
[01/06/2023-21:11:12] [I] Format: ONNX
[01/06/2023-21:11:12] [I] Model: model.onnx
[01/06/2023-21:11:12] [I] Output:
[01/06/2023-21:11:12] [I] === Build Options ===
[01/06/2023-21:11:12] [I] Max batch: explicit batch
[01/06/2023-21:11:12] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[01/06/2023-21:11:12] [I] minTiming: 1
[01/06/2023-21:11:12] [I] avgTiming: 8
[01/06/2023-21:11:12] [I] Precision: FP32
[01/06/2023-21:11:12] [I] LayerPrecisions: 
[01/06/2023-21:11:12] [I] Calibration: 
[01/06/2023-21:11:12] [I] Refit: Disabled
[01/06/2023-21:11:12] [I] Sparsity: Disabled
[01/06/2023-21:11:12] [I] Safe mode: Disabled
[01/06/2023-21:11:12] [I] DirectIO mode: Disabled
[01/06/2023-21:11:12] [I] Restricted mode: Disabled
[01/06/2023-21:11:12] [I] Build only: Disabled
[01/06/2023-21:11:12] [I] Save engine: model.plan
[01/06/2023-21:11:12] [I] Load engine: 
[01/06/2023-21:11:12] [I] Profiling verbosity: 0
[01/06/2023-21:11:12] [I] Tactic sources: Using default tactic sources
[01/06/2023-21:11:12] [I] timingCacheMode: local
[01/06/2023-21:11:12] [I] timingCacheFile: 
[01/06/2023-21:11:12] [I] Heuristic: Disabled
[01/06/2023-21:11:12] [I] Preview Features: Use default preview flags.
[01/06/2023-21:11:12] [I] Input(s)s format: fp32:CHW
[01/06/2023-21:11:12] [I] Output(s)s format: fp32:CHW
[01/06/2023-21:11:12] [I] Input build shapes: model
[01/06/2023-21:11:12] [I] Input calibration shapes: model
[01/06/2023-21:11:12] [I] === System Options ===
[01/06/2023-21:11:12] [I] Device: 0
[01/06/2023-21:11:12] [I] DLACore: 
[01/06/2023-21:11:12] [I] Plugins:
[01/06/2023-21:11:12] [I] === Inference Options ===
[01/06/2023-21:11:12] [I] Batch: Explicit
[01/06/2023-21:11:12] [I] Input inference shapes: model
[01/06/2023-21:11:12] [I] Iterations: 10
[01/06/2023-21:11:12] [I] Duration: 3s (+ 200ms warm up)
[01/06/2023-21:11:12] [I] Sleep time: 0ms
[01/06/2023-21:11:12] [I] Idle time: 0ms
[01/06/2023-21:11:12] [I] Streams: 1
[01/06/2023-21:11:12] [I] ExposeDMA: Disabled
[01/06/2023-21:11:12] [I] Data transfers: Enabled
[01/06/2023-21:11:12] [I] Spin-wait: Disabled
[01/06/2023-21:11:12] [I] Multithreading: Disabled
[01/06/2023-21:11:12] [I] CUDA Graph: Disabled
[01/06/2023-21:11:12] [I] Separate profiling: Disabled
[01/06/2023-21:11:12] [I] Time Deserialize: Disabled
[01/06/2023-21:11:12] [I] Time Refit: Disabled
[01/06/2023-21:11:12] [I] NVTX verbosity: 0
[01/06/2023-21:11:12] [I] Persistent Cache Ratio: 0
[01/06/2023-21:11:12] [I] Inputs:
[01/06/2023-21:11:12] [I] === Reporting Options ===
[01/06/2023-21:11:12] [I] Verbose: Disabled
[01/06/2023-21:11:12] [I] Averages: 10 inferences
[01/06/2023-21:11:12] [I] Percentiles: 90,95,99
[01/06/2023-21:11:12] [I] Dump refittable layers:Disabled
[01/06/2023-21:11:12] [I] Dump output: Disabled
[01/06/2023-21:11:12] [I] Profile: Disabled
[01/06/2023-21:11:12] [I] Export timing to JSON file: 
[01/06/2023-21:11:12] [I] Export output to JSON file: 
[01/06/2023-21:11:12] [I] Export profile to JSON file: 
[01/06/2023-21:11:12] [I] 
[01/06/2023-21:11:12] [I] === Device Information ===
[01/06/2023-21:11:12] [I] Selected Device: NVIDIA GeForce RTX 2080 Ti
[01/06/2023-21:11:12] [I] Compute Capability: 7.5
[01/06/2023-21:11:12] [I] SMs: 68
[01/06/2023-21:11:12] [I] Compute Clock Rate: 1.545 GHz
[01/06/2023-21:11:12] [I] Device Global Memory: 11016 MiB
[01/06/2023-21:11:12] [I] Shared Memory per SM: 64 KiB
[01/06/2023-21:11:12] [I] Memory Bus Width: 352 bits (ECC disabled)
[01/06/2023-21:11:12] [I] Memory Clock Rate: 7 GHz
[01/06/2023-21:11:12] [I] 
[01/06/2023-21:11:12] [I] TensorRT version: 8.5.0
[01/06/2023-21:11:12] [I] [TRT] [MemUsageChange] Init CUDA: CPU +304, GPU +0, now: CPU 317, GPU 1161 (MiB)
[01/06/2023-21:11:14] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +260, GPU +74, now: CPU 629, GPU 1235 (MiB)
[01/06/2023-21:11:14] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[01/06/2023-21:11:14] [I] Start parsing network model
[01/06/2023-21:11:14] [I] [TRT] ----------------------------------------------------------------
[01/06/2023-21:11:14] [I] [TRT] Input filename:   model.onnx
[01/06/2023-21:11:14] [I] [TRT] ONNX IR version:  0.0.8
[01/06/2023-21:11:14] [I] [TRT] Opset version:    13
[01/06/2023-21:11:14] [I] [TRT] Producer name:    pytorch
[01/06/2023-21:11:14] [I] [TRT] Producer version: 1.13.0
[01/06/2023-21:11:14] [I] [TRT] Domain:           
[01/06/2023-21:11:14] [I] [TRT] Model version:    0
[01/06/2023-21:11:14] [I] [TRT] Doc string:       
[01/06/2023-21:11:14] [I] [TRT] ----------------------------------------------------------------
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[01/06/2023-21:11:14] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[01/06/2023-21:11:14] [E] Error[2]: [shapeContext.cpp::setShapeInterval::427] Error Code 2: Internal Error (Assertion success failed. intervals already set for the shape)
[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:740: While parsing node number 12 [ConstantOfShape -> "onnx::Add_206"]:
[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:741: --- Begin node ---
[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:742: input: "onnx::ConstantOfShape_205"
output: "onnx::Add_206"
name: "ConstantOfShape_27"
op_type: "ConstantOfShape"
attribute {
  name: "value"
  t {
    dims: 1
    data_type: 7
    name: ""
    raw_data: "\000\000\000\000\000\000\000\000"
  }
  type: TENSOR
}
domain: "ConstantOfShape_27"

[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:743: --- End node ---
[01/06/2023-21:11:14] [E] [TRT] parsers/onnx/ModelImporter.cpp:745: ERROR: parsers/onnx/ModelImporter.cpp:199 In function parseGraph:
[6] Invalid Node - ConstantOfShape_27
[shapeContext.cpp::setShapeInterval::427] Error Code 2: Internal Error (Assertion success failed. intervals already set for the shape)
[01/06/2023-21:11:14] [E] Failed to parse onnx file
[01/06/2023-21:11:14] [I] Finish parsing network model
[01/06/2023-21:11:14] [E] Parsing model failed
[01/06/2023-21:11:14] [E] Failed to create engine from model or file.
[01/06/2023-21:11:14] [E] Engine set up failed

Is the ConstantOfShape operator not supported here?

These are the TensorRT libraries that I am using:

ii  libnvinfer-bin                         8.5.0-1+cuda11.8                  amd64        TensorRT binaries
ii  libnvinfer-dev                         8.5.0-1+cuda11.8                  amd64        TensorRT development libraries and headers
ii  libnvinfer-plugin-dev                  8.5.0-1+cuda11.8                  amd64        TensorRT plugin libraries and headers
ii  libnvinfer-plugin8                     8.5.0-1+cuda11.8                  amd64        TensorRT plugin library
ii  libnvinfer8                            8.5.0-1+cuda11.8                  amd64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                   8.5.0-1+cuda11.8                  amd64        TensorRT ONNX libraries
ii  libnvonnxparsers8                      8.5.0-1+cuda11.8                  amd64        TensorRT ONNX libraries
ii  libnvparsers-dev                       8.5.0-1+cuda11.8                  amd64        TensorRT parsers libraries
ii  libnvparsers8                          8.5.0-1+cuda11.8                  amd64        TensorRT parsers libraries
ii  tensorrt-dev                           8.5.0.12-1+cuda11.8               amd64        Meta package for TensorRT development libraries

Thanks!

jlamperez avatar Jan 06 '23 22:01 jlamperez


Is this ConstantOfShape operator not supported?

I understand that it is using INT64, which is not a supported type for ConstantOfShape.

jlamperez avatar Jan 07 '23 09:01 jlamperez