Could not find any implementation for node /network.0/network.0.0/norm1/LayerNormalization.

demuxin opened this issue 1 year ago • 5 comments

Description

I customized TensorRT's Col2Im plugin, recompiled the TensorRT 8.5 source code, and generated a new nvinfer_plugin library. This is the LayerNormalization node information in the model:

[screenshot of the LayerNormalization node]

So I modified the file plugin/layerNormPlugin/layerNormPlugin.cpp:

- static std::string const kLAYER_NORM_PLUGIN_NAME{"LayerNorm"};
+ static std::string const kLAYER_NORM_PLUGIN_NAME{"LayerNormalization"};
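
As a side note for anyone debugging this (not part of the original report): a quick sanity check is to confirm the renamed creator is actually visible in the plugin registry before building. The library path below is a placeholder, and the "1" version string is assumed to match the plugin's kLAYER_NORM_PLUGIN_VERSION.

import ctypes
import tensorrt as trt

# Load the rebuilt plugin library so its creators register themselves;
# point this at your recompiled libnvinfer_plugin.
ctypes.CDLL("/path/to/libnvinfer_plugin.so.8", mode=ctypes.RTLD_GLOBAL)

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

registry = trt.get_plugin_registry()
creator = registry.get_plugin_creator("LayerNormalization", "1", "")
print("creator found" if creator is not None else "creator NOT found")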

When I started building the model engine, an error message appeared:

WARN:  Skipping tactic 0x0000000000000000 due to exception Assertion status == kSTATUS_SUCCESS failed. 
WARN:  Skipping tactic 0x0000000000000000 due to exception Assertion status == kSTATUS_SUCCESS failed. 
ERROR:  10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node /network.0/network.0.0/norm1/LayerNormalization.)
ERROR:  2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

Environment

TensorRT Version: 8.5.3.1

NVIDIA GPU: GeForce GTX 1080 Ti

NVIDIA Driver Version: 535.146.02

CUDA Version: 11.8

CUDNN Version: 8.6.0

Operating System: Ubuntu 18.04

What is the cause of this problem, and how can I solve it?

demuxin · Jan 18 '24

I tried opset_version=16 for my problem and it works.
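
For reference, a minimal export sketch along those lines (the toy model and file names are placeholders, not from this thread):

import torch
import torch.nn as nn

# Toy stand-in model containing LayerNorm; substitute your real network.
model = nn.Sequential(nn.Linear(64, 64), nn.LayerNorm(64)).eval()
dummy = torch.randn(1, 64)

# With opset_version=16 the exporter cannot emit a single LayerNormalization
# node (that op only exists from opset 17 on), so nn.LayerNorm is decomposed
# into ReduceMean/Sub/Pow/Sqrt/Div primitives that TensorRT 8.5 parses natively.
torch.onnx.export(
    model,
    dummy,
    "model_opset16.onnx",
    opset_version=16,
    input_names=["input"],
    output_names=["outputs"],
)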

SunHaoOne · Jan 19 '24

Could you please try the latest 9.2? IIRC we added support for opset 17 in TRT 8.6.

Download from:

  • https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/9.2.0/tensorrt-9.2.0.5.linux.x86_64-gnu.cuda-11.8.tar.gz
  • https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/9.2.0/tensorrt-9.2.0.5.linux.x86_64-gnu.cuda-12.2.tar.gz
  • https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/9.2.0/tensorrt-9.2.0.5.ubuntu-22.04.aarch64-gnu.cuda-12.2.tar.gz

zerollzeng · Jan 19 '24

Hi @zerollzeng, thank you for your reply. Because I need the Col2Im operator, I have to export the ONNX model using opset 18.

Is there any solution without changing the TensorRT version and opset?

demuxin · Jan 22 '24

LayerNorm as a single layer is supported only with opset >= 17 and TensorRT >= 8.6.1.

So you must use either:

  • opset >= 17 and TensorRT >= 8.6.1, or
  • opset < 17, with the layers that make up LayerNorm set to FP32 yourself (a rough sketch of one way to do this follows below).
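
A sketch of the second option, assuming the model was exported with opset 16 so LayerNorm appears as decomposed elementwise/reduce layers; the file name and the name-substring match are assumptions, not something from this thread:

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model_opset16.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Per-layer precision settings are only honored with this flag.
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

# Pin the decomposed LayerNorm layers to FP32. Matching by name substring
# depends on how the exporter named them; adjust to your model.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if "norm" in layer.name.lower():
        layer.precision = trt.float32
        for j in range(layer.num_outputs):
            layer.set_output_type(j, trt.float32)

engine_bytes = builder.build_serialized_network(network, config)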

tp-nan · Jan 22 '24

With TensorRT 9.2 and opset 18, the LayerNormalization problem no longer occurs. But there is an error with the Col2Im plugin.

This error does not occur on TensorRT 8.5. Could you please give me some tips and suggestions?

error: parsers/onnx/ModelImporter.cpp:891: While parsing node number 29 [Col2Im -> "/network.0/network.0.0/attn/Col2Im_output_0"]:
error: parsers/onnx/ModelImporter.cpp:892: --- Begin node ---
input: "/network.0/network.0.0/attn/Reshape_2_output_0"
input: "onnx::Col2Im_386"
input: "onnx::Col2Im_261"
output: "/network.0/network.0.0/attn/Col2Im_output_0"
name: "/network.0/network.0.0/attn/Col2Im"
op_type: "Col2Im"
attribute {
  name: "dilations"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "pads"
  ints: 1
  ints: 1
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "strides"
  ints: 2
  ints: 2
  type: INTS
}

error: parsers/onnx/ModelImporter.cpp:893: --- End node ---
error: parsers/onnx/ModelImporter.cpp:895: ERROR: parsers/onnx/builtin_op_static_checkers.cpp:802 In function checkCol2Im:
[8] false
error: parsers/onnx/ModelImporter.cpp:891: While parsing node number 66 [Col2Im -> "/network.0/network.0.1/attn/Col2Im_output_0"]:
error: parsers/onnx/ModelImporter.cpp:892: --- Begin node ---
input: "/network.0/network.0.1/attn/Reshape_2_output_0"
input: "onnx::Col2Im_386"
input: "onnx::Col2Im_261"
output: "/network.0/network.0.1/attn/Col2Im_output_0"
name: "/network.0/network.0.1/attn/Col2Im"
op_type: "Col2Im"
attribute {
  name: "dilations"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "pads"
  ints: 1
  ints: 1
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "strides"
  ints: 2
  ints: 2
  type: INTS
}

error: parsers/onnx/ModelImporter.cpp:893: --- End node ---
error: parsers/onnx/ModelImporter.cpp:895: ERROR: parsers/onnx/builtin_op_static_checkers.cpp:802 In function checkCol2Im:
[8] false
error: parsers/onnx/ModelImporter.cpp:891: While parsing node number 103 [Col2Im -> "/network.0/network.0.2/attn/Col2Im_output_0"]:
error: parsers/onnx/ModelImporter.cpp:892: --- Begin node ---
input: "/network.0/network.0.2/attn/Reshape_2_output_0"
input: "onnx::Col2Im_386"
input: "onnx::Col2Im_261"
output: "/network.0/network.0.2/attn/Col2Im_output_0"
name: "/network.0/network.0.2/attn/Col2Im"
op_type: "Col2Im"
attribute {
  name: "dilations"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "pads"
  ints: 1
  ints: 1
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "strides"
  ints: 2
  ints: 2
  type: INTS
}

error: parsers/onnx/ModelImporter.cpp:893: --- End node ---
error: parsers/onnx/ModelImporter.cpp:895: ERROR: parsers/onnx/builtin_op_static_checkers.cpp:802 In function checkCol2Im:
[8] false
error: parsers/onnx/ModelImporter.cpp:891: While parsing node number 644 [Add -> "outputs"]:
error: parsers/onnx/ModelImporter.cpp:892: --- Begin node ---
input: "/head/Gemm_output_0"
input: "/Mul_1_output_0"
output: "outputs"
name: "/Add_1"
op_type: "Add"

error: parsers/onnx/ModelImporter.cpp:893: --- End node ---
error: parsers/onnx/ModelImporter.cpp:895: ERROR: parsers/onnx/builtin_op_static_checkers.cpp:802 In function checkCol2Im:
[8] false

This is the Col2Im node information in the model:

[screenshot of the Col2Im node]
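
The parser error above comes from a static check (checkCol2Im) but does not say which condition failed. Purely as a guess, not something confirmed in this thread, one thing worth ruling out is whether the image_shape/block_shape inputs (onnx::Col2Im_386 and onnx::Col2Im_261) are constants in the graph; the file name below is a placeholder:

import onnx

model = onnx.load("model_opset18.onnx")  # placeholder path
graph = model.graph
initializers = {init.name for init in graph.initializer}
constants = {n.output[0] for n in graph.node if n.op_type == "Constant"}

for node in graph.node:
    if node.op_type != "Col2Im":
        continue
    print(node.name)
    # Inputs 1 and 2 are image_shape and block_shape.
    for idx in (1, 2):
        name = node.input[idx]
        kind = ("initializer" if name in initializers
                else "Constant output" if name in constants
                else "produced by another node (dynamic)")
        print(f"  input[{idx}] = {name}: {kind}")

If those inputs turn out to be dynamic, constant-folding the graph before parsing (for example with onnxsim, or polygraphy surgeon sanitize --fold-constants) might be worth a try.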

demuxin · Jan 22 '24

@demuxin, did you resolve this problem?

Liupei1101 · Mar 22 '24

Unfortunately, no; I've been busy with other things recently.

demuxin · Mar 22 '24