yolov3-tiny-onnx-TensorRT

Error with new node ‘Upsample’

KaggleAlbertaAI opened this issue 6 years ago • 2 comments

Hi, it looks like you have done some work to support the new Upsample node in the code, but after converting I get the error about Upsample shown below. Have you run into a problem like this? Thanks, I look forward to your reply!

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-tiny-416 (
  %000_net[FLOAT, 1x3x416x416]
) initializers (
  %001_convolutional_bn_scale[FLOAT, 16]
  %001_convolutional_bn_bias[FLOAT, 16]
  %001_convolutional_bn_mean[FLOAT, 16]
  %001_convolutional_bn_var[FLOAT, 16]
  %001_convolutional_conv_weights[FLOAT, 16x3x3x3]
  %003_convolutional_bn_scale[FLOAT, 32]
  %003_convolutional_bn_bias[FLOAT, 32]
  %003_convolutional_bn_mean[FLOAT, 32]
  %003_convolutional_bn_var[FLOAT, 32]
  %003_convolutional_conv_weights[FLOAT, 32x16x3x3]
  %005_convolutional_bn_scale[FLOAT, 64]
  %005_convolutional_bn_bias[FLOAT, 64]
  %005_convolutional_bn_mean[FLOAT, 64]
  %005_convolutional_bn_var[FLOAT, 64]
  %005_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %007_convolutional_bn_scale[FLOAT, 128]
  %007_convolutional_bn_bias[FLOAT, 128]
  %007_convolutional_bn_mean[FLOAT, 128]
  %007_convolutional_bn_var[FLOAT, 128]
  %007_convolutional_conv_weights[FLOAT, 128x64x3x3]
  %009_convolutional_bn_scale[FLOAT, 256]
  %009_convolutional_bn_bias[FLOAT, 256]
  %009_convolutional_bn_mean[FLOAT, 256]
  %009_convolutional_bn_var[FLOAT, 256]
  %009_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %011_convolutional_bn_scale[FLOAT, 512]
  %011_convolutional_bn_bias[FLOAT, 512]
  %011_convolutional_bn_mean[FLOAT, 512]
  %011_convolutional_bn_var[FLOAT, 512]
  %011_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %013_convolutional_bn_scale[FLOAT, 1024]
  %013_convolutional_bn_bias[FLOAT, 1024]
  %013_convolutional_bn_mean[FLOAT, 1024]
  %013_convolutional_bn_var[FLOAT, 1024]
  %013_convolutional_conv_weights[FLOAT, 1024x512x3x3]
  %014_convolutional_bn_scale[FLOAT, 256]
  %014_convolutional_bn_bias[FLOAT, 256]
  %014_convolutional_bn_mean[FLOAT, 256]
  %014_convolutional_bn_var[FLOAT, 256]
  %014_convolutional_conv_weights[FLOAT, 256x1024x1x1]
  %015_convolutional_bn_scale[FLOAT, 512]
  %015_convolutional_bn_bias[FLOAT, 512]
  %015_convolutional_bn_mean[FLOAT, 512]
  %015_convolutional_bn_var[FLOAT, 512]
  %015_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %016_convolutional_conv_bias[FLOAT, 21]
  %016_convolutional_conv_weights[FLOAT, 21x512x1x1]
  %019_convolutional_bn_scale[FLOAT, 128]
  %019_convolutional_bn_bias[FLOAT, 128]
  %019_convolutional_bn_mean[FLOAT, 128]
  %019_convolutional_bn_var[FLOAT, 128]
  %019_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %020_upsample_scale[FLOAT, 4]
  %022_convolutional_bn_scale[FLOAT, 256]
  %022_convolutional_bn_bias[FLOAT, 256]
  %022_convolutional_bn_mean[FLOAT, 256]
  %022_convolutional_bn_var[FLOAT, 256]
  %022_convolutional_conv_weights[FLOAT, 256x384x3x3]
  %023_convolutional_conv_bias[FLOAT, 21]
  %023_convolutional_conv_weights[FLOAT, 21x256x1x1]
) {
  %001_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%000_net, %001_convolutional_conv_weights)
  %001_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%001_convolutional, %001_convolutional_bn_scale, %001_convolutional_bn_bias, %001_convolutional_bn_mean, %001_convolutional_bn_var)
  %001_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%001_convolutional_bn)
  %002_maxpool = MaxPool[auto_pad = u'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%001_convolutional_lrelu)
  %003_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%002_maxpool, %003_convolutional_conv_weights)
  %003_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%003_convolutional, %003_convolutional_bn_scale, %003_convolutional_bn_bias, %003_convolutional_bn_mean, %003_convolutional_bn_var)
  %003_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%003_convolutional_bn)
  %004_maxpool = MaxPool[auto_pad = u'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%003_convolutional_lrelu)
  %005_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%004_maxpool, %005_convolutional_conv_weights)
  %005_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%005_convolutional, %005_convolutional_bn_scale, %005_convolutional_bn_bias, %005_convolutional_bn_mean, %005_convolutional_bn_var)
  %005_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%005_convolutional_bn)
  %006_maxpool = MaxPool[auto_pad = u'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%005_convolutional_lrelu)
  %007_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%006_maxpool, %007_convolutional_conv_weights)
  %007_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%007_convolutional, %007_convolutional_bn_scale, %007_convolutional_bn_bias, %007_convolutional_bn_mean, %007_convolutional_bn_var)
  %007_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%007_convolutional_bn)
  %008_maxpool = MaxPool[auto_pad = u'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%007_convolutional_lrelu)
  %009_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%008_maxpool, %009_convolutional_conv_weights)
  %009_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%009_convolutional, %009_convolutional_bn_scale, %009_convolutional_bn_bias, %009_convolutional_bn_mean, %009_convolutional_bn_var)
  %009_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%009_convolutional_bn)
  %010_maxpool = MaxPool[auto_pad = u'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%009_convolutional_lrelu)
  %011_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%010_maxpool, %011_convolutional_conv_weights)
  %011_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%011_convolutional, %011_convolutional_bn_scale, %011_convolutional_bn_bias, %011_convolutional_bn_mean, %011_convolutional_bn_var)
  %011_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%011_convolutional_bn)
  %012_maxpool = MaxPool[auto_pad = u'SAME_UPPER', kernel_shape = [2, 2], strides = [1, 1]](%011_convolutional_lrelu)
  %013_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%012_maxpool, %013_convolutional_conv_weights)
  %013_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%013_convolutional, %013_convolutional_bn_scale, %013_convolutional_bn_bias, %013_convolutional_bn_mean, %013_convolutional_bn_var)
  %013_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%013_convolutional_bn)
  %014_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%013_convolutional_lrelu, %014_convolutional_conv_weights)
  %014_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%014_convolutional, %014_convolutional_bn_scale, %014_convolutional_bn_bias, %014_convolutional_bn_mean, %014_convolutional_bn_var)
  %014_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%014_convolutional_bn)
  %015_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%014_convolutional_lrelu, %015_convolutional_conv_weights)
  %015_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%015_convolutional, %015_convolutional_bn_scale, %015_convolutional_bn_bias, %015_convolutional_bn_mean, %015_convolutional_bn_var)
  %015_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%015_convolutional_bn)
  %016_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%015_convolutional_lrelu, %016_convolutional_conv_weights, %016_convolutional_conv_bias)
  %019_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%014_convolutional_lrelu, %019_convolutional_conv_weights)
  %019_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%019_convolutional, %019_convolutional_bn_scale, %019_convolutional_bn_bias, %019_convolutional_bn_mean, %019_convolutional_bn_var)
  %019_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%019_convolutional_bn)
  %020_upsample = Upsample[mode = u'nearest'](%019_convolutional_lrelu, %020_upsample_scale)
  %021_route = Concat[axis = 1](%020_upsample, %009_convolutional_lrelu)
  %022_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%021_route, %022_convolutional_conv_weights)
  %022_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%022_convolutional, %022_convolutional_bn_scale, %022_convolutional_bn_bias, %022_convolutional_bn_mean, %022_convolutional_bn_var)
  %022_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%022_convolutional_bn)
  %023_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%022_convolutional_lrelu, %023_convolutional_conv_weights, %023_convolutional_conv_bias)
  return %016_convolutional, %023_convolutional
}

Traceback (most recent call last):
  File "/home/jxiao/github/yolov3-tiny-onnx-TensorRT-master/yolov3_to_onnx.py", line 830, in <module>
    main()
  File "/home/jxiao/github/yolov3-tiny-onnx-TensorRT-master/yolov3_to_onnx.py", line 823, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/root/anaconda3/envs/py2/lib/python2.7/site-packages/onnx/checker.py", line 82, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Input size 2 not in range [min=1, max=1].

==> Context: Bad node spec: input: "019_convolutional_lrelu" input: "020_upsample_scale" output: "020_upsample" name: "020_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

Process finished with exit code 1
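
For reference, the node the checker rejects is the opset-9-style Upsample, which passes the scales as a second input rather than as an attribute. A minimal sketch of building the same node with onnx.helper (names copied from the log above; the scale values are my assumption for the 2x spatial upsample in yolov3-tiny, purely illustrative):

from onnx import helper, TensorProto

# Opset-9-style Upsample: the scales arrive as a second input tensor.
scales = helper.make_tensor(
    name="020_upsample_scale",
    data_type=TensorProto.FLOAT,
    dims=[4],
    vals=[1.0, 1.0, 2.0, 2.0],  # N and C unchanged, H and W doubled (assumed)
)
node = helper.make_node(
    "Upsample",
    inputs=["019_convolutional_lrelu", "020_upsample_scale"],  # two inputs
    outputs=["020_upsample"],
    name="020_upsample",
    mode="nearest",
)
# onnx 1.2.1 only knows the older Upsample schema (exactly one input,
# scales given as an attribute), so its checker rejects this node with
# "Input size 2 not in range [min=1, max=1]".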

KaggleAlbertaAI avatar Dec 02 '19 02:12 KaggleAlbertaAI

OK, I fixed it by reinstalling 'onnx', upgrading from 1.2.1 to 1.4.1.
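
(That matches the schema issue sketched above: as far as I can tell, onnx 1.4.x adds opset 9, where Upsample legitimately takes the scales as a second input, so the checker no longer rejects the node. Upgrading with something like pip install onnx==1.4.1 should be all that's needed.)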

KaggleAlbertaAI avatar Dec 02 '19 02:12 KaggleAlbertaAI

Maybe open a PR to fix this in the requirements.txt?

fwarmuth avatar May 04 '20 06:05 fwarmuth
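
For anyone picking that up: assuming the repo tracks its Python dependencies in requirements.txt, the pin would presumably just be a version bound there, e.g.:

onnx==1.4.1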