frigate icon indicating copy to clipboard operation
frigate copied to clipboard

[Support]: Assistance getting tensorrt running

Open bughattiveyron opened this issue 1 year ago • 8 comments

Describe the problem you are having

Trying to understand the guide to get tensorrt setup,
https://deploy-preview-4055--frigate-docs.netlify.app/configuration/detectors/#nvidia-tensorrt-detector

I have run so far

mkdir trt-models
wget https://raw.githubusercontent.com/blakeblackshear/frigate/docker/tensorrt_models.sh
chmod +x tensorrt_models.sh
docker run --gpus=all --rm -it -v `pwd`/volume2/trtmodels:/tensorrt_models -v `pwd`/tensorrt_models.sh:/tensorrt_models.sh nvcr.io/nvidia/tensorrt:22.07-py3 /tensorrt_models.sh

What I am confused about is how to configure the docker run command. This is what I am currently using

docker run -d  --gpus "device=1" --name frigate_tensor   --restart=unless-stopped   --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000   --device /dev/bus/usb:/dev/bus/usb   --shm-size=256m   -v /volume2/frigate/_data:/media/frigate   -v /var/lib/docker/volumes/frigate_config/_data/config.yml:/config/config.yml:ro   -v /etc/localtime:/etc/localtime:ro   -e FRIGATE_RTSP_PASSWORD='secure'   -p 5000:5000   -p 1935:1935   ghcr.io/blakeblackshear/frigate:dev-edbdbb7-tensorrt

Also at the end of generating the models, it spit this out Serialized the TensorRT engine to file: yolov7-tiny-416.trt Is this what I need to set with -e

Version

0.12.0-EDBDBB7

Frigate config file

mqtt:
  host: host
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user: liquidmosq
  password: secure

birdseye:
  enabled: True
  width: 1920
  height: 1080
  quality: 8
  mode: objects

cameras:
  Front_Garage:
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -ar 44100 -c:a aac
        rtmp: -c:v copy -f flv -ar 44100 -c:a aac
      inputs:
        - path: camera
          roles:
            - detect
            - rtmp
      hwaccel_args: preset-nvidia-h264
    objects:
      track:
        - person
        - car
        - truck
        - bicycle
        - motorcycle
        - dog
        - cat
    snapshots:
      enabled: True
    record:
      enabled: True
      retain:
        days: 90
        mode: motion
      events:
        retain:
          default: 90
          mode: active_objects
    motion:
      mask:
        - 0,151,1280,188,1280,0,0,0
        - 680,440,997,720,594,720,388,720,388,451
  Back_Alley:
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -ar 44100 -c:a aac
        rtmp: -c:v copy -f flv -ar 44100 -c:a aac
      inputs:
        - path: camera
          roles:
            - detect
            - rtmp
      hwaccel_args: preset-nvidia-h264
    objects:
      track:
        - person
        - car
        - truck
        - bicycle
        - motorcycle
        - dog
        - cat
    snapshots:
      enabled: True
    record:
      enabled: True
      retain:
        days: 90
        mode: motion
      events:
        retain:
          default: 90
          mode: active_objects
    motion:
      mask:
        - 804,720,815,518,733,179,480,0,270,0,114,127,0,253,0,720
        - 1109,215,948,186,942,275,1045,420,1186,430,1212,284
        - 708,168,1093,165,1182,152,1159,112,599,90

Relevant log output

2023-01-26 23:07:15.301969183  [2023-01-26 23:07:15] frigate.app                    INFO    : Starting Frigate (0.12.0-edbdbb7)
2023-01-26 23:07:15.332469612  [2023-01-26 23:07:15] peewee_migrate                 INFO    : Starting migrations
2023-01-26 23:07:15.361975295  [2023-01-26 23:07:15] peewee_migrate                 INFO    : There is nothing to migrate
2023-01-26 23:07:15.372612370  [2023-01-26 23:07:15] ws4py                          INFO    : Using epoll
2023-01-26 23:07:15.393944948  [2023-01-26 23:07:15] detector.cpu                   INFO    : Starting detection process: 276
2023-01-26 23:07:15.394132605  [2023-01-26 23:07:15] frigate.detectors              WARNING : CPU detectors are not recommended and should only be used for testing or for trial purposes.
2023-01-26 23:07:15.395090557  [2023-01-26 23:07:15] frigate.app                    INFO    : Output process started: 278
2023-01-26 23:07:15.402318978  [2023-01-26 23:07:15] frigate.app                    INFO    : Camera processor started for Front_Garage: 280
2023-01-26 23:07:15.406954071  [2023-01-26 23:07:15] ws4py                          INFO    : Using epoll
2023-01-26 23:07:15.414546020  [2023-01-26 23:07:15] frigate.app                    INFO    : Camera processor started for Back_Alley: 283
2023-01-26 23:07:15.414985691  [2023-01-26 23:07:15] frigate.app                    INFO    : Capture process started for Front_Garage: 284
2023-01-26 23:07:15.426444265  [2023-01-26 23:07:15] frigate.app                    INFO    : Capture process started for Back_Alley: 288
2023-01-26 23:07:18.769517201  [2023-01-26 23:07:18] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33414]
2023-01-26 23:07:21.118319538  [2023-01-26 23:07:21] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33496]
2023-01-26 23:07:22.725279126  [2023-01-26 23:07:22] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33414]
2023-01-26 23:07:22.924929596  [2023-01-26 23:07:22] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:42436]
2023-01-26 23:07:22.981325871  [2023-01-26 23:07:22] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33512]
2023-01-26 23:07:35.422142903  [2023-01-26 23:07:35] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33496]
2023-01-26 23:07:35.632975092  [2023-01-26 23:07:35] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:50078]
2023-01-26 23:07:35.707798744  [2023-01-26 23:07:35] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:58304]
2023-01-26 23:08:12.328452823  [2023-01-26 23:08:12] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:58304]
2023-01-26 23:08:13.329555926  [2023-01-26 23:08:13] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:36890]
2023-01-26 23:08:14.507939565  [2023-01-26 23:08:14] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33512]
2023-01-26 23:08:15.385035791  [2023-01-26 23:08:15] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:36892]
2023-01-26 23:09:58.741386143  [2023-01-26 23:09:58] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:36890]
2023-01-26 23:09:58.883393180  [2023-01-26 23:09:58] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:37886]
2023-01-26 23:10:11.242707040  [2023-01-26 23:10:11] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:37886]
2023-01-26 23:10:11.387195875  [2023-01-26 23:10:11] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:53804]
2023-01-26 23:10:23.602505028  [2023-01-26 23:10:23] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:53804]
2023-01-26 23:11:00.699716790  [2023-01-26 23:11:00] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:56522]
2023-01-26 23:11:18.025176141  [2023-01-26 23:11:18] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:56522]
2023-01-26 23:11:18.144025838  [2023-01-26 23:11:18] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33658]
2023-01-26 23:11:46.616205763  [2023-01-26 23:11:46] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:33658]
2023-01-27 01:49:42.745289515  [2023-01-27 01:49:42] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:42436]
2023-01-27 01:49:42.745298156  [2023-01-27 01:49:42] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:50078]
2023-01-27 07:37:06.676913878  [2023-01-27 07:37:06] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:55122]
2023-01-27 07:37:07.835753205  [2023-01-27 07:37:07] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:55122]
2023-01-27 16:16:26.326150316  [2023-01-27 16:16:26] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:36892]

FFprobe output from your camera

"[\n  {\n    \"return_code\": 0,\n    \"stderr\": {},\n    \"stdout\": {\n      \"programs\": [],\n      \"streams\": [\n        {\n          \"avg_frame_rate\": \"0/0\",\n          \"codec_long_name\": \"AAC (Advanced Audio Coding)\"\n        },\n        {\n          \"avg_frame_rate\": \"25/1\",\n          \"codec_long_name\": \"H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10\",\n          \"display_aspect_ratio\": \"16:9\",\n          \"height\": 1080,\n          \"width\": 1920\n        }\n      ]\n    }\n  }\n]"

Frigate stats

{"Back_Alley":{"camera_fps":5.1,"capture_pid":288,"detection_enabled":1,"detection_fps":0.1,"ffmpeg_pid":294,"pid":283,"process_fps":5.1,"skipped_fps":0.0},"Front_Garage":{"camera_fps":5.0,"capture_pid":284,"detection_enabled":1,"detection_fps":0.0,"ffmpeg_pid":290,"pid":280,"process_fps":5.0,"skipped_fps":0.0},"cpu_usages":{"%Cpu(s):":{"cpu":"id,","mem":"0.1"},"1":{"cpu":"0.0","mem":"0.0"},"103":{"cpu":"0.0","mem":"0.1"},"122":{"cpu":"0.0","mem":"0.0"},"123":{"cpu":"0.0","mem":"0.1"},"124":{"cpu":"0.0","mem":"0.0"},"125":{"cpu":"0.0","mem":"0.4"},"15":{"cpu":"0.0","mem":"0.0"},"16":{"cpu":"0.0","mem":"0.0"},"24":{"cpu":"0.0","mem":"0.0"},"25":{"cpu":"0.0","mem":"0.0"},"26":{"cpu":"0.0","mem":"0.0"},"269":{"cpu":"0.0","mem":"1.8"},"27":{"cpu":"0.0","mem":"0.0"},"275":{"cpu":"0.3","mem":"0.1"},"276":{"cpu":"57.0","mem":"2.0"},"278":{"cpu":"1.0","mem":"2.0"},"28":{"cpu":"0.0","mem":"0.0"},"280":{"cpu":"1.3","mem":"2.1"},"283":{"cpu":"0.7","mem":"2.1"},"284":{"cpu":"1.7","mem":"1.9"},"286":{"cpu":"0.0","mem":"0.4"},"288":{"cpu":"1.7","mem":"1.9"},"29":{"cpu":"0.0","mem":"0.0"},"290":{"cpu":"7.3","mem":"2.1"},"293":{"cpu":"0.0","mem":"0.4"},"294":{"cpu":"4.0","mem":"2.1"},"297":{"cpu":"0.0","mem":"0.7"},"30":{"cpu":"0.0","mem":"0.0"},"31":{"cpu":"0.0","mem":"0.0"},"40":{"cpu":"0.0","mem":"0.0"},"41":{"cpu":"0.0","mem":"0.0"},"77":{"cpu":"0.0","mem":"0.0"},"78":{"cpu":"0.0","mem":"0.0"},"79":{"cpu":"0.0","mem":"0.0"},"86":{"cpu":"0.0","mem":"0.0"},"90669":{"cpu":"0.0","mem":"0.0"},"90674":{"cpu":"0.0","mem":"0.0"},"96":{"cpu":"4.0","mem":"4.4"},"MiB":{"cpu":"5870.2","mem":"avail"},"PID":{"cpu":"%CPU","mem":"%MEM"},"Tasks:":{"cpu":"stopped,","mem":"0"},"top":{"cpu":"0","mem":"users,"}},"detection_fps":0.1,"detectors":{"cpu":{"detection_start":0.0,"inference_speed":92.61,"pid":276}},"gpu_usages":{"NVIDIA GeForce GTX 1080":{"gpu":"1 %","mem":"5.7 %"}},"service":{"latest_version":"0.11.1","storage":{"/dev/shm":{"free":255.4,"mount_type":"tmpfs","total":268.4,"used":13.1},"/media/frigate/clips":{"free":12461431.1,"mount_type":"nfs","total":15343341.3,"used":2881910.2},"/media/frigate/recordings":{"free":12461431.1,"mount_type":"nfs","total":15343341.3,"used":2881910.2},"/tmp/cache":{"free":993.4,"mount_type":"tmpfs","total":1000.0,"used":6.6}},"temperatures":{},"uptime":63828,"version":"0.12.0-edbdbb7"}}

Operating system

Other Linux

Install method

Docker CLI

Coral version

Other

Network connection

Wired

Camera make and model

UVC G3 Flex

Any other information that may be helpful

No response

bughattiveyron avatar Jan 27 '23 16:01 bughattiveyron

You need to mount the trt-models to /trt-models in the container.

NickM-27 avatar Jan 27 '23 17:01 NickM-27

docker run

docker run -d  --gpus "device=1" --name frigate_tensor   --restart=unless-stopped   --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000   --device /dev/bus/usb:/dev/bus/usb   --shm-size=256m   -v /volume2/frigate/_data:/media/frigate   -v /volume2/trtmodels:/trt-models   -v /var/lib/docker/volumes/frigate_config/_data/config.yml:/config/config.yml:ro   -v /etc/localtime:/etc/localtime:ro   -e YOLO_MODELS=yolov7-tiny-416   -e FRIGATE_RTSP_PASSWORD='secret'   -p 5000:5000   -p 1935:1935   ghcr.io/blakeblackshear/frigate:dev-edbdbb7-tensorrt

Config

mqtt:
  host: host
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user: liquidmosq
  password: secure

birdseye:
  enabled: True
  width: 1920
  height: 1080
  quality: 8
  mode: objects

detectors:
  tensorrt:
    type: tensorrt

model:
  path: /trt-models/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416

cameras:
  Front_Garage:
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -ar 44100 -c:a aac
        rtmp: -c:v copy -f flv -ar 44100 -c:a aac
      inputs:
        - path: camera
          roles:
            - detect
            - rtmp
      hwaccel_args: preset-nvidia-h264
    objects:
      track:
        - person
        - car
        - truck
        - bicycle
        - motorcycle
        - dog
        - cat
    snapshots:
      enabled: True
    record:
      enabled: True
      retain:
        days: 90
        mode: motion
      events:
        retain:
          default: 90
          mode: active_objects
    motion:
      mask:
        - 0,151,1280,188,1280,0,0,0
        - 680,440,997,720,594,720,388,720,388,451
  Back_Alley:
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -ar 44100 -c:a aac
        rtmp: -c:v copy -f flv -ar 44100 -c:a aac
      inputs:
        - path: camera
          roles:
            - detect
            - rtmp
      hwaccel_args: preset-nvidia-h264
    objects:
      track:
        - person
        - car
        - truck
        - bicycle
        - motorcycle
        - dog
        - cat
    snapshots:
      enabled: True
    record:
      enabled: True
      retain:
        days: 90
        mode: motion
      events:
        retain:
          default: 90
          mode: active_objects
    motion:
      mask:
        - 804,720,815,518,733,179,480,0,270,0,114,127,0,253,0,720
        - 1109,215,948,186,942,275,1045,420,1186,430,1212,284
        - 708,168,1093,165,1182,152,1159,112,599,90

Logs

2023-01-27 17:21:09.806959204  [2023-01-27 17:21:09] frigate.app                    INFO    : Starting Frigate (0.12.0-edbdbb7)
2023-01-27 17:21:09.838088050  [2023-01-27 17:21:09] peewee_migrate                 INFO    : Starting migrations
2023-01-27 17:21:09.866437553  [2023-01-27 17:21:09] peewee_migrate                 INFO    : There is nothing to migrate
2023-01-27 17:21:09.873016487  [2023-01-27 17:21:09] ws4py                          INFO    : Using epoll
2023-01-27 17:21:09.898083354  [2023-01-27 17:21:09] frigate.app                    INFO    : Output process started: 282
2023-01-27 17:21:09.910246110  [2023-01-27 17:21:09] frigate.app                    INFO    : Camera processor started for Front_Garage: 286
2023-01-27 17:21:09.922009726  [2023-01-27 17:21:09] frigate.app                    INFO    : Camera processor started for Back_Alley: 288
2023-01-27 17:21:09.922018966  [2023-01-27 17:21:09] ws4py                          INFO    : Using epoll
2023-01-27 17:21:09.932574655  [2023-01-27 17:21:09] frigate.app                    INFO    : Capture process started for Front_Garage: 290
2023-01-27 17:21:09.933833771  [2023-01-27 17:21:09] frigate.app                    INFO    : Capture process started for Back_Alley: 292
2023-01-27 17:21:10.010501654  [2023-01-27 17:21:09] detector.tensorrt              INFO    : Starting detection process: 280
2023-01-27 17:21:10.013048388  Process detector:tensorrt:
2023-01-27 17:21:10.013056291  [2023-01-27 17:21:10] frigate.detectors.plugins.tensorrt ERROR   : ERROR: failed to load libraries. /trt-models/libyolo_layer.so: cannot open shared object file: No such file or directory
2023-01-27 17:21:10.014000282  Traceback (most recent call last):
2023-01-27 17:21:10.014024129    File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-01-27 17:21:10.014026397      self.run()
2023-01-27 17:21:10.014028154    File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-01-27 17:21:10.014029822      self._target(*self._args, **self._kwargs)
2023-01-27 17:21:10.014033771    File "/opt/frigate/frigate/object_detection.py", line 97, in run_detector
2023-01-27 17:21:10.014054302      object_detector = LocalObjectDetector(detector_config=detector_config)
2023-01-27 17:21:10.014056692    File "/opt/frigate/frigate/object_detection.py", line 52, in __init__
2023-01-27 17:21:10.014058938      self.detect_api = create_detector(detector_config)
2023-01-27 17:21:10.014060639    File "/opt/frigate/frigate/detectors/__init__.py", line 24, in create_detector
2023-01-27 17:21:10.014062253      return api(detector_config)
2023-01-27 17:21:10.014063990    File "/opt/frigate/frigate/detectors/plugins/tensorrt.py", line 219, in __init__
2023-01-27 17:21:10.014084254      self.engine = self._load_engine(detector_config.model.path)
2023-01-27 17:21:10.014086281    File "/opt/frigate/frigate/detectors/plugins/tensorrt.py", line 87, in _load_engine
2023-01-27 17:21:10.014098672      with open(model_path, "rb") as f, trt.Runtime(self.trt_logger) as runtime:
2023-01-27 17:21:10.014112978  FileNotFoundError: [Errno 2] No such file or directory: '/trt-models/yolov7-tiny-416.trt'
2023-01-27 17:21:10.014114932  Exception ignored in: <function TensorRtDetector.__del__ at 0x7fa852bf3c10>
2023-01-27 17:21:10.014116353  Traceback (most recent call last):
2023-01-27 17:21:10.014118114    File "/opt/frigate/frigate/detectors/plugins/tensorrt.py", line 238, in __del__
2023-01-27 17:21:10.014264893      if self.outputs is not None:
2023-01-27 17:21:10.014268572  AttributeError: 'TensorRtDetector' object has no attribute 'outputs'

bughattiveyron avatar Jan 27 '23 17:01 bughattiveyron

Output from this command

docker run --gpus=all --rm -it -v `pwd`/volume2//trtmodels:/tensorrt_models -v `pwd`/tensorrt_models.sh:/tensorrt_models.sh nvcr.io/nvidia/tensorrt:22.07-py3 /tensorrt_models.sh

Im not seeing any data in my trt-models folder

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 22.07 (build 40077977)
NVIDIA TensorRT Version 8.4.1
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

+ CUDA_HOME=/usr/local/cuda
+ LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
+ OUTPUT_FOLDER=/tensorrt_models
+ echo 'Generating the following TRT Models: yolov4-tiny-288,yolov4-tiny-416,yolov7-tiny-416'
Generating the following TRT Models: yolov4-tiny-288,yolov4-tiny-416,yolov7-tiny-416
+ mkdir -p /tensorrt_models
+ pip install --upgrade pip
Requirement already satisfied: pip in /usr/local/lib/python3.8/dist-packages (22.1.2)
Collecting pip
  Downloading pip-22.3.1-py3-none-any.whl (2.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 10.1 MB/s eta 0:00:00
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 22.1.2
    Uninstalling pip-22.1.2:
      Successfully uninstalled pip-22.1.2
Successfully installed pip-22.3.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
+ pip install onnx==1.9.0 protobuf==3.20.3
Collecting onnx==1.9.0
  Downloading onnx-1.9.0-cp38-cp38-manylinux2010_x86_64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 34.6 MB/s eta 0:00:00
Collecting protobuf==3.20.3
  Downloading protobuf-3.20.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 49.6 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.8/dist-packages (from onnx==1.9.0) (4.2.0)
Collecting six
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.8/dist-packages (from onnx==1.9.0) (1.23.0)
Installing collected packages: six, protobuf, onnx
  Attempting uninstall: protobuf
    Found existing installation: protobuf 4.21.2
    Uninstalling protobuf-4.21.2:
      Successfully uninstalled protobuf-4.21.2
Successfully installed onnx-1.9.0 protobuf-3.20.3 six-1.16.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
+ git clone --depth 1 https://github.com/yeahme49/tensorrt_demos.git /tensorrt_demos
Cloning into '/tensorrt_demos'...
remote: Enumerating objects: 118, done.
remote: Counting objects: 100% (118/118), done.
remote: Compressing objects: 100% (112/112), done.
remote: Total 118 (delta 12), reused 66 (delta 4), pack-reused 0
Receiving objects: 100% (118/118), 192.06 MiB | 26.15 MiB/s, done.
Resolving deltas: 100% (12/12), done.
Updating files: 100% (109/109), done.
+ cd /tensorrt_demos/plugins
+ make all
computes: 61
NVCCFLAGS: -gencode arch=compute_61,code=[sm_61,compute_61]
nvcc -ccbin g++ -I"/usr/local/cuda/include" -I"/usr/local/TensorRT-7.1.3.4/include" -I"/usr/local/include" -I"plugin" -gencode arch=compute_61,code=[sm_61,compute_61] -Xcompiler -fPIC -c -o yolo_layer.o yolo_layer.cu
yolo_layer.h(89): warning #997-D: function "nvinfer1::IPluginV2Ext::configurePlugin(const nvinfer1::Dims *, int32_t, const nvinfer1::Dims *, int32_t, const nvinfer1::DataType *, const nvinfer1::DataType *, const __nv_bool *, const __nv_bool *, nvinfer1::PluginFormat, int32_t)" is hidden by "nvinfer1::YoloLayerPlugin::configurePlugin" -- virtual function override intended?

yolo_layer.h(89): warning #997-D: function "nvinfer1::IPluginV2Ext::configurePlugin(const nvinfer1::Dims *, int32_t, const nvinfer1::Dims *, int32_t, const nvinfer1::DataType *, const nvinfer1::DataType *, const bool *, const bool *, nvinfer1::PluginFormat, int32_t)" is hidden by "nvinfer1::YoloLayerPlugin::configurePlugin" -- virtual function override intended?

g++ -shared -o libyolo_layer.so yolo_layer.o -L"/usr/local/cuda/lib64" -L"/usr/local/TensorRT-7.1.3.4/lib" -L"/usr/local/lib" -Wl,--start-group -lnvinfer -lnvparsers -lnvinfer_plugin -lcudnn -lcublas -lnvToolsExt -lcudart -lrt -ldl -lpthread -Wl,--end-group
+ cp libyolo_layer.so /tensorrt_models/libyolo_layer.so
+ cd /tensorrt_demos/yolo
+ ./download_yolo.sh
yolov3-tiny.cfg                                   100%[============================================================================================================>]   1.87K  --.-KB/s    in 0s
yolov3-tiny.weights                               100%[============================================================================================================>]  33.79M  49.0MB/s    in 0.7s
yolov3.cfg                                        100%[============================================================================================================>]   8.15K  --.-KB/s    in 0s
yolov3.weights                                    100%[============================================================================================================>] 236.52M  20.1MB/s    in 10s
yolov3-spp.cfg                                    100%[============================================================================================================>]   8.40K  --.-KB/s    in 0s
yolov3-spp.weights                                100%[============================================================================================================>] 240.53M  67.2MB/s    in 4.0s
yolov4-tiny.cfg                                   100%[============================================================================================================>]   3.16K  --.-KB/s    in 0s
yolov4-tiny.weights                               100%[============================================================================================================>]  23.13M  13.6MB/s    in 1.7s
yolov4.cfg                                        100%[============================================================================================================>]  11.94K  --.-KB/s    in 0.001s
yolov4.weights                                    100%[============================================================================================================>] 245.78M  26.7MB/s    in 9.7s
yolov4-csp.cfg                                    100%[============================================================================================================>]  13.16K  --.-KB/s    in 0.001s
yolov4-csp.weights                                100%[============================================================================================================>] 202.13M  6.07MB/s    in 18s
yolov4x-mish.cfg                                  100%[============================================================================================================>]  14.76K  --.-KB/s    in 0.003s
yolov4x-mish.weights                              100%[============================================================================================================>] 380.89M  4.35MB/s    in 50s
yolov4-p5.cfg                                     100%[============================================================================================================>]  18.94K  --.-KB/s    in 0.003s
yolov4-p5.weights                                 100%[============================================================================================================>] 270.53M  12.8MB/s    in 16s
yolov7-tiny.cfg                                   100%[============================================================================================================>]   7.38K  --.-KB/s    in 0s
yolov7-tiny.weights                               100%[============================================================================================================>]  23.81M  5.59MB/s    in 4.5s

Creating yolov3-tiny-288.cfg and yolov3-tiny-288.weights
Creating yolov3-tiny-416.cfg and yolov3-tiny-416.weights
Creating yolov3-288.cfg and yolov3-288.weights
Creating yolov3-416.cfg and yolov3-416.weights
Creating yolov3-608.cfg and yolov3-608.weights
Creating yolov3-spp-288.cfg and yolov3-spp-288.weights
Creating yolov3-spp-416.cfg and yolov3-spp-416.weights
Creating yolov3-spp-608.cfg and yolov3-spp-608.weights
Creating yolov4-tiny-288.cfg and yolov4-tiny-288.weights
Creating yolov4-tiny-416.cfg and yolov4-tiny-416.weights
Creating yolov4-288.cfg and yolov4-288.weights
Creating yolov4-416.cfg and yolov4-416.weights
Creating yolov4-608.cfg and yolov4-608.weights
Creating yolov4-csp-256.cfg and yolov4-csp-256.weights
Creating yolov4-csp-512.cfg and yolov4x-csp-512.weights
Creating yolov4x-mish-320.cfg and yolov4x-mish-320.weights
Creating yolov4x-mish-640.cfg and yolov4x-mish-640.weights
Creating yolov4-p5-448.cfg and yolov4-p5-448.weights
Creating yolov4-p5-896.cfg and yolov4-p5-896.weights
Creating yolov7-tiny-288.cfg and yolov7-tiny-288.weights
Creating yolov7-tiny-416.cfg and yolov7-tiny-416.weights

Done.
+ cd /tensorrt_demos/yolo
+ for model in ${YOLO_MODELS//,/ }
+ python3 yolo_to_onnx.py -m yolov4-tiny-288
Parsing DarkNet cfg file...
Building ONNX graph...
graph yolov4-tiny-288 (
  %000_net[FLOAT, 1x3x288x288]
) optional inputs with matching initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]
  %001_convolutional_bn_mean[FLOAT, 32]
  %001_convolutional_bn_var[FLOAT, 32]
  %001_convolutional_conv_weights[FLOAT, 32x3x3x3]
  %002_convolutional_bn_scale[FLOAT, 64]
  %002_convolutional_bn_bias[FLOAT, 64]
  %002_convolutional_bn_mean[FLOAT, 64]
  %002_convolutional_bn_var[FLOAT, 64]
  %002_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %003_convolutional_bn_scale[FLOAT, 64]
  %003_convolutional_bn_bias[FLOAT, 64]
  %003_convolutional_bn_mean[FLOAT, 64]
  %003_convolutional_bn_var[FLOAT, 64]
  %003_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %005_convolutional_bn_scale[FLOAT, 32]
  %005_convolutional_bn_bias[FLOAT, 32]
  %005_convolutional_bn_mean[FLOAT, 32]
  %005_convolutional_bn_var[FLOAT, 32]
  %005_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %006_convolutional_bn_scale[FLOAT, 32]
  %006_convolutional_bn_bias[FLOAT, 32]
  %006_convolutional_bn_mean[FLOAT, 32]
  %006_convolutional_bn_var[FLOAT, 32]
  %006_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %008_convolutional_bn_scale[FLOAT, 64]
  %008_convolutional_bn_bias[FLOAT, 64]
  %008_convolutional_bn_mean[FLOAT, 64]
  %008_convolutional_bn_var[FLOAT, 64]
  %008_convolutional_conv_weights[FLOAT, 64x64x1x1]
  %011_convolutional_bn_scale[FLOAT, 128]
  %011_convolutional_bn_bias[FLOAT, 128]
  %011_convolutional_bn_mean[FLOAT, 128]
  %011_convolutional_bn_var[FLOAT, 128]
  %011_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %013_convolutional_bn_scale[FLOAT, 64]
  %013_convolutional_bn_bias[FLOAT, 64]
  %013_convolutional_bn_mean[FLOAT, 64]
  %013_convolutional_bn_var[FLOAT, 64]
  %013_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %014_convolutional_bn_scale[FLOAT, 64]
  %014_convolutional_bn_bias[FLOAT, 64]
  %014_convolutional_bn_mean[FLOAT, 64]
  %014_convolutional_bn_var[FLOAT, 64]
  %014_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %016_convolutional_bn_scale[FLOAT, 128]
  %016_convolutional_bn_bias[FLOAT, 128]
  %016_convolutional_bn_mean[FLOAT, 128]
  %016_convolutional_bn_var[FLOAT, 128]
  %016_convolutional_conv_weights[FLOAT, 128x128x1x1]
  %019_convolutional_bn_scale[FLOAT, 256]
  %019_convolutional_bn_bias[FLOAT, 256]
  %019_convolutional_bn_mean[FLOAT, 256]
  %019_convolutional_bn_var[FLOAT, 256]
  %019_convolutional_conv_weights[FLOAT, 256x256x3x3]
  %021_convolutional_bn_scale[FLOAT, 128]
  %021_convolutional_bn_bias[FLOAT, 128]
  %021_convolutional_bn_mean[FLOAT, 128]
  %021_convolutional_bn_var[FLOAT, 128]
  %021_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %022_convolutional_bn_scale[FLOAT, 128]
  %022_convolutional_bn_bias[FLOAT, 128]
  %022_convolutional_bn_mean[FLOAT, 128]
  %022_convolutional_bn_var[FLOAT, 128]
  %022_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %024_convolutional_bn_scale[FLOAT, 256]
  %024_convolutional_bn_bias[FLOAT, 256]
  %024_convolutional_bn_mean[FLOAT, 256]
  %024_convolutional_bn_var[FLOAT, 256]
  %024_convolutional_conv_weights[FLOAT, 256x256x1x1]
  %027_convolutional_bn_scale[FLOAT, 512]
  %027_convolutional_bn_bias[FLOAT, 512]
  %027_convolutional_bn_mean[FLOAT, 512]
  %027_convolutional_bn_var[FLOAT, 512]
  %027_convolutional_conv_weights[FLOAT, 512x512x3x3]
  %028_convolutional_bn_scale[FLOAT, 256]
  %028_convolutional_bn_bias[FLOAT, 256]
  %028_convolutional_bn_mean[FLOAT, 256]
  %028_convolutional_bn_var[FLOAT, 256]
  %028_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %029_convolutional_bn_scale[FLOAT, 512]
  %029_convolutional_bn_bias[FLOAT, 512]
  %029_convolutional_bn_mean[FLOAT, 512]
  %029_convolutional_bn_var[FLOAT, 512]
  %029_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %030_convolutional_conv_bias[FLOAT, 255]
  %030_convolutional_conv_weights[FLOAT, 255x512x1x1]
  %033_convolutional_bn_scale[FLOAT, 128]
  %033_convolutional_bn_bias[FLOAT, 128]
  %033_convolutional_bn_mean[FLOAT, 128]
  %033_convolutional_bn_var[FLOAT, 128]
  %033_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %034_upsample_scale[FLOAT, 4]
  %034_upsample_roi[FLOAT, 4]
  %036_convolutional_bn_scale[FLOAT, 256]
  %036_convolutional_bn_bias[FLOAT, 256]
  %036_convolutional_bn_mean[FLOAT, 256]
  %036_convolutional_bn_var[FLOAT, 256]
  %036_convolutional_conv_weights[FLOAT, 256x384x3x3]
  %037_convolutional_conv_bias[FLOAT, 255]
  %037_convolutional_conv_weights[FLOAT, 255x256x1x1]
) {
  %001_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%000_net, %001_convolutional_conv_weights)
  %001_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%001_convolutional, %001_convolutional_bn_scale, %001_convolutional_bn_bias, %001_convolutional_bn_mean, %001_convolutional_bn_var)
  %001_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%001_convolutional_bn)
  %002_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%001_convolutional_lrelu, %002_convolutional_conv_weights)
  %002_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%002_convolutional, %002_convolutional_bn_scale, %002_convolutional_bn_bias, %002_convolutional_bn_mean, %002_convolutional_bn_var)
  %002_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%002_convolutional_bn)
  %003_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%002_convolutional_lrelu, %003_convolutional_conv_weights)
  %003_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%003_convolutional, %003_convolutional_bn_scale, %003_convolutional_bn_bias, %003_convolutional_bn_mean, %003_convolutional_bn_var)
  %003_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%003_convolutional_bn)
  %004_route_dummy0, %004_route = Split[axis = 1](%003_convolutional_lrelu)
  %005_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%004_route, %005_convolutional_conv_weights)
  %005_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%005_convolutional, %005_convolutional_bn_scale, %005_convolutional_bn_bias, %005_convolutional_bn_mean, %005_convolutional_bn_var)
  %005_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%005_convolutional_bn)
  %006_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%005_convolutional_lrelu, %006_convolutional_conv_weights)
  %006_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%006_convolutional, %006_convolutional_bn_scale, %006_convolutional_bn_bias, %006_convolutional_bn_mean, %006_convolutional_bn_var)
  %006_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%006_convolutional_bn)
  %007_route = Concat[axis = 1](%006_convolutional_lrelu, %005_convolutional_lrelu)
  %008_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%007_route, %008_convolutional_conv_weights)
  %008_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%008_convolutional, %008_convolutional_bn_scale, %008_convolutional_bn_bias, %008_convolutional_bn_mean, %008_convolutional_bn_var)
  %008_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%008_convolutional_bn)
  %009_route = Concat[axis = 1](%003_convolutional_lrelu, %008_convolutional_lrelu)
  %010_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%009_route)
  %011_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%010_maxpool, %011_convolutional_conv_weights)
  %011_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%011_convolutional, %011_convolutional_bn_scale, %011_convolutional_bn_bias, %011_convolutional_bn_mean, %011_convolutional_bn_var)
  %011_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%011_convolutional_bn)
  %012_route_dummy0, %012_route = Split[axis = 1](%011_convolutional_lrelu)
  %013_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%012_route, %013_convolutional_conv_weights)
  %013_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%013_convolutional, %013_convolutional_bn_scale, %013_convolutional_bn_bias, %013_convolutional_bn_mean, %013_convolutional_bn_var)
  %013_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%013_convolutional_bn)
  %014_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%013_convolutional_lrelu, %014_convolutional_conv_weights)
  %014_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%014_convolutional, %014_convolutional_bn_scale, %014_convolutional_bn_bias, %014_convolutional_bn_mean, %014_convolutional_bn_var)
  %014_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%014_convolutional_bn)
  %015_route = Concat[axis = 1](%014_convolutional_lrelu, %013_convolutional_lrelu)
  %016_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%015_route, %016_convolutional_conv_weights)
  %016_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%016_convolutional, %016_convolutional_bn_scale, %016_convolutional_bn_bias, %016_convolutional_bn_mean, %016_convolutional_bn_var)
  %016_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%016_convolutional_bn)
  %017_route = Concat[axis = 1](%011_convolutional_lrelu, %016_convolutional_lrelu)
  %018_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%017_route)
  %019_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%018_maxpool, %019_convolutional_conv_weights)
  %019_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%019_convolutional, %019_convolutional_bn_scale, %019_convolutional_bn_bias, %019_convolutional_bn_mean, %019_convolutional_bn_var)
  %019_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%019_convolutional_bn)
  %020_route_dummy0, %020_route = Split[axis = 1](%019_convolutional_lrelu)
  %021_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%020_route, %021_convolutional_conv_weights)
  %021_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%021_convolutional, %021_convolutional_bn_scale, %021_convolutional_bn_bias, %021_convolutional_bn_mean, %021_convolutional_bn_var)
  %021_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%021_convolutional_bn)
  %022_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%021_convolutional_lrelu, %022_convolutional_conv_weights)
  %022_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%022_convolutional, %022_convolutional_bn_scale, %022_convolutional_bn_bias, %022_convolutional_bn_mean, %022_convolutional_bn_var)
  %022_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%022_convolutional_bn)
  %023_route = Concat[axis = 1](%022_convolutional_lrelu, %021_convolutional_lrelu)
  %024_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%023_route, %024_convolutional_conv_weights)
  %024_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%024_convolutional, %024_convolutional_bn_scale, %024_convolutional_bn_bias, %024_convolutional_bn_mean, %024_convolutional_bn_var)
  %024_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%024_convolutional_bn)
  %025_route = Concat[axis = 1](%019_convolutional_lrelu, %024_convolutional_lrelu)
  %026_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%025_route)
  %027_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%026_maxpool, %027_convolutional_conv_weights)
  %027_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%027_convolutional, %027_convolutional_bn_scale, %027_convolutional_bn_bias, %027_convolutional_bn_mean, %027_convolutional_bn_var)
  %027_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%027_convolutional_bn)
  %028_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%027_convolutional_lrelu, %028_convolutional_conv_weights)
  %028_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%028_convolutional, %028_convolutional_bn_scale, %028_convolutional_bn_bias, %028_convolutional_bn_mean, %028_convolutional_bn_var)
  %028_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%028_convolutional_bn)
  %029_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%028_convolutional_lrelu, %029_convolutional_conv_weights)
  %029_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%029_convolutional, %029_convolutional_bn_scale, %029_convolutional_bn_bias, %029_convolutional_bn_mean, %029_convolutional_bn_var)
  %029_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%029_convolutional_bn)
  %030_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%029_convolutional_lrelu, %030_convolutional_conv_weights, %030_convolutional_conv_bias)
  %033_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%028_convolutional_lrelu, %033_convolutional_conv_weights)
  %033_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%033_convolutional, %033_convolutional_bn_scale, %033_convolutional_bn_bias, %033_convolutional_bn_mean, %033_convolutional_bn_var)
  %033_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%033_convolutional_bn)
  %034_upsample = Resize[coordinate_transformation_mode = 'asymmetric', mode = 'nearest', nearest_mode = 'floor'](%033_convolutional_lrelu, %034_upsample_roi, %034_upsample_scale)
  %035_route = Concat[axis = 1](%034_upsample, %024_convolutional_lrelu)
  %036_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%035_route, %036_convolutional_conv_weights)
  %036_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%036_convolutional, %036_convolutional_bn_scale, %036_convolutional_bn_bias, %036_convolutional_bn_mean, %036_convolutional_bn_var)
  %036_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%036_convolutional_bn)
  %037_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%036_convolutional_lrelu, %037_convolutional_conv_weights, %037_convolutional_conv_bias)
  return %030_convolutional, %037_convolutional
}
Checking ONNX model...
Saving ONNX file...
Done.
+ python3 onnx_to_tensorrt.py -m yolov4-tiny-288
Loading the ONNX file...
Adding yolo_layer plugins.
Adding a concatenated output as "detections".
Naming the input tensort as "input".
Building the TensorRT engine.  This would take a while...
(Use "--verbose" or "-v" to enable verbose logging.)
onnx_to_tensorrt.py:147: DeprecationWarning: Use network created with NetworkDefinitionCreationFlag::EXPLICIT_BATCH flag instead.
  builder.max_batch_size = MAX_BATCH_SIZE
onnx_to_tensorrt.py:149: DeprecationWarning: Use set_memory_pool_limit instead.
  config.max_workspace_size = 1 << 30
onnx_to_tensorrt.py:172: DeprecationWarning: Use build_serialized_network instead.
  engine = builder.build_engine(network, config)
[01/27/2023-17:29:33] [TRT] [W] FP16 support requested on hardware without native FP16 support, performance will be negatively affected.
[01/27/2023-17:29:46] [TRT] [W] Weights [name=002_convolutional + 002_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:46] [TRT] [W] Weights [name=002_convolutional + 002_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:46] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:47] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:47] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:47] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:47] [TRT] [W] Weights [name=005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:47] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:47] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:47] [TRT] [W] Weights [name=005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:47] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:47] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:54] [TRT] [W] Weights [name=006_convolutional + 006_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:55] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:55] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:55] [TRT] [W] Weights [name=013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:55] [TRT] [W] Weights [name=013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:55] [TRT] [W] Weights [name=014_convolutional + 014_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:55] [TRT] [W] Weights [name=016_convolutional + 016_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:56] [TRT] [W] Weights [name=016_convolutional + 016_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:56] [TRT] [W] Weights [name=016_convolutional + 016_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:56] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:56] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:56] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:56] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:56] [TRT] [W] Weights [name=021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:56] [TRT] [W] Weights [name=021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:57] [TRT] [W] Weights [name=022_convolutional + 022_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:57] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:57] [TRT] [W] Weights [name=024_convolutional + 024_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:57] [TRT] [W] Weights [name=024_convolutional + 024_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:57] [TRT] [W] Weights [name=024_convolutional + 024_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:58] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:58] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:58] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:58] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:58] [TRT] [W] Weights [name=028_convolutional + 028_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:58] [TRT] [W] Weights [name=028_convolutional + 028_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:58] [TRT] [W] Weights [name=028_convolutional + 028_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:59] [TRT] [W] Weights [name=029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:59] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:59] [TRT] [W] Weights [name=029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:59] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:59] [TRT] [W] Weights [name=030_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:59] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:59] [TRT] [W] Weights [name=030_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:59] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:29:59] [TRT] [W] Weights [name=030_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:29:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:29:59] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:29:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=037_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=037_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:00] [TRT] [W] Weights [name=037_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Completed creating engine.
[01/27/2023-17:30:00] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[01/27/2023-17:30:00] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
Serialized the TensorRT engine to file: yolov4-tiny-288.trt
+ cp /tensorrt_demos/yolo/yolov4-tiny-288.trt /tensorrt_models/yolov4-tiny-288.trt
+ for model in ${YOLO_MODELS//,/ }
+ python3 yolo_to_onnx.py -m yolov4-tiny-416
Parsing DarkNet cfg file...
Building ONNX graph...
graph yolov4-tiny-416 (
  %000_net[FLOAT, 1x3x416x416]
) optional inputs with matching initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]
  %001_convolutional_bn_mean[FLOAT, 32]
  %001_convolutional_bn_var[FLOAT, 32]
  %001_convolutional_conv_weights[FLOAT, 32x3x3x3]
  %002_convolutional_bn_scale[FLOAT, 64]
  %002_convolutional_bn_bias[FLOAT, 64]
  %002_convolutional_bn_mean[FLOAT, 64]
  %002_convolutional_bn_var[FLOAT, 64]
  %002_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %003_convolutional_bn_scale[FLOAT, 64]
  %003_convolutional_bn_bias[FLOAT, 64]
  %003_convolutional_bn_mean[FLOAT, 64]
  %003_convolutional_bn_var[FLOAT, 64]
  %003_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %005_convolutional_bn_scale[FLOAT, 32]
  %005_convolutional_bn_bias[FLOAT, 32]
  %005_convolutional_bn_mean[FLOAT, 32]
  %005_convolutional_bn_var[FLOAT, 32]
  %005_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %006_convolutional_bn_scale[FLOAT, 32]
  %006_convolutional_bn_bias[FLOAT, 32]
  %006_convolutional_bn_mean[FLOAT, 32]
  %006_convolutional_bn_var[FLOAT, 32]
  %006_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %008_convolutional_bn_scale[FLOAT, 64]
  %008_convolutional_bn_bias[FLOAT, 64]
  %008_convolutional_bn_mean[FLOAT, 64]
  %008_convolutional_bn_var[FLOAT, 64]
  %008_convolutional_conv_weights[FLOAT, 64x64x1x1]
  %011_convolutional_bn_scale[FLOAT, 128]
  %011_convolutional_bn_bias[FLOAT, 128]
  %011_convolutional_bn_mean[FLOAT, 128]
  %011_convolutional_bn_var[FLOAT, 128]
  %011_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %013_convolutional_bn_scale[FLOAT, 64]
  %013_convolutional_bn_bias[FLOAT, 64]
  %013_convolutional_bn_mean[FLOAT, 64]
  %013_convolutional_bn_var[FLOAT, 64]
  %013_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %014_convolutional_bn_scale[FLOAT, 64]
  %014_convolutional_bn_bias[FLOAT, 64]
  %014_convolutional_bn_mean[FLOAT, 64]
  %014_convolutional_bn_var[FLOAT, 64]
  %014_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %016_convolutional_bn_scale[FLOAT, 128]
  %016_convolutional_bn_bias[FLOAT, 128]
  %016_convolutional_bn_mean[FLOAT, 128]
  %016_convolutional_bn_var[FLOAT, 128]
  %016_convolutional_conv_weights[FLOAT, 128x128x1x1]
  %019_convolutional_bn_scale[FLOAT, 256]
  %019_convolutional_bn_bias[FLOAT, 256]
  %019_convolutional_bn_mean[FLOAT, 256]
  %019_convolutional_bn_var[FLOAT, 256]
  %019_convolutional_conv_weights[FLOAT, 256x256x3x3]
  %021_convolutional_bn_scale[FLOAT, 128]
  %021_convolutional_bn_bias[FLOAT, 128]
  %021_convolutional_bn_mean[FLOAT, 128]
  %021_convolutional_bn_var[FLOAT, 128]
  %021_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %022_convolutional_bn_scale[FLOAT, 128]
  %022_convolutional_bn_bias[FLOAT, 128]
  %022_convolutional_bn_mean[FLOAT, 128]
  %022_convolutional_bn_var[FLOAT, 128]
  %022_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %024_convolutional_bn_scale[FLOAT, 256]
  %024_convolutional_bn_bias[FLOAT, 256]
  %024_convolutional_bn_mean[FLOAT, 256]
  %024_convolutional_bn_var[FLOAT, 256]
  %024_convolutional_conv_weights[FLOAT, 256x256x1x1]
  %027_convolutional_bn_scale[FLOAT, 512]
  %027_convolutional_bn_bias[FLOAT, 512]
  %027_convolutional_bn_mean[FLOAT, 512]
  %027_convolutional_bn_var[FLOAT, 512]
  %027_convolutional_conv_weights[FLOAT, 512x512x3x3]
  %028_convolutional_bn_scale[FLOAT, 256]
  %028_convolutional_bn_bias[FLOAT, 256]
  %028_convolutional_bn_mean[FLOAT, 256]
  %028_convolutional_bn_var[FLOAT, 256]
  %028_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %029_convolutional_bn_scale[FLOAT, 512]
  %029_convolutional_bn_bias[FLOAT, 512]
  %029_convolutional_bn_mean[FLOAT, 512]
  %029_convolutional_bn_var[FLOAT, 512]
  %029_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %030_convolutional_conv_bias[FLOAT, 255]
  %030_convolutional_conv_weights[FLOAT, 255x512x1x1]
  %033_convolutional_bn_scale[FLOAT, 128]
  %033_convolutional_bn_bias[FLOAT, 128]
  %033_convolutional_bn_mean[FLOAT, 128]
  %033_convolutional_bn_var[FLOAT, 128]
  %033_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %034_upsample_scale[FLOAT, 4]
  %034_upsample_roi[FLOAT, 4]
  %036_convolutional_bn_scale[FLOAT, 256]
  %036_convolutional_bn_bias[FLOAT, 256]
  %036_convolutional_bn_mean[FLOAT, 256]
  %036_convolutional_bn_var[FLOAT, 256]
  %036_convolutional_conv_weights[FLOAT, 256x384x3x3]
  %037_convolutional_conv_bias[FLOAT, 255]
  %037_convolutional_conv_weights[FLOAT, 255x256x1x1]
) {
  %001_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%000_net, %001_convolutional_conv_weights)
  %001_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%001_convolutional, %001_convolutional_bn_scale, %001_convolutional_bn_bias, %001_convolutional_bn_mean, %001_convolutional_bn_var)
  %001_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%001_convolutional_bn)
  %002_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%001_convolutional_lrelu, %002_convolutional_conv_weights)
  %002_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%002_convolutional, %002_convolutional_bn_scale, %002_convolutional_bn_bias, %002_convolutional_bn_mean, %002_convolutional_bn_var)
  %002_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%002_convolutional_bn)
  %003_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%002_convolutional_lrelu, %003_convolutional_conv_weights)
  %003_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%003_convolutional, %003_convolutional_bn_scale, %003_convolutional_bn_bias, %003_convolutional_bn_mean, %003_convolutional_bn_var)
  %003_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%003_convolutional_bn)
  %004_route_dummy0, %004_route = Split[axis = 1](%003_convolutional_lrelu)
  %005_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%004_route, %005_convolutional_conv_weights)
  %005_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%005_convolutional, %005_convolutional_bn_scale, %005_convolutional_bn_bias, %005_convolutional_bn_mean, %005_convolutional_bn_var)
  %005_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%005_convolutional_bn)
  %006_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%005_convolutional_lrelu, %006_convolutional_conv_weights)
  %006_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%006_convolutional, %006_convolutional_bn_scale, %006_convolutional_bn_bias, %006_convolutional_bn_mean, %006_convolutional_bn_var)
  %006_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%006_convolutional_bn)
  %007_route = Concat[axis = 1](%006_convolutional_lrelu, %005_convolutional_lrelu)
  %008_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%007_route, %008_convolutional_conv_weights)
  %008_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%008_convolutional, %008_convolutional_bn_scale, %008_convolutional_bn_bias, %008_convolutional_bn_mean, %008_convolutional_bn_var)
  %008_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%008_convolutional_bn)
  %009_route = Concat[axis = 1](%003_convolutional_lrelu, %008_convolutional_lrelu)
  %010_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%009_route)
  %011_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%010_maxpool, %011_convolutional_conv_weights)
  %011_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%011_convolutional, %011_convolutional_bn_scale, %011_convolutional_bn_bias, %011_convolutional_bn_mean, %011_convolutional_bn_var)
  %011_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%011_convolutional_bn)
  %012_route_dummy0, %012_route = Split[axis = 1](%011_convolutional_lrelu)
  %013_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%012_route, %013_convolutional_conv_weights)
  %013_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%013_convolutional, %013_convolutional_bn_scale, %013_convolutional_bn_bias, %013_convolutional_bn_mean, %013_convolutional_bn_var)
  %013_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%013_convolutional_bn)
  %014_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%013_convolutional_lrelu, %014_convolutional_conv_weights)
  %014_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%014_convolutional, %014_convolutional_bn_scale, %014_convolutional_bn_bias, %014_convolutional_bn_mean, %014_convolutional_bn_var)
  %014_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%014_convolutional_bn)
  %015_route = Concat[axis = 1](%014_convolutional_lrelu, %013_convolutional_lrelu)
  %016_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%015_route, %016_convolutional_conv_weights)
  %016_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%016_convolutional, %016_convolutional_bn_scale, %016_convolutional_bn_bias, %016_convolutional_bn_mean, %016_convolutional_bn_var)
  %016_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%016_convolutional_bn)
  %017_route = Concat[axis = 1](%011_convolutional_lrelu, %016_convolutional_lrelu)
  %018_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%017_route)
  %019_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%018_maxpool, %019_convolutional_conv_weights)
  %019_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%019_convolutional, %019_convolutional_bn_scale, %019_convolutional_bn_bias, %019_convolutional_bn_mean, %019_convolutional_bn_var)
  %019_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%019_convolutional_bn)
  %020_route_dummy0, %020_route = Split[axis = 1](%019_convolutional_lrelu)
  %021_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%020_route, %021_convolutional_conv_weights)
  %021_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%021_convolutional, %021_convolutional_bn_scale, %021_convolutional_bn_bias, %021_convolutional_bn_mean, %021_convolutional_bn_var)
  %021_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%021_convolutional_bn)
  %022_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%021_convolutional_lrelu, %022_convolutional_conv_weights)
  %022_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%022_convolutional, %022_convolutional_bn_scale, %022_convolutional_bn_bias, %022_convolutional_bn_mean, %022_convolutional_bn_var)
  %022_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%022_convolutional_bn)
  %023_route = Concat[axis = 1](%022_convolutional_lrelu, %021_convolutional_lrelu)
  %024_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%023_route, %024_convolutional_conv_weights)
  %024_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%024_convolutional, %024_convolutional_bn_scale, %024_convolutional_bn_bias, %024_convolutional_bn_mean, %024_convolutional_bn_var)
  %024_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%024_convolutional_bn)
  %025_route = Concat[axis = 1](%019_convolutional_lrelu, %024_convolutional_lrelu)
  %026_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%025_route)
  %027_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%026_maxpool, %027_convolutional_conv_weights)
  %027_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%027_convolutional, %027_convolutional_bn_scale, %027_convolutional_bn_bias, %027_convolutional_bn_mean, %027_convolutional_bn_var)
  %027_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%027_convolutional_bn)
  %028_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%027_convolutional_lrelu, %028_convolutional_conv_weights)
  %028_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%028_convolutional, %028_convolutional_bn_scale, %028_convolutional_bn_bias, %028_convolutional_bn_mean, %028_convolutional_bn_var)
  %028_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%028_convolutional_bn)
  %029_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%028_convolutional_lrelu, %029_convolutional_conv_weights)
  %029_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%029_convolutional, %029_convolutional_bn_scale, %029_convolutional_bn_bias, %029_convolutional_bn_mean, %029_convolutional_bn_var)
  %029_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%029_convolutional_bn)
  %030_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%029_convolutional_lrelu, %030_convolutional_conv_weights, %030_convolutional_conv_bias)
  %033_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%028_convolutional_lrelu, %033_convolutional_conv_weights)
  %033_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%033_convolutional, %033_convolutional_bn_scale, %033_convolutional_bn_bias, %033_convolutional_bn_mean, %033_convolutional_bn_var)
  %033_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%033_convolutional_bn)
  %034_upsample = Resize[coordinate_transformation_mode = 'asymmetric', mode = 'nearest', nearest_mode = 'floor'](%033_convolutional_lrelu, %034_upsample_roi, %034_upsample_scale)
  %035_route = Concat[axis = 1](%034_upsample, %024_convolutional_lrelu)
  %036_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%035_route, %036_convolutional_conv_weights)
  %036_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%036_convolutional, %036_convolutional_bn_scale, %036_convolutional_bn_bias, %036_convolutional_bn_mean, %036_convolutional_bn_var)
  %036_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%036_convolutional_bn)
  %037_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%036_convolutional_lrelu, %037_convolutional_conv_weights, %037_convolutional_conv_bias)
  return %030_convolutional, %037_convolutional
}
Checking ONNX model...
Saving ONNX file...
Done.
+ python3 onnx_to_tensorrt.py -m yolov4-tiny-416
Loading the ONNX file...
Adding yolo_layer plugins.
Adding a concatenated output as "detections".
Naming the input tensort as "input".
Building the TensorRT engine.  This would take a while...
(Use "--verbose" or "-v" to enable verbose logging.)
onnx_to_tensorrt.py:147: DeprecationWarning: Use network created with NetworkDefinitionCreationFlag::EXPLICIT_BATCH flag instead.
  builder.max_batch_size = MAX_BATCH_SIZE
onnx_to_tensorrt.py:149: DeprecationWarning: Use set_memory_pool_limit instead.
  config.max_workspace_size = 1 << 30
onnx_to_tensorrt.py:172: DeprecationWarning: Use build_serialized_network instead.
  engine = builder.build_engine(network, config)
[01/27/2023-17:30:04] [TRT] [W] FP16 support requested on hardware without native FP16 support, performance will be negatively affected.
[01/27/2023-17:30:15] [TRT] [W] Weights [name=002_convolutional + 002_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:15] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:15] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:15] [TRT] [W] Weights [name=002_convolutional + 002_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:15] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:15] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:15] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:15] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:15] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:16] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:16] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:16] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:16] [TRT] [W] Weights [name=005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:16] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:16] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:16] [TRT] [W] Weights [name=005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:16] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:16] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:23] [TRT] [W] Weights [name=006_convolutional + 006_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:23] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:23] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:24] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:24] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:24] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:24] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:24] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:24] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:24] [TRT] [W] Weights [name=013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:24] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:24] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:24] [TRT] [W] Weights [name=013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:24] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:24] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:24] [TRT] [W] Weights [name=014_convolutional + 014_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:24] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:24] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:25] [TRT] [W] Weights [name=016_convolutional + 016_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:25] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:25] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:25] [TRT] [W] Weights [name=016_convolutional + 016_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:25] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:25] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:25] [TRT] [W] Weights [name=016_convolutional + 016_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:25] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:25] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:25] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:25] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:25] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:25] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:25] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:25] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:25] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:25] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:25] [TRT] [W] Weights [name=021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:25] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:25] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:26] [TRT] [W] Weights [name=021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:26] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:26] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:26] [TRT] [W] Weights [name=022_convolutional + 022_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:26] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:26] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:26] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:26] [TRT] [W] Weights [name=024_convolutional + 024_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:26] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:26] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:26] [TRT] [W] Weights [name=024_convolutional + 024_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:26] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:26] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:26] [TRT] [W] Weights [name=024_convolutional + 024_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:26] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:26] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:27] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:27] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:27] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:27] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:27] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:27] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:27] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:27] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:27] [TRT] [W] Weights [name=028_convolutional + 028_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:27] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:27] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:27] [TRT] [W] Weights [name=028_convolutional + 028_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:27] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:27] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:27] [TRT] [W] Weights [name=028_convolutional + 028_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:27] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:27] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=030_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=030_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=030_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:28] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:28] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:28] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:29] [TRT] [W] Weights [name=036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:29] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:29] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:29] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:29] [TRT] [W] Weights [name=036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:29] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:29] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:30:29] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:29] [TRT] [W] Weights [name=037_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:29] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:29] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:29] [TRT] [W] Weights [name=037_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:29] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:29] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:29] [TRT] [W] Weights [name=037_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:29] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:29] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Completed creating engine.
[01/27/2023-17:30:29] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[01/27/2023-17:30:29] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
Serialized the TensorRT engine to file: yolov4-tiny-416.trt
+ cp /tensorrt_demos/yolo/yolov4-tiny-416.trt /tensorrt_models/yolov4-tiny-416.trt
+ for model in ${YOLO_MODELS//,/ }
+ python3 yolo_to_onnx.py -m yolov7-tiny-416
Parsing DarkNet cfg file...
Building ONNX graph...
graph yolov7-tiny-416 (
  %000_net[FLOAT, 1x3x416x416]
) optional inputs with matching initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]
  %001_convolutional_bn_mean[FLOAT, 32]
  %001_convolutional_bn_var[FLOAT, 32]
  %001_convolutional_conv_weights[FLOAT, 32x3x3x3]
  %002_convolutional_bn_scale[FLOAT, 64]
  %002_convolutional_bn_bias[FLOAT, 64]
  %002_convolutional_bn_mean[FLOAT, 64]
  %002_convolutional_bn_var[FLOAT, 64]
  %002_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %003_convolutional_bn_scale[FLOAT, 32]
  %003_convolutional_bn_bias[FLOAT, 32]
  %003_convolutional_bn_mean[FLOAT, 32]
  %003_convolutional_bn_var[FLOAT, 32]
  %003_convolutional_conv_weights[FLOAT, 32x64x1x1]
  %005_convolutional_bn_scale[FLOAT, 32]
  %005_convolutional_bn_bias[FLOAT, 32]
  %005_convolutional_bn_mean[FLOAT, 32]
  %005_convolutional_bn_var[FLOAT, 32]
  %005_convolutional_conv_weights[FLOAT, 32x64x1x1]
  %006_convolutional_bn_scale[FLOAT, 32]
  %006_convolutional_bn_bias[FLOAT, 32]
  %006_convolutional_bn_mean[FLOAT, 32]
  %006_convolutional_bn_var[FLOAT, 32]
  %006_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %007_convolutional_bn_scale[FLOAT, 32]
  %007_convolutional_bn_bias[FLOAT, 32]
  %007_convolutional_bn_mean[FLOAT, 32]
  %007_convolutional_bn_var[FLOAT, 32]
  %007_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %009_convolutional_bn_scale[FLOAT, 64]
  %009_convolutional_bn_bias[FLOAT, 64]
  %009_convolutional_bn_mean[FLOAT, 64]
  %009_convolutional_bn_var[FLOAT, 64]
  %009_convolutional_conv_weights[FLOAT, 64x128x1x1]
  %011_convolutional_bn_scale[FLOAT, 64]
  %011_convolutional_bn_bias[FLOAT, 64]
  %011_convolutional_bn_mean[FLOAT, 64]
  %011_convolutional_bn_var[FLOAT, 64]
  %011_convolutional_conv_weights[FLOAT, 64x64x1x1]
  %013_convolutional_bn_scale[FLOAT, 64]
  %013_convolutional_bn_bias[FLOAT, 64]
  %013_convolutional_bn_mean[FLOAT, 64]
  %013_convolutional_bn_var[FLOAT, 64]
  %013_convolutional_conv_weights[FLOAT, 64x64x1x1]
  %014_convolutional_bn_scale[FLOAT, 64]
  %014_convolutional_bn_bias[FLOAT, 64]
  %014_convolutional_bn_mean[FLOAT, 64]
  %014_convolutional_bn_var[FLOAT, 64]
  %014_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %015_convolutional_bn_scale[FLOAT, 64]
  %015_convolutional_bn_bias[FLOAT, 64]
  %015_convolutional_bn_mean[FLOAT, 64]
  %015_convolutional_bn_var[FLOAT, 64]
  %015_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %017_convolutional_bn_scale[FLOAT, 128]
  %017_convolutional_bn_bias[FLOAT, 128]
  %017_convolutional_bn_mean[FLOAT, 128]
  %017_convolutional_bn_var[FLOAT, 128]
  %017_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %019_convolutional_bn_scale[FLOAT, 128]
  %019_convolutional_bn_bias[FLOAT, 128]
  %019_convolutional_bn_mean[FLOAT, 128]
  %019_convolutional_bn_var[FLOAT, 128]
  %019_convolutional_conv_weights[FLOAT, 128x128x1x1]
  %021_convolutional_bn_scale[FLOAT, 128]
  %021_convolutional_bn_bias[FLOAT, 128]
  %021_convolutional_bn_mean[FLOAT, 128]
  %021_convolutional_bn_var[FLOAT, 128]
  %021_convolutional_conv_weights[FLOAT, 128x128x1x1]
  %022_convolutional_bn_scale[FLOAT, 128]
  %022_convolutional_bn_bias[FLOAT, 128]
  %022_convolutional_bn_mean[FLOAT, 128]
  %022_convolutional_bn_var[FLOAT, 128]
  %022_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %023_convolutional_bn_scale[FLOAT, 128]
  %023_convolutional_bn_bias[FLOAT, 128]
  %023_convolutional_bn_mean[FLOAT, 128]
  %023_convolutional_bn_var[FLOAT, 128]
  %023_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %025_convolutional_bn_scale[FLOAT, 256]
  %025_convolutional_bn_bias[FLOAT, 256]
  %025_convolutional_bn_mean[FLOAT, 256]
  %025_convolutional_bn_var[FLOAT, 256]
  %025_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %027_convolutional_bn_scale[FLOAT, 256]
  %027_convolutional_bn_bias[FLOAT, 256]
  %027_convolutional_bn_mean[FLOAT, 256]
  %027_convolutional_bn_var[FLOAT, 256]
  %027_convolutional_conv_weights[FLOAT, 256x256x1x1]
  %029_convolutional_bn_scale[FLOAT, 256]
  %029_convolutional_bn_bias[FLOAT, 256]
  %029_convolutional_bn_mean[FLOAT, 256]
  %029_convolutional_bn_var[FLOAT, 256]
  %029_convolutional_conv_weights[FLOAT, 256x256x1x1]
  %030_convolutional_bn_scale[FLOAT, 256]
  %030_convolutional_bn_bias[FLOAT, 256]
  %030_convolutional_bn_mean[FLOAT, 256]
  %030_convolutional_bn_var[FLOAT, 256]
  %030_convolutional_conv_weights[FLOAT, 256x256x3x3]
  %031_convolutional_bn_scale[FLOAT, 256]
  %031_convolutional_bn_bias[FLOAT, 256]
  %031_convolutional_bn_mean[FLOAT, 256]
  %031_convolutional_bn_var[FLOAT, 256]
  %031_convolutional_conv_weights[FLOAT, 256x256x3x3]
  %033_convolutional_bn_scale[FLOAT, 512]
  %033_convolutional_bn_bias[FLOAT, 512]
  %033_convolutional_bn_mean[FLOAT, 512]
  %033_convolutional_bn_var[FLOAT, 512]
  %033_convolutional_conv_weights[FLOAT, 512x1024x1x1]
  %034_convolutional_bn_scale[FLOAT, 256]
  %034_convolutional_bn_bias[FLOAT, 256]
  %034_convolutional_bn_mean[FLOAT, 256]
  %034_convolutional_bn_var[FLOAT, 256]
  %034_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %036_convolutional_bn_scale[FLOAT, 256]
  %036_convolutional_bn_bias[FLOAT, 256]
  %036_convolutional_bn_mean[FLOAT, 256]
  %036_convolutional_bn_var[FLOAT, 256]
  %036_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %043_convolutional_bn_scale[FLOAT, 256]
  %043_convolutional_bn_bias[FLOAT, 256]
  %043_convolutional_bn_mean[FLOAT, 256]
  %043_convolutional_bn_var[FLOAT, 256]
  %043_convolutional_conv_weights[FLOAT, 256x1024x1x1]
  %045_convolutional_bn_scale[FLOAT, 256]
  %045_convolutional_bn_bias[FLOAT, 256]
  %045_convolutional_bn_mean[FLOAT, 256]
  %045_convolutional_bn_var[FLOAT, 256]
  %045_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %046_convolutional_bn_scale[FLOAT, 128]
  %046_convolutional_bn_bias[FLOAT, 128]
  %046_convolutional_bn_mean[FLOAT, 128]
  %046_convolutional_bn_var[FLOAT, 128]
  %046_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %047_upsample_scale[FLOAT, 4]
  %047_upsample_roi[FLOAT, 4]
  %049_convolutional_bn_scale[FLOAT, 128]
  %049_convolutional_bn_bias[FLOAT, 128]
  %049_convolutional_bn_mean[FLOAT, 128]
  %049_convolutional_bn_var[FLOAT, 128]
  %049_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %051_convolutional_bn_scale[FLOAT, 64]
  %051_convolutional_bn_bias[FLOAT, 64]
  %051_convolutional_bn_mean[FLOAT, 64]
  %051_convolutional_bn_var[FLOAT, 64]
  %051_convolutional_conv_weights[FLOAT, 64x256x1x1]
  %053_convolutional_bn_scale[FLOAT, 64]
  %053_convolutional_bn_bias[FLOAT, 64]
  %053_convolutional_bn_mean[FLOAT, 64]
  %053_convolutional_bn_var[FLOAT, 64]
  %053_convolutional_conv_weights[FLOAT, 64x256x1x1]
  %054_convolutional_bn_scale[FLOAT, 64]
  %054_convolutional_bn_bias[FLOAT, 64]
  %054_convolutional_bn_mean[FLOAT, 64]
  %054_convolutional_bn_var[FLOAT, 64]
  %054_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %055_convolutional_bn_scale[FLOAT, 64]
  %055_convolutional_bn_bias[FLOAT, 64]
  %055_convolutional_bn_mean[FLOAT, 64]
  %055_convolutional_bn_var[FLOAT, 64]
  %055_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %057_convolutional_bn_scale[FLOAT, 128]
  %057_convolutional_bn_bias[FLOAT, 128]
  %057_convolutional_bn_mean[FLOAT, 128]
  %057_convolutional_bn_var[FLOAT, 128]
  %057_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %058_convolutional_bn_scale[FLOAT, 64]
  %058_convolutional_bn_bias[FLOAT, 64]
  %058_convolutional_bn_mean[FLOAT, 64]
  %058_convolutional_bn_var[FLOAT, 64]
  %058_convolutional_conv_weights[FLOAT, 64x128x1x1]
  %059_upsample_scale[FLOAT, 4]
  %059_upsample_roi[FLOAT, 4]
  %061_convolutional_bn_scale[FLOAT, 64]
  %061_convolutional_bn_bias[FLOAT, 64]
  %061_convolutional_bn_mean[FLOAT, 64]
  %061_convolutional_bn_var[FLOAT, 64]
  %061_convolutional_conv_weights[FLOAT, 64x128x1x1]
  %063_convolutional_bn_scale[FLOAT, 32]
  %063_convolutional_bn_bias[FLOAT, 32]
  %063_convolutional_bn_mean[FLOAT, 32]
  %063_convolutional_bn_var[FLOAT, 32]
  %063_convolutional_conv_weights[FLOAT, 32x128x1x1]
  %065_convolutional_bn_scale[FLOAT, 32]
  %065_convolutional_bn_bias[FLOAT, 32]
  %065_convolutional_bn_mean[FLOAT, 32]
  %065_convolutional_bn_var[FLOAT, 32]
  %065_convolutional_conv_weights[FLOAT, 32x128x1x1]
  %066_convolutional_bn_scale[FLOAT, 32]
  %066_convolutional_bn_bias[FLOAT, 32]
  %066_convolutional_bn_mean[FLOAT, 32]
  %066_convolutional_bn_var[FLOAT, 32]
  %066_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %067_convolutional_bn_scale[FLOAT, 32]
  %067_convolutional_bn_bias[FLOAT, 32]
  %067_convolutional_bn_mean[FLOAT, 32]
  %067_convolutional_bn_var[FLOAT, 32]
  %067_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %069_convolutional_bn_scale[FLOAT, 64]
  %069_convolutional_bn_bias[FLOAT, 64]
  %069_convolutional_bn_mean[FLOAT, 64]
  %069_convolutional_bn_var[FLOAT, 64]
  %069_convolutional_conv_weights[FLOAT, 64x128x1x1]
  %070_convolutional_bn_scale[FLOAT, 128]
  %070_convolutional_bn_bias[FLOAT, 128]
  %070_convolutional_bn_mean[FLOAT, 128]
  %070_convolutional_bn_var[FLOAT, 128]
  %070_convolutional_conv_weights[FLOAT, 128x64x3x3]
  %072_convolutional_bn_scale[FLOAT, 64]
  %072_convolutional_bn_bias[FLOAT, 64]
  %072_convolutional_bn_mean[FLOAT, 64]
  %072_convolutional_bn_var[FLOAT, 64]
  %072_convolutional_conv_weights[FLOAT, 64x256x1x1]
  %074_convolutional_bn_scale[FLOAT, 64]
  %074_convolutional_bn_bias[FLOAT, 64]
  %074_convolutional_bn_mean[FLOAT, 64]
  %074_convolutional_bn_var[FLOAT, 64]
  %074_convolutional_conv_weights[FLOAT, 64x256x1x1]
  %075_convolutional_bn_scale[FLOAT, 64]
  %075_convolutional_bn_bias[FLOAT, 64]
  %075_convolutional_bn_mean[FLOAT, 64]
  %075_convolutional_bn_var[FLOAT, 64]
  %075_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %076_convolutional_bn_scale[FLOAT, 64]
  %076_convolutional_bn_bias[FLOAT, 64]
  %076_convolutional_bn_mean[FLOAT, 64]
  %076_convolutional_bn_var[FLOAT, 64]
  %076_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %078_convolutional_bn_scale[FLOAT, 128]
  %078_convolutional_bn_bias[FLOAT, 128]
  %078_convolutional_bn_mean[FLOAT, 128]
  %078_convolutional_bn_var[FLOAT, 128]
  %078_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %079_convolutional_bn_scale[FLOAT, 256]
  %079_convolutional_bn_bias[FLOAT, 256]
  %079_convolutional_bn_mean[FLOAT, 256]
  %079_convolutional_bn_var[FLOAT, 256]
  %079_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %081_convolutional_bn_scale[FLOAT, 128]
  %081_convolutional_bn_bias[FLOAT, 128]
  %081_convolutional_bn_mean[FLOAT, 128]
  %081_convolutional_bn_var[FLOAT, 128]
  %081_convolutional_conv_weights[FLOAT, 128x512x1x1]
  %083_convolutional_bn_scale[FLOAT, 128]
  %083_convolutional_bn_bias[FLOAT, 128]
  %083_convolutional_bn_mean[FLOAT, 128]
  %083_convolutional_bn_var[FLOAT, 128]
  %083_convolutional_conv_weights[FLOAT, 128x512x1x1]
  %084_convolutional_bn_scale[FLOAT, 128]
  %084_convolutional_bn_bias[FLOAT, 128]
  %084_convolutional_bn_mean[FLOAT, 128]
  %084_convolutional_bn_var[FLOAT, 128]
  %084_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %085_convolutional_bn_scale[FLOAT, 128]
  %085_convolutional_bn_bias[FLOAT, 128]
  %085_convolutional_bn_mean[FLOAT, 128]
  %085_convolutional_bn_var[FLOAT, 128]
  %085_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %087_convolutional_bn_scale[FLOAT, 256]
  %087_convolutional_bn_bias[FLOAT, 256]
  %087_convolutional_bn_mean[FLOAT, 256]
  %087_convolutional_bn_var[FLOAT, 256]
  %087_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %089_convolutional_bn_scale[FLOAT, 128]
  %089_convolutional_bn_bias[FLOAT, 128]
  %089_convolutional_bn_mean[FLOAT, 128]
  %089_convolutional_bn_var[FLOAT, 128]
  %089_convolutional_conv_weights[FLOAT, 128x64x3x3]
  %090_convolutional_conv_bias[FLOAT, 255]
  %090_convolutional_conv_weights[FLOAT, 255x128x1x1]
  %093_convolutional_bn_scale[FLOAT, 256]
  %093_convolutional_bn_bias[FLOAT, 256]
  %093_convolutional_bn_mean[FLOAT, 256]
  %093_convolutional_bn_var[FLOAT, 256]
  %093_convolutional_conv_weights[FLOAT, 256x128x3x3]
  %094_convolutional_conv_bias[FLOAT, 255]
  %094_convolutional_conv_weights[FLOAT, 255x256x1x1]
  %097_convolutional_bn_scale[FLOAT, 512]
  %097_convolutional_bn_bias[FLOAT, 512]
  %097_convolutional_bn_mean[FLOAT, 512]
  %097_convolutional_bn_var[FLOAT, 512]
  %097_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %098_convolutional_conv_bias[FLOAT, 255]
  %098_convolutional_conv_weights[FLOAT, 255x512x1x1]
) {
  %001_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%000_net, %001_convolutional_conv_weights)
  %001_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%001_convolutional, %001_convolutional_bn_scale, %001_convolutional_bn_bias, %001_convolutional_bn_mean, %001_convolutional_bn_var)
  %001_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%001_convolutional_bn)
  %002_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%001_convolutional_lrelu, %002_convolutional_conv_weights)
  %002_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%002_convolutional, %002_convolutional_bn_scale, %002_convolutional_bn_bias, %002_convolutional_bn_mean, %002_convolutional_bn_var)
  %002_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%002_convolutional_bn)
  %003_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%002_convolutional_lrelu, %003_convolutional_conv_weights)
  %003_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%003_convolutional, %003_convolutional_bn_scale, %003_convolutional_bn_bias, %003_convolutional_bn_mean, %003_convolutional_bn_var)
  %003_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%003_convolutional_bn)
  %005_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%002_convolutional_lrelu, %005_convolutional_conv_weights)
  %005_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%005_convolutional, %005_convolutional_bn_scale, %005_convolutional_bn_bias, %005_convolutional_bn_mean, %005_convolutional_bn_var)
  %005_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%005_convolutional_bn)
  %006_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%005_convolutional_lrelu, %006_convolutional_conv_weights)
  %006_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%006_convolutional, %006_convolutional_bn_scale, %006_convolutional_bn_bias, %006_convolutional_bn_mean, %006_convolutional_bn_var)
  %006_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%006_convolutional_bn)
  %007_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%006_convolutional_lrelu, %007_convolutional_conv_weights)
  %007_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%007_convolutional, %007_convolutional_bn_scale, %007_convolutional_bn_bias, %007_convolutional_bn_mean, %007_convolutional_bn_var)
  %007_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%007_convolutional_bn)
  %008_route = Concat[axis = 1](%003_convolutional_lrelu, %005_convolutional_lrelu, %006_convolutional_lrelu, %007_convolutional_lrelu)
  %009_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%008_route, %009_convolutional_conv_weights)
  %009_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%009_convolutional, %009_convolutional_bn_scale, %009_convolutional_bn_bias, %009_convolutional_bn_mean, %009_convolutional_bn_var)
  %009_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%009_convolutional_bn)
  %010_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%009_convolutional_lrelu)
  %011_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%010_maxpool, %011_convolutional_conv_weights)
  %011_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%011_convolutional, %011_convolutional_bn_scale, %011_convolutional_bn_bias, %011_convolutional_bn_mean, %011_convolutional_bn_var)
  %011_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%011_convolutional_bn)
  %013_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%010_maxpool, %013_convolutional_conv_weights)
  %013_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%013_convolutional, %013_convolutional_bn_scale, %013_convolutional_bn_bias, %013_convolutional_bn_mean, %013_convolutional_bn_var)
  %013_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%013_convolutional_bn)
  %014_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%013_convolutional_lrelu, %014_convolutional_conv_weights)
  %014_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%014_convolutional, %014_convolutional_bn_scale, %014_convolutional_bn_bias, %014_convolutional_bn_mean, %014_convolutional_bn_var)
  %014_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%014_convolutional_bn)
  %015_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%014_convolutional_lrelu, %015_convolutional_conv_weights)
  %015_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%015_convolutional, %015_convolutional_bn_scale, %015_convolutional_bn_bias, %015_convolutional_bn_mean, %015_convolutional_bn_var)
  %015_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%015_convolutional_bn)
  %016_route = Concat[axis = 1](%011_convolutional_lrelu, %013_convolutional_lrelu, %014_convolutional_lrelu, %015_convolutional_lrelu)
  %017_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%016_route, %017_convolutional_conv_weights)
  %017_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%017_convolutional, %017_convolutional_bn_scale, %017_convolutional_bn_bias, %017_convolutional_bn_mean, %017_convolutional_bn_var)
  %017_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%017_convolutional_bn)
  %018_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%017_convolutional_lrelu)
  %019_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%018_maxpool, %019_convolutional_conv_weights)
  %019_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%019_convolutional, %019_convolutional_bn_scale, %019_convolutional_bn_bias, %019_convolutional_bn_mean, %019_convolutional_bn_var)
  %019_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%019_convolutional_bn)
  %021_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%018_maxpool, %021_convolutional_conv_weights)
  %021_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%021_convolutional, %021_convolutional_bn_scale, %021_convolutional_bn_bias, %021_convolutional_bn_mean, %021_convolutional_bn_var)
  %021_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%021_convolutional_bn)
  %022_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%021_convolutional_lrelu, %022_convolutional_conv_weights)
  %022_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%022_convolutional, %022_convolutional_bn_scale, %022_convolutional_bn_bias, %022_convolutional_bn_mean, %022_convolutional_bn_var)
  %022_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%022_convolutional_bn)
  %023_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%022_convolutional_lrelu, %023_convolutional_conv_weights)
  %023_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%023_convolutional, %023_convolutional_bn_scale, %023_convolutional_bn_bias, %023_convolutional_bn_mean, %023_convolutional_bn_var)
  %023_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%023_convolutional_bn)
  %024_route = Concat[axis = 1](%019_convolutional_lrelu, %021_convolutional_lrelu, %022_convolutional_lrelu, %023_convolutional_lrelu)
  %025_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%024_route, %025_convolutional_conv_weights)
  %025_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%025_convolutional, %025_convolutional_bn_scale, %025_convolutional_bn_bias, %025_convolutional_bn_mean, %025_convolutional_bn_var)
  %025_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%025_convolutional_bn)
  %026_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%025_convolutional_lrelu)
  %027_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%026_maxpool, %027_convolutional_conv_weights)
  %027_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%027_convolutional, %027_convolutional_bn_scale, %027_convolutional_bn_bias, %027_convolutional_bn_mean, %027_convolutional_bn_var)
  %027_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%027_convolutional_bn)
  %029_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%026_maxpool, %029_convolutional_conv_weights)
  %029_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%029_convolutional, %029_convolutional_bn_scale, %029_convolutional_bn_bias, %029_convolutional_bn_mean, %029_convolutional_bn_var)
  %029_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%029_convolutional_bn)
  %030_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%029_convolutional_lrelu, %030_convolutional_conv_weights)
  %030_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%030_convolutional, %030_convolutional_bn_scale, %030_convolutional_bn_bias, %030_convolutional_bn_mean, %030_convolutional_bn_var)
  %030_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%030_convolutional_bn)
  %031_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%030_convolutional_lrelu, %031_convolutional_conv_weights)
  %031_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%031_convolutional, %031_convolutional_bn_scale, %031_convolutional_bn_bias, %031_convolutional_bn_mean, %031_convolutional_bn_var)
  %031_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%031_convolutional_bn)
  %032_route = Concat[axis = 1](%027_convolutional_lrelu, %029_convolutional_lrelu, %030_convolutional_lrelu, %031_convolutional_lrelu)
  %033_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%032_route, %033_convolutional_conv_weights)
  %033_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%033_convolutional, %033_convolutional_bn_scale, %033_convolutional_bn_bias, %033_convolutional_bn_mean, %033_convolutional_bn_var)
  %033_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%033_convolutional_bn)
  %034_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%033_convolutional_lrelu, %034_convolutional_conv_weights)
  %034_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%034_convolutional, %034_convolutional_bn_scale, %034_convolutional_bn_bias, %034_convolutional_bn_mean, %034_convolutional_bn_var)
  %034_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%034_convolutional_bn)
  %036_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%033_convolutional_lrelu, %036_convolutional_conv_weights)
  %036_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%036_convolutional, %036_convolutional_bn_scale, %036_convolutional_bn_bias, %036_convolutional_bn_mean, %036_convolutional_bn_var)
  %036_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%036_convolutional_bn)
  %037_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [5, 5], strides = [1, 1]](%036_convolutional_lrelu)
  %039_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [9, 9], strides = [1, 1]](%036_convolutional_lrelu)
  %041_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [13, 13], strides = [1, 1]](%036_convolutional_lrelu)
  %042_route = Concat[axis = 1](%041_maxpool, %039_maxpool, %037_maxpool, %036_convolutional_lrelu)
  %043_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%042_route, %043_convolutional_conv_weights)
  %043_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%043_convolutional, %043_convolutional_bn_scale, %043_convolutional_bn_bias, %043_convolutional_bn_mean, %043_convolutional_bn_var)
  %043_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%043_convolutional_bn)
  %044_route = Concat[axis = 1](%034_convolutional_lrelu, %043_convolutional_lrelu)
  %045_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%044_route, %045_convolutional_conv_weights)
  %045_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%045_convolutional, %045_convolutional_bn_scale, %045_convolutional_bn_bias, %045_convolutional_bn_mean, %045_convolutional_bn_var)
  %045_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%045_convolutional_bn)
  %046_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%045_convolutional_lrelu, %046_convolutional_conv_weights)
  %046_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%046_convolutional, %046_convolutional_bn_scale, %046_convolutional_bn_bias, %046_convolutional_bn_mean, %046_convolutional_bn_var)
  %046_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%046_convolutional_bn)
  %047_upsample = Resize[coordinate_transformation_mode = 'asymmetric', mode = 'nearest', nearest_mode = 'floor'](%046_convolutional_lrelu, %047_upsample_roi, %047_upsample_scale)
  %049_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%025_convolutional_lrelu, %049_convolutional_conv_weights)
  %049_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%049_convolutional, %049_convolutional_bn_scale, %049_convolutional_bn_bias, %049_convolutional_bn_mean, %049_convolutional_bn_var)
  %049_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%049_convolutional_bn)
  %050_route = Concat[axis = 1](%049_convolutional_lrelu, %047_upsample)
  %051_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%050_route, %051_convolutional_conv_weights)
  %051_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%051_convolutional, %051_convolutional_bn_scale, %051_convolutional_bn_bias, %051_convolutional_bn_mean, %051_convolutional_bn_var)
  %051_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%051_convolutional_bn)
  %053_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%050_route, %053_convolutional_conv_weights)
  %053_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%053_convolutional, %053_convolutional_bn_scale, %053_convolutional_bn_bias, %053_convolutional_bn_mean, %053_convolutional_bn_var)
  %053_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%053_convolutional_bn)
  %054_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%053_convolutional_lrelu, %054_convolutional_conv_weights)
  %054_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%054_convolutional, %054_convolutional_bn_scale, %054_convolutional_bn_bias, %054_convolutional_bn_mean, %054_convolutional_bn_var)
  %054_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%054_convolutional_bn)
  %055_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%054_convolutional_lrelu, %055_convolutional_conv_weights)
  %055_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%055_convolutional, %055_convolutional_bn_scale, %055_convolutional_bn_bias, %055_convolutional_bn_mean, %055_convolutional_bn_var)
  %055_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%055_convolutional_bn)
  %056_route = Concat[axis = 1](%051_convolutional_lrelu, %053_convolutional_lrelu, %054_convolutional_lrelu, %055_convolutional_lrelu)
  %057_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%056_route, %057_convolutional_conv_weights)
  %057_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%057_convolutional, %057_convolutional_bn_scale, %057_convolutional_bn_bias, %057_convolutional_bn_mean, %057_convolutional_bn_var)
  %057_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%057_convolutional_bn)
  %058_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%057_convolutional_lrelu, %058_convolutional_conv_weights)
  %058_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%058_convolutional, %058_convolutional_bn_scale, %058_convolutional_bn_bias, %058_convolutional_bn_mean, %058_convolutional_bn_var)
  %058_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%058_convolutional_bn)
  %059_upsample = Resize[coordinate_transformation_mode = 'asymmetric', mode = 'nearest', nearest_mode = 'floor'](%058_convolutional_lrelu, %059_upsample_roi, %059_upsample_scale)
  %061_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%017_convolutional_lrelu, %061_convolutional_conv_weights)
  %061_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%061_convolutional, %061_convolutional_bn_scale, %061_convolutional_bn_bias, %061_convolutional_bn_mean, %061_convolutional_bn_var)
  %061_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%061_convolutional_bn)
  %062_route = Concat[axis = 1](%061_convolutional_lrelu, %059_upsample)
  %063_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%062_route, %063_convolutional_conv_weights)
  %063_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%063_convolutional, %063_convolutional_bn_scale, %063_convolutional_bn_bias, %063_convolutional_bn_mean, %063_convolutional_bn_var)
  %063_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%063_convolutional_bn)
  %065_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%062_route, %065_convolutional_conv_weights)
  %065_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%065_convolutional, %065_convolutional_bn_scale, %065_convolutional_bn_bias, %065_convolutional_bn_mean, %065_convolutional_bn_var)
  %065_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%065_convolutional_bn)
  %066_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%065_convolutional_lrelu, %066_convolutional_conv_weights)
  %066_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%066_convolutional, %066_convolutional_bn_scale, %066_convolutional_bn_bias, %066_convolutional_bn_mean, %066_convolutional_bn_var)
  %066_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%066_convolutional_bn)
  %067_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%066_convolutional_lrelu, %067_convolutional_conv_weights)
  %067_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%067_convolutional, %067_convolutional_bn_scale, %067_convolutional_bn_bias, %067_convolutional_bn_mean, %067_convolutional_bn_var)
  %067_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%067_convolutional_bn)
  %068_route = Concat[axis = 1](%063_convolutional_lrelu, %065_convolutional_lrelu, %066_convolutional_lrelu, %067_convolutional_lrelu)
  %069_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%068_route, %069_convolutional_conv_weights)
  %069_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%069_convolutional, %069_convolutional_bn_scale, %069_convolutional_bn_bias, %069_convolutional_bn_mean, %069_convolutional_bn_var)
  %069_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%069_convolutional_bn)
  %070_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%069_convolutional_lrelu, %070_convolutional_conv_weights)
  %070_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%070_convolutional, %070_convolutional_bn_scale, %070_convolutional_bn_bias, %070_convolutional_bn_mean, %070_convolutional_bn_var)
  %070_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%070_convolutional_bn)
  %071_route = Concat[axis = 1](%070_convolutional_lrelu, %057_convolutional_lrelu)
  %072_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%071_route, %072_convolutional_conv_weights)
  %072_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%072_convolutional, %072_convolutional_bn_scale, %072_convolutional_bn_bias, %072_convolutional_bn_mean, %072_convolutional_bn_var)
  %072_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%072_convolutional_bn)
  %074_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%071_route, %074_convolutional_conv_weights)
  %074_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%074_convolutional, %074_convolutional_bn_scale, %074_convolutional_bn_bias, %074_convolutional_bn_mean, %074_convolutional_bn_var)
  %074_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%074_convolutional_bn)
  %075_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%074_convolutional_lrelu, %075_convolutional_conv_weights)
  %075_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%075_convolutional, %075_convolutional_bn_scale, %075_convolutional_bn_bias, %075_convolutional_bn_mean, %075_convolutional_bn_var)
  %075_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%075_convolutional_bn)
  %076_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%075_convolutional_lrelu, %076_convolutional_conv_weights)
  %076_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%076_convolutional, %076_convolutional_bn_scale, %076_convolutional_bn_bias, %076_convolutional_bn_mean, %076_convolutional_bn_var)
  %076_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%076_convolutional_bn)
  %077_route = Concat[axis = 1](%072_convolutional_lrelu, %074_convolutional_lrelu, %075_convolutional_lrelu, %076_convolutional_lrelu)
  %078_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%077_route, %078_convolutional_conv_weights)
  %078_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%078_convolutional, %078_convolutional_bn_scale, %078_convolutional_bn_bias, %078_convolutional_bn_mean, %078_convolutional_bn_var)
  %078_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%078_convolutional_bn)
  %079_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%078_convolutional_lrelu, %079_convolutional_conv_weights)
  %079_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%079_convolutional, %079_convolutional_bn_scale, %079_convolutional_bn_bias, %079_convolutional_bn_mean, %079_convolutional_bn_var)
  %079_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%079_convolutional_bn)
  %080_route = Concat[axis = 1](%079_convolutional_lrelu, %045_convolutional_lrelu)
  %081_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%080_route, %081_convolutional_conv_weights)
  %081_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%081_convolutional, %081_convolutional_bn_scale, %081_convolutional_bn_bias, %081_convolutional_bn_mean, %081_convolutional_bn_var)
  %081_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%081_convolutional_bn)
  %083_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%080_route, %083_convolutional_conv_weights)
  %083_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%083_convolutional, %083_convolutional_bn_scale, %083_convolutional_bn_bias, %083_convolutional_bn_mean, %083_convolutional_bn_var)
  %083_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%083_convolutional_bn)
  %084_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%083_convolutional_lrelu, %084_convolutional_conv_weights)
  %084_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%084_convolutional, %084_convolutional_bn_scale, %084_convolutional_bn_bias, %084_convolutional_bn_mean, %084_convolutional_bn_var)
  %084_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%084_convolutional_bn)
  %085_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%084_convolutional_lrelu, %085_convolutional_conv_weights)
  %085_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%085_convolutional, %085_convolutional_bn_scale, %085_convolutional_bn_bias, %085_convolutional_bn_mean, %085_convolutional_bn_var)
  %085_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%085_convolutional_bn)
  %086_route = Concat[axis = 1](%081_convolutional_lrelu, %083_convolutional_lrelu, %084_convolutional_lrelu, %085_convolutional_lrelu)
  %087_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%086_route, %087_convolutional_conv_weights)
  %087_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%087_convolutional, %087_convolutional_bn_scale, %087_convolutional_bn_bias, %087_convolutional_bn_mean, %087_convolutional_bn_var)
  %087_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%087_convolutional_bn)
  %089_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%069_convolutional_lrelu, %089_convolutional_conv_weights)
  %089_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%089_convolutional, %089_convolutional_bn_scale, %089_convolutional_bn_bias, %089_convolutional_bn_mean, %089_convolutional_bn_var)
  %089_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%089_convolutional_bn)
  %090_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%089_convolutional_lrelu, %090_convolutional_conv_weights, %090_convolutional_conv_bias)
  %090_convolutional_lgx = Sigmoid(%090_convolutional)
  %093_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%078_convolutional_lrelu, %093_convolutional_conv_weights)
  %093_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%093_convolutional, %093_convolutional_bn_scale, %093_convolutional_bn_bias, %093_convolutional_bn_mean, %093_convolutional_bn_var)
  %093_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%093_convolutional_bn)
  %094_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%093_convolutional_lrelu, %094_convolutional_conv_weights, %094_convolutional_conv_bias)
  %094_convolutional_lgx = Sigmoid(%094_convolutional)
  %097_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%087_convolutional_lrelu, %097_convolutional_conv_weights)
  %097_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%097_convolutional, %097_convolutional_bn_scale, %097_convolutional_bn_bias, %097_convolutional_bn_mean, %097_convolutional_bn_var)
  %097_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%097_convolutional_bn)
  %098_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%097_convolutional_lrelu, %098_convolutional_conv_weights, %098_convolutional_conv_bias)
  %098_convolutional_lgx = Sigmoid(%098_convolutional)
  return %090_convolutional_lgx, %094_convolutional_lgx, %098_convolutional_lgx
}
Checking ONNX model...
Saving ONNX file...
Done.
+ python3 onnx_to_tensorrt.py -m yolov7-tiny-416
Loading the ONNX file...
Adding yolo_layer plugins.
Adding a concatenated output as "detections".
Naming the input tensort as "input".
Building the TensorRT engine.  This would take a while...
(Use "--verbose" or "-v" to enable verbose logging.)
onnx_to_tensorrt.py:147: DeprecationWarning: Use network created with NetworkDefinitionCreationFlag::EXPLICIT_BATCH flag instead.
  builder.max_batch_size = MAX_BATCH_SIZE
onnx_to_tensorrt.py:149: DeprecationWarning: Use set_memory_pool_limit instead.
  config.max_workspace_size = 1 << 30
onnx_to_tensorrt.py:172: DeprecationWarning: Use build_serialized_network instead.
  engine = builder.build_engine(network, config)
[01/27/2023-17:30:33] [TRT] [W] FP16 support requested on hardware without native FP16 support, performance will be negatively affected.
[01/27/2023-17:30:46] [TRT] [W] Weights [name=002_convolutional + 002_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:46] [TRT] [W] Weights [name=002_convolutional + 002_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:46] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn || 005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:46] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn || 005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:46] [TRT] [W] Weights [name=003_convolutional + 003_convolutional_bn || 005_convolutional + 005_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:46] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:46] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=006_convolutional + 006_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=006_convolutional + 006_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=007_convolutional + 007_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=009_convolutional + 009_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=009_convolutional + 009_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=009_convolutional + 009_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:54] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn || 013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:54] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:54] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn || 013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=011_convolutional + 011_convolutional_bn || 013_convolutional + 013_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=014_convolutional + 014_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=014_convolutional + 014_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=015_convolutional + 015_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=017_convolutional + 017_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=017_convolutional + 017_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:55] [TRT] [W] Weights [name=017_convolutional + 017_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:55] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:55] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn || 021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn || 021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=019_convolutional + 019_convolutional_bn || 021_convolutional + 021_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=022_convolutional + 022_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=022_convolutional + 022_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=023_convolutional + 023_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=025_convolutional + 025_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=025_convolutional + 025_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:56] [TRT] [W] Weights [name=025_convolutional + 025_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:56] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:56] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:57] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn || 029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:57] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn || 029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:57] [TRT] [W] Weights [name=027_convolutional + 027_convolutional_bn || 029_convolutional + 029_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:57] [TRT] [W] Weights [name=030_convolutional + 030_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:57] [TRT] [W] Weights [name=030_convolutional + 030_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:57] [TRT] [W] Weights [name=031_convolutional + 031_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:57] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:57] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.bias] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.bias] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=033_convolutional + 033_convolutional_bn.bias] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=034_convolutional + 034_convolutional_bn || 036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=034_convolutional + 034_convolutional_bn || 036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:58] [TRT] [W] Weights [name=034_convolutional + 034_convolutional_bn || 036_convolutional + 036_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:58] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:58] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:59] [TRT] [W] Weights [name=043_convolutional + 043_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:59] [TRT] [W] Weights [name=043_convolutional + 043_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:30:59] [TRT] [W] Weights [name=043_convolutional + 043_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:30:59] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:30:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=045_convolutional + 045_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=045_convolutional + 045_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=045_convolutional + 045_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=046_convolutional + 046_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=046_convolutional + 046_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=046_convolutional + 046_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=049_convolutional + 049_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=049_convolutional + 049_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:00] [TRT] [W] Weights [name=049_convolutional + 049_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:00] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:00] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=051_convolutional + 051_convolutional_bn || 053_convolutional + 053_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=054_convolutional + 054_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=054_convolutional + 054_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=055_convolutional + 055_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=057_convolutional + 057_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=058_convolutional + 058_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=058_convolutional + 058_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:01] [TRT] [W] Weights [name=058_convolutional + 058_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:01] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:01] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=061_convolutional + 061_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=061_convolutional + 061_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=061_convolutional + 061_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=063_convolutional + 063_convolutional_bn || 065_convolutional + 065_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=066_convolutional + 066_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=066_convolutional + 066_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=067_convolutional + 067_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=069_convolutional + 069_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=070_convolutional + 070_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:02] [TRT] [W] Weights [name=070_convolutional + 070_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:02] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:02] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=072_convolutional + 072_convolutional_bn || 074_convolutional + 074_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=075_convolutional + 075_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=076_convolutional + 076_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=078_convolutional + 078_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=079_convolutional + 079_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=079_convolutional + 079_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=081_convolutional + 081_convolutional_bn || 083_convolutional + 083_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=084_convolutional + 084_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=084_convolutional + 084_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=085_convolutional + 085_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:03] [TRT] [W] Weights [name=087_convolutional + 087_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:03] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:03] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:04] [TRT] [W] Weights [name=089_convolutional + 089_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:04] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:04] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:04] [TRT] [W] Weights [name=089_convolutional + 089_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:04] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:04] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:04] [TRT] [W] Weights [name=090_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:04] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:04] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:04] [TRT] [W] Weights [name=090_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:04] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:04] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:04] [TRT] [W] Weights [name=090_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:04] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:04] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=093_convolutional + 093_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=093_convolutional + 093_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=094_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=094_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=094_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=097_convolutional + 097_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:12] [TRT] [W] Weights [name=097_convolutional + 097_convolutional_bn.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:12] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:12] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:13] [TRT] [W] Weights [name=098_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:13] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:13] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:31:13] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:13] [TRT] [W] Weights [name=098_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:13] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:13] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:31:13] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[01/27/2023-17:31:13] [TRT] [W] Weights [name=098_convolutional.weight] had the following issues when converted to FP16:
[01/27/2023-17:31:13] [TRT] [W]  - Subnormal FP16 values detected.
[01/27/2023-17:31:13] [TRT] [W]  - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
[01/27/2023-17:31:13] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Completed creating engine.
[01/27/2023-17:31:13] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[01/27/2023-17:31:13] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
Serialized the TensorRT engine to file: yolov7-tiny-416.trt
+ cp /tensorrt_demos/yolo/yolov7-tiny-416.trt /tensorrt_models/yolov7-tiny-416.trt
root@liquidgpu:~#

bughattiveyron avatar Jan 27 '23 17:01 bughattiveyron

Im not seeing any data in my trt-models folder

Because you didn't enter it correctly

-v `pwd`/volume2/trtmodels so the folder is wrong

NickM-27 avatar Jan 27 '23 17:01 NickM-27

so I didnt realize I wasn't allowed to change the folder location

bughattiveyron avatar Jan 27 '23 17:01 bughattiveyron

so I didnt realize I wasn't allowed to change the folder location

Of course you are allowed to but then you need to make the subsequent change to the other volume map as well.

NickM-27 avatar Jan 27 '23 17:01 NickM-27

I just moved the models generated to my folder I created in volume2/trtmodels

2023-01-27 17:43:55.985589455  [2023-01-27 17:43:55] frigate.app                    INFO    : Starting Frigate (0.12.0-edbdbb7)
2023-01-27 17:43:56.016276767  [2023-01-27 17:43:56] peewee_migrate                 INFO    : Starting migrations
2023-01-27 17:43:56.044510321  [2023-01-27 17:43:56] peewee_migrate                 INFO    : There is nothing to migrate
2023-01-27 17:43:56.051767742  [2023-01-27 17:43:56] ws4py                          INFO    : Using epoll
2023-01-27 17:43:56.077632559  [2023-01-27 17:43:56] frigate.app                    INFO    : Output process started: 278
2023-01-27 17:43:56.092120619  [2023-01-27 17:43:56] ws4py                          INFO    : Using epoll
2023-01-27 17:43:56.092662232  [2023-01-27 17:43:56] frigate.app                    INFO    : Camera processor started for Front_Garage: 284
2023-01-27 17:43:56.098792527  [2023-01-27 17:43:56] frigate.app                    INFO    : Camera processor started for Back_Alley: 287
2023-01-27 17:43:56.108183316  [2023-01-27 17:43:56] frigate.app                    INFO    : Capture process started for Front_Garage: 288
2023-01-27 17:43:56.115850246  [2023-01-27 17:43:56] frigate.app                    INFO    : Capture process started for Back_Alley: 290
2023-01-27 17:43:56.202721868  [2023-01-27 17:43:56] detector.tensorrt              INFO    : Starting detection process: 277
2023-01-27 17:43:56.960616555  [2023-01-27 17:43:56] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] Init CUDA: CPU +189, GPU +0, now: CPU 243, GPU 194 (MiB)
2023-01-27 17:43:56.976570966  [2023-01-27 17:43:56] frigate.detectors.plugins.tensorrt INFO    : Loaded engine size: 34 MiB
2023-01-27 17:43:56.978131403  [2023-01-27 17:43:56] frigate.detectors.plugins.tensorrt WARNING : Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
2023-01-27 17:43:58.031384012  [2023-01-27 17:43:58] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +193, GPU +304, now: CPU 499, GPU 534 (MiB)
2023-01-27 17:43:58.286059517  [2023-01-27 17:43:58] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] Init cuDNN: CPU +111, GPU +46, now: CPU 610, GPU 580 (MiB)
2023-01-27 17:43:58.296731006  [2023-01-27 17:43:58] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +34, now: CPU 0, GPU 34 (MiB)
2023-01-27 17:43:58.296874721  [2023-01-27 17:43:58] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 575, GPU 572 (MiB)
2023-01-27 17:43:58.296938827  [2023-01-27 17:43:58] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 575, GPU 580 (MiB)
2023-01-27 17:43:58.297043356  [2023-01-27 17:43:58] frigate.detectors.plugins.tensorrt INFO    : [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +12, now: CPU 0, GPU 46 (MiB)
2023-01-27 17:44:12.205042559  [2023-01-27 17:44:12] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:48132]

bughattiveyron avatar Jan 27 '23 17:01 bughattiveyron

Looks like it's working as expected

NickM-27 avatar Jan 27 '23 17:01 NickM-27

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Feb 27 '23 00:02 github-actions[bot]