Tracking-Solov2-Deepsort
TensorRT bin file in config.yaml: serialize_path
In config.yaml, we are supposed to provide a serialize_path for the TensorRT model bin file. How do we get that file? As far as I can see, SOLOv2 only provides an ONNX model. Is there a script to convert the ONNX model to a TensorRT bin?
See https://github.com/chenjianqu/Solov2-TensorRT-CPP/readme.md. In step 3, run ./build/build_model ./config/config.yaml; it will convert the ONNX model to a TensorRT model.
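For reference, that conversion is the standard TensorRT ONNX-parsing flow (create a builder, parse the ONNX file, build the engine with a config, then serialize it to the file that serialize_path points to). Below is a minimal sketch of that flow, assuming TensorRT 8.x; the file names and the workspace size are placeholders, not the repository's actual values or code.

```cpp
// Minimal sketch of an ONNX -> TensorRT serialization step, assuming TensorRT 8.x.
// File names and workspace size are placeholders.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <memory>

class ConsoleLogger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    ConsoleLogger logger;

    // Builder, explicit-batch network, builder config, and ONNX parser.
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const uint32_t flags =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));

    // Parse the exported SOLOv2 ONNX model (path is an example).
    if (!parser->parseFromFile("SOLOV2_R50_FPN_1x.onnx",
                               static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model" << std::endl;
        return 1;
    }

    // Scratch memory for the builder; too small a value causes workspace errors.
    config->setMaxWorkspaceSize(4ULL << 30);  // 4 GiB, adjust to your GPU

    // Build the engine and serialize it to the file that serialize_path points to.
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        builder->buildEngineWithConfig(*network, *config));
    if (!engine) {
        std::cerr << "engine build failed" << std::endl;
        return 1;
    }
    auto blob = std::unique_ptr<nvinfer1::IHostMemory>(engine->serialize());
    std::ofstream out("solov2.bin", std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
    return 0;
}
```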
Thank you for your response. However, when I run build_model, it gets stuck after buildEngineWithConfig, after throwing "Detected invalid timing cache, setup a local cache instead", and then the process is killed after a few minutes.
config_file: ./config/config.yaml
SEGMENTOR_LOG_PATH:./segmentor_log.txt
SEGMENTOR_LOG_LEVEL:debug
SEGMENTOR_LOG_FLUSH:debug
createInferBuilder
[05/07/2022-16:00:40] [I] [TRT] [MemUsageChange] Init CUDA: CPU +301, GPU +0, now: CPU 303, GPU 236 (MiB)
createNetwork
createBuilderConfig
createParser
parseFromFile:/home/usman/solo-deepsort/SOLOv2.tensorRT/deploy/weights/SOLOV2_R50_FPN_1x.onnx
[05/07/2022-16:00:40] [I] [TRT] ----------------------------------------------------------------
[05/07/2022-16:00:40] [I] [TRT] Input filename: /home/usman/solo-deepsort/SOLOv2.tensorRT/deploy/weights/SOLOV2_R50_FPN_1x.onnx
[05/07/2022-16:00:40] [I] [TRT] ONNX IR version: 0.0.4
[05/07/2022-16:00:40] [I] [TRT] Opset version: 11
[05/07/2022-16:00:40] [I] [TRT] Producer name: pytorch
[05/07/2022-16:00:40] [I] [TRT] Producer version: 1.3
[05/07/2022-16:00:40] [I] [TRT] Domain:
[05/07/2022-16:00:40] [I] [TRT] Model version: 0
[05/07/2022-16:00:40] [I] [TRT] Doc string:
[05/07/2022-16:00:40] [I] [TRT] ----------------------------------------------------------------
[05/07/2022-16:00:40] [W] [TRT] /home/usman/solo-deepsort/onnx-tensorrt/onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
input shape:input (1, 3, 384, 1152)
output shape:cate_pred (3872, 80)
enableDLA
buildEngineWithConfig
[05/07/2022-16:00:41] [I] [TRT] [MemUsageSnapshot] Builder begin: CPU 740 MiB, GPU 792 MiB
[05/07/2022-16:00:42] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +69, GPU +68, now: CPU 911, GPU 1327 (MiB)
[05/07/2022-16:00:42] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 912, GPU 1337 (MiB)
[05/07/2022-16:00:42] [W] [TRT] Detected invalid timing cache, setup a local cache instead
Killed
Earlier, it was giving an error about the workspace size, which I fixed by calling config->setMaxWorkspaceSize(10_GiB);
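For anyone hitting the same workspace error, the fix boils down to raising the builder's workspace limit before building the engine. A minimal sketch, assuming TensorRT 8.x; the helper name is hypothetical, and 10ULL << 30 is the plain-byte equivalent of the project's 10_GiB user-defined literal.

```cpp
#include <NvInfer.h>
#include <cstddef>

// Hypothetical helper illustrating the workspace fix mentioned above,
// assuming TensorRT 8.x.
void raiseWorkspaceLimit(nvinfer1::IBuilderConfig& config) {
    constexpr size_t kWorkspaceBytes = 10ULL << 30;  // 10 GiB
    config.setMaxWorkspaceSize(kWorkspaceBytes);  // deprecated in newer 8.x releases
    // Non-deprecated equivalent on TensorRT >= 8.4:
    // config.setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, kWorkspaceBytes);
}
```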