Philipp Schmidt

31 comments

When exported with `--grid` (`python models/export.py --weights yolov7.pt --grid`), building the TensorRT engine fails:

```
root@3aa30b614471:/workspace/yolov7# python deploy/TensorRT/onnx_to_tensorrt.py --onnx yolov7.onnx --fp16 --explicit-batch -o yolov7.engine
Namespace(calibration_batch_size=128, calibration_cache='calibration.cache', calibration_data=None, debug=False, explicit_batch=True, explicit_precision=False,...
```

I have the same issue when using trtexec for conversion, so this is definitely a TensorRT / ONNX issue. Here: #66

Yes, it was the PyTorch version. I also had to run onnx-simplifier; otherwise TensorRT had issues with a few resize operations. Looking forward to trying your implementation.

Quickly scanned the code and it looks really good! A few questions / remarks: 1) you use yolov7.cache for INT8, how do you put that together? Still a todo? Actually...

@albertfaromatics How do you test FPS and mAP? It is very unlikely that your TensorRT engine is slower than running PyTorch directly, especially on Jetson.

Try running your engine with trtexec instead; it will give you a very good indication of actual compute latency. See the last few steps of this: https://github.com/isarsoft/yolov4-triton-tensorrt#build-tensorrt-engine
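As a sanity check independent of trtexec, you can time inference yourself with a warmup phase so the first slow iterations (allocation, clock ramp-up) don't skew the average. This is a minimal sketch; the `infer` callable below is a placeholder standing in for the real engine call, not an actual TensorRT API:

```python
import time

def measure_latency(infer, n_warmup=10, n_iters=100):
    """Time an inference callable; returns (mean latency in ms, FPS).

    infer: zero-argument callable standing in for one inference pass.
    """
    for _ in range(n_warmup):
        infer()  # warmup iterations are discarded
    t0 = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - t0
    mean_ms = elapsed / n_iters * 1e3
    return mean_ms, 1e3 / mean_ms

# Usage with a dummy workload in place of the engine:
mean_ms, fps = measure_latency(lambda: sum(range(1000)))
```

Note that this measures end-to-end wall-clock latency per call; trtexec additionally separates host/device transfer time from pure compute time, which is why it is the better tool for pinpointing where the slowdown is.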

I don't think trtexec comes prebuilt in the Linux 4 Tegra TensorRT docker images for Jetson, though.

Here are all the fixes I've made so far: https://github.com/isarsoft/yolov4-triton-tensorrt/commits/master/clients/python/processing.py
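Fixes like these typically live in the client-side detection post-processing path (box decoding, confidence filtering, NMS). As a rough illustration only (this is not the actual processing.py code), a minimal greedy NMS in pure Python looks like:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box
        keep.append(best)
        # drop every remaining box that overlaps it too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Getting details like the IoU denominator and the coordinate rescaling back to the original image right is exactly the kind of thing these commit-by-commit fixes address.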

I will probably have time this weekend to cross-check the implementations. I will get back to you when I have more info.