
Inference speed with Nvidia Jetson AGX Orin

Dhiaeddine-Oussayed opened this issue 2 years ago · 2 comments

I compared the inference time of YOLOv7 vs YOLOv5 on AWS (with an Nvidia V100 16 GB); in my test case it was 60 ms for YOLOv5 vs 15 ms for YOLOv7, which is great. But when I ran the same test locally on an Nvidia Jetson AGX Orin 32 GB, both models ran at exactly the same inference time, 20 ms. Any idea why I'm not seeing a difference between the two models locally?

Dhiaeddine-Oussayed · Aug 04 '22 10:08
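(For context: GPU latency numbers in PyTorch can be misleading without warm-up iterations and explicit CUDA synchronization, which is one possible reason two different models appear to run at the same speed. A minimal timing sketch is shown below; the model loading and input shape in the comments are illustrative assumptions, not taken from this report.)

```python
import time
import torch

def benchmark(model, img, runs=100):
    """Rough per-image latency (ms) for a PyTorch model on GPU."""
    model.eval()
    with torch.no_grad():
        # Warm-up so CUDA kernels and cuDNN algorithms are selected/cached
        for _ in range(10):
            model(img)
        torch.cuda.synchronize()          # wait for all pending GPU work
        start = time.perf_counter()
        for _ in range(runs):
            model(img)
        torch.cuda.synchronize()          # make sure every run has finished
    return (time.perf_counter() - start) / runs * 1000

# Hypothetical usage (weights path and input size are placeholders):
# model = torch.load('yolov7.pt', map_location='cuda')['model'].half().eval()
# img = torch.zeros(1, 3, 640, 640, device='cuda', dtype=torch.half)
# print(f"{benchmark(model, img):.1f} ms per image")
```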

Which backend do you use? TensorRT or ONNXRuntime? FP16 or FP32? What's the version of your backend?

triple-Mu · Aug 05 '22 12:08

@triple-Mu I'm using PyTorch 1.8 with FP16, no TensorRT or ONNXRuntime.

Dhiaeddine-Oussayed · Aug 05 '22 14:08
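(For reference, plain PyTorch FP16 inference on a Jetson generally means casting both the model and the input tensor to half precision before the forward pass. A minimal sketch under that assumption follows; the checkpoint path, the `'model'` checkpoint key, and the 640x640 input size are illustrative and may differ from the actual setup.)

```python
import torch

device = torch.device('cuda')

# Hypothetical weights file; replace with the actual checkpoint path.
# YOLOv5/YOLOv7-style checkpoints typically store the module under 'model'.
ckpt = torch.load('yolov7.pt', map_location=device)
model = ckpt['model'].float().to(device).half().eval()

# Dummy half-precision input matching a 640x640 inference resolution
img = torch.zeros(1, 3, 640, 640, device=device, dtype=torch.half)

with torch.no_grad():
    pred = model(img)
```

Note that at small batch sizes on an embedded GPU, a pure PyTorch FP16 pipeline can be bound by data loading, NMS, or kernel launch overhead rather than by the network itself, which can mask architectural speed differences; exporting to TensorRT (as asked above) is the usual way to expose them on Jetson.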