Inference speed with Nvidia Jetson AGX Orin
I compared the inference time of yolov7 vs yolov5 on AWS (with an Nvidia V100 16 GB); in my test case it was 60 ms for yolov5 vs 15 ms for yolov7, which is great. But when I ran the same test locally on an Nvidia Jetson AGX Orin 32 GB, both models ran at exactly the same inference time of 20 ms. Any idea why I'm not seeing a difference between the two models locally?
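For reference, here is a minimal sketch of how such a latency comparison might be timed in PyTorch (the `torch.hub` model load and the 640x640 input shape are illustrative assumptions, not details from this thread). Warm-up iterations and `torch.cuda.synchronize()` calls matter, since CUDA kernels launch asynchronously and a naive timer can measure launch overhead instead of actual inference:

```python
import time
import torch

# Illustrative model load via torch.hub; swap in whichever checkpoint you are testing.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model = model.half().cuda().eval()

# Dummy 640x640 FP16 input, batch size 1, as in a typical latency test.
img = torch.rand(1, 3, 640, 640, device='cuda').half()

with torch.no_grad():
    for _ in range(10):           # warm-up: exclude CUDA init / cudnn autotune cost
        model(img)
    torch.cuda.synchronize()      # kernels are async; sync before starting the clock
    start = time.perf_counter()
    for _ in range(100):
        model(img)
    torch.cuda.synchronize()      # wait for all queued work before stopping the clock
    elapsed = time.perf_counter() - start

print(f'mean inference time: {elapsed / 100 * 1000:.1f} ms')
```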
Which backend do you use? TensorRT or ONNXRuntime? FP16 or FP32? What's the version of your backend?
@triple-Mu I'm using PyTorch 1.8 with FP16, no TensorRT or ONNXRuntime.
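For anyone wanting to compare the backends mentioned above: as a rough sketch, exporting the model to ONNX (the format both ONNXRuntime and TensorRT engine builders typically consume) might look like this; the file name, opset version, and tensor names are illustrative assumptions:

```python
import torch

# Illustrative load of a raw detection module; substitute your own model here.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False).eval()
dummy = torch.rand(1, 3, 640, 640)  # static-shape example input for tracing

# Export a static-shape ONNX graph that ONNXRuntime or TensorRT can consume.
torch.onnx.export(
    model,
    dummy,
    'model.onnx',
    opset_version=12,
    input_names=['images'],
    output_names=['output'],
)
```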