Scaled-YOLOv4-TensorRT
Inference speed question
In the README.md, the inference speed on the TX2 is stated as around 10-11 ms. Is this speed measured around the doInference call here: https://github.com/tjuskyzhang/yolov4-tiny-tensorrt/blob/bc49483e49e4de698fd88b878799b8b0a979e88f/yolov4-tiny.cpp#L496, or around the context.enqueue call here: https://github.com/tjuskyzhang/yolov4-tiny-tensorrt/blob/bc49483e49e4de698fd88b878799b8b0a979e88f/yolov4-tiny.cpp#L392?
Thank you for sharing this project.
The first one, #L496
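
For anyone reproducing the measurement, here is a minimal sketch of timing taken around the full doInference call, matching the answer above. The doInference signature below is an assumption based on the common tensorrtx-style layout this repo follows; in that style, doInference typically wraps the host-to-device copy, context.enqueue, the device-to-host copy, and a stream synchronize, so the measured 10-11 ms would include memory transfers, whereas timing context.enqueue alone would not.

```cpp
#include <chrono>
#include <iostream>
#include <NvInfer.h>  // TensorRT

// Assumed signature, modeled on tensorrtx-style code; the real declaration
// lives in yolov4-tiny.cpp:
// void doInference(nvinfer1::IExecutionContext& context,
//                  float* input, float* output, int batchSize);

void timeInference(nvinfer1::IExecutionContext& context,
                   float* input, float* output, int batchSize) {
    auto start = std::chrono::steady_clock::now();
    // Full pipeline: H2D copy, context.enqueue, D2H copy, stream sync
    doInference(context, input, output, batchSize);
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "doInference: " << ms << " ms" << std::endl;
}
```

Note that the first few iterations are usually slower due to CUDA warm-up, so averaging over many runs after a warm-up pass gives a more representative number.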