Huy Tran Van
I have the same problem.
@marcoslucianops could you please share the code to evaluate the .engine model?
> @marcoslucianops could you please share the code to evaluate the .engine model?

Do I need to use the file "libnvdsinfer_custom_impl_Yolo.so" generated by the command "CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo" for evaluation, or only use...
> My eval code is based on [deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps) with some custom implementations (image batch input, pycocotools, etc.). It uses DeepStream to generate the JSON that is then evaluated by pycocotools....
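For reference, a minimal sketch of the pycocotools step, assuming the DeepStream output has already been converted to a COCO-format detections JSON (the file names below are placeholders, not the actual files from the eval code):

```python
# Minimal sketch: scoring a COCO-format detections JSON with pycocotools.
# "instances_val2017.json" and "detections.json" are placeholder file names.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")       # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")   # detections exported from DeepStream:
                                               # [{"image_id": ..., "category_id": ...,
                                               #   "bbox": [x, y, w, h], "score": ...}, ...]

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                          # prints mAP@0.5:0.95, mAP@0.5, etc.
```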
> In the models I've tested, there's no mAP difference between FP32 and FP16 engines. Are you using DeepStream to output the bboxes?

Yes. I run the deepstream app for...
> In the kitti output, the bbox coordinates are relative to the streammux resolution you set. You need to change them according to each validation image's resolution.

Yes, I recognized...
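For anyone else doing this conversion, here is a minimal sketch of the rescaling step, assuming a 1920x1080 streammux and the standard KITTI label layout (paths, resolutions, and field positions are assumptions to adapt to your pipeline):

```python
# Minimal sketch: rescale KITTI-dump bboxes from the streammux resolution to the
# original image resolution. STREAMMUX_W/H and all paths are placeholders.
from PIL import Image

STREAMMUX_W, STREAMMUX_H = 1920, 1080  # assumed streammux width/height

def rescale_kitti_line(line: str, img_w: int, img_h: int) -> str:
    fields = line.split()
    # In the KITTI label format the bbox is fields 4..7: left, top, right, bottom.
    x1, y1, x2, y2 = map(float, fields[4:8])
    sx, sy = img_w / STREAMMUX_W, img_h / STREAMMUX_H
    fields[4:8] = [f"{x1 * sx:.2f}", f"{y1 * sy:.2f}",
                   f"{x2 * sx:.2f}", f"{y2 * sy:.2f}"]
    return " ".join(fields)

def rescale_file(kitti_path: str, image_path: str, out_path: str) -> None:
    img_w, img_h = Image.open(image_path).size  # native resolution of the validation image
    with open(kitti_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            if line.strip():
                fout.write(rescale_kitti_line(line, img_w, img_h) + "\n")
```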
> Did you set
>
> ```
> [class-attrs-all]
> nms-iou-threshold=0.65
> pre-cluster-threshold=0.001
> topk=300
> ```
>
> In the config_infer_primary_yoloV7.txt file?

Did you use the above config to...
> The evaluation uses different NMS and confidence thresholds. Try with the values I sent.

Thanks a lot for supporting me. I am going to try it now 😍
> Did you set
>
> ```
> [class-attrs-all]
> nms-iou-threshold=0.65
> pre-cluster-threshold=0.001
> topk=300
> ```
>
> In the config_infer_primary_yoloV7.txt file?

I used this setup, and the mAP is...
> In the models I've tested, there's no mAP difference between FP32 and FP16 engines. Are you using DeepStream to output the bboxes?

@marcoslucianops Do you mean the YOLOv7 model?...