Bug in YOLOv8 INT8 Quantization
Following this link (https://github.com/wang-xinyu/tensorrtx/tree/master/yolov8#int8-quantization), I used xx.wts to convert the model into an .engine in INT8 format. But I got this:
jetson@unbutu:~/Desktop/project/tensorrtx/yolov8_int8/build$ ./yolov8_det -s yolov8s_detect.wts detect.engine s
Loading weights: yolov8s_detect.wts
[04/14/2024-22:33:58] [W] [TRT] The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Your platform support int8: true
Building engine, please wait for a while...
reading calib cache: int8calib.table
[04/14/2024-22:34:02] [E] [TRT] 1: Unexpected exception _Map_base::at
[04/14/2024-22:34:02] [E] [TRT] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Build engine successfully!
yolov8_det: /home/jetson/Desktop/project/tensorrtx/yolov8_int8/yolov8_det.cpp:23: void serialize_engine(std::string&, std::string&, std::string&, float&, float&, int&): Assertion `serialized_engine' failed.
Aborted (core dumped)
Looking through the code in model.cpp, I found that the calibration image folder path is hard-coded:
// in src/model.cpp, line 277-279
auto* calibrator =
new Int8EntropyCalibrator2(1, kInputW, kInputH, "../coco_calib/", "int8calib.table", kInputTensorName);
config->setInt8Calibrator(calibrator);
The image folder path in the tutorial is wrong; if you have time, it would be good to update it.
Thanks for finding this.
In yolov5, it was using ./coco_calib/
https://github.com/wang-xinyu/tensorrtx/blob/c889b84df2e081d870bc28680f674b47070cf1d6/yolov5/src/model.cpp#L356
Can you help raise a PR?
Sure. Are there any rules that I need to follow during the PR process?
Refer to this: https://github.com/wang-xinyu/tensorrtx/blob/master/tutorials/contribution.md
Hello, have you solved this yet? @ChangjunDAI
Refer to this: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov8#int8-quantization