yolo-tensorrt
Can the batch size only be set to 1?
Hello, I want to change the batch size used for testing. Only modifying `_p_net = std::unique_ptr<Yolo>{ new YoloV3(1, _yolo_info, _infer_param) };` does not seem to work. Are there other places that need to be changed? @enazoe
https://github.com/enazoe/yolo-tensorrt/blob/5ffe5869c50868c24193d317fa0be6b9d8f8e995/modules/class_yolo_detector.hpp#L84
This line and the related places.
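For context, here is a minimal, non-authoritative sketch of the kind of change being asked about, assuming (as the quoted line suggests) that the first constructor argument of the Yolo subclasses is the maximum batch size. The helper name `make_net`, the header name, and the type names `NetworkInfo` / `InferParams` are illustrative assumptions, not verified against the repository:

```cpp
// Minimal sketch, not the repository's actual code: pass the desired maximum
// batch size into the detector instead of the hard-coded 1.
#include <cstdint>
#include <memory>

#include "yolo.h"  // assumed to declare Yolo, YoloV3, NetworkInfo, InferParams

// Hypothetical helper: same construction as the quoted line, but with the
// batch size taken as a parameter (e.g. 2 or 4 instead of 1).
std::unique_ptr<Yolo> make_net(const NetworkInfo &yolo_info,
                               const InferParams &infer_param,
                               uint32_t max_batch_size)
{
    return std::unique_ptr<Yolo>{ new YoloV3(max_batch_size, yolo_info, infer_param) };
}
```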
It seems that increasing the batch size should improve speed. I am going to try it, but I still have a few questions. @enazoe
I tried batchSize = 2: yolov3 runs inference correctly, but yolov4 only returns results for the first image and the second result is empty. The runtime is roughly twice that of batchSize = 1.
yolov4-kHALF.engine (inputSize = 320x320, batchSize = 2) is working fine, as depicted below:
yolov3-tiny-kHALF.engine (inputSize = 608x608, batchSize = 2) is also working fine, as depicted below:
But yolov4-tiny-kHALF.engine (inputSize = 608x608, batchSize = 2) works for the first image (batchId = 0) and not for the second (batchId = 1):
@jstumpin @zhangxiaopang88 @chongzhong sorry, batch inference is not supported yet. I will work on it and add support soon. PRs are welcome in the meantime.
Everything's fixed now, kudos @enazoe for the prompt action! yolov4-tiny-kHALF-batch2.engine:
yolov4-tiny-kHALF-batch2.engine (using the person.jpg + dog.jpg example as in result.jpg; slight differences due to yolov4 vs. yolov4-tiny):
yolov4-tiny-kHALF-batch4.engine (built with the following changes; see the consolidated sketch after this list):
- add `std::string batchSize;` after yolo.h#L56
- add `_yolo_info.batchSize = "batch" + std::to_string(_config.n_max_batch);` after class_yolo_detector.hpp#L138
- change class_yolo_detector.hpp#L144 to `_yolo_info.enginePath = dataPath + "-" + _yolo_info.precision + "-" + _yolo_info.batchSize + ".engine";`
- add `config_v4_tiny.n_max_batch = 4;` after sample_detector.cpp#L37
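For convenience, here is a hedged sketch gathering the four edits above in one place. The names `batchSize`, `n_max_batch`, `dataPath`, `_yolo_info`, and `_config` are taken from the list above, and the line references point at the linked files as they existed at the time of the comment; they may have drifted since.

```cpp
// Sketch consolidating the edits listed above; fragments, not a full file.

// yolo.h, after L56: add a string member to carry a "batchN" tag.
std::string batchSize;

// class_yolo_detector.hpp, after L138: derive the tag from the configured
// maximum batch size.
_yolo_info.batchSize = "batch" + std::to_string(_config.n_max_batch);

// class_yolo_detector.hpp, L144: embed the tag in the serialized engine name,
// producing e.g. "yolov4-tiny-kHALF-batch4.engine".
_yolo_info.enginePath = dataPath + "-" + _yolo_info.precision + "-"
                        + _yolo_info.batchSize + ".engine";

// sample_detector.cpp, after L37: request a maximum batch size of 4.
config_v4_tiny.n_max_batch = 4;
```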
@jstumpin good idea, and a PR is welcome :)