MobileNet-SSD-TensorRT
batch inference
Hi Ghustwb,

For batch processing of images I made the following changes:

- Set BATCH_SIZE = 9 and const size_t size = width * height * sizeof(float3) * BATCH_SIZE, and filled uniform_data accordingly (uniform_data[volImg + line_offset + j] = ..., where volImg changes with the batch index).
- Changed the first dim to 9 in the .prototxt.
- To read the detections I index the output as output[(batch_idx * 100 + k) * 7 + 1], roughly as in the sketch below.

There is only a person (class 3) in my frames. For batch_id = 0 I get correct results (one person), but for the other batches the detection data looks like
1, 0.80123, 0.257755, 0.764545, 0.86875, 0.909765 # 2, 7.80123, 0.364645, 0.26875, 0.809765, 0.654343.

What am I doing wrong? Can you help me? Thanks!
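For reference, this is a minimal sketch of what I mean, assuming the usual SSD detection-output layout of 7 floats per detection (image_id, label, confidence, xmin, ymin, xmax, ymax) and keep_top_k = 100; the helper names are just illustrative, not the repo's exact code:

```cpp
#include <cstdio>

const int BATCH_SIZE = 9;
const int KEEP_TOP_K = 100;   // detections kept per image (assumption)

// Fill the batched input buffer: each image gets its own volImg-sized slot.
void fillUniformData(float* uniform_data, float* const* planar_images,
                     int width, int height)
{
    const int volChl = width * height;   // elements per channel
    const int volImg = 3 * volChl;       // elements per image (3 channels)
    for (int b = 0; b < BATCH_SIZE; ++b)
        for (int i = 0; i < volImg; ++i)
            uniform_data[b * volImg + i] = planar_images[b][i];  // per-batch offset
}

// Walk the detection output, 7 floats per detection, 100 detections per image.
void printDetections(const float* output)
{
    for (int b = 0; b < BATCH_SIZE; ++b) {
        for (int k = 0; k < KEEP_TOP_K; ++k) {
            const float* det = output + (b * KEEP_TOP_K + k) * 7;
            if (det[2] > 0.5f)  // confidence threshold
                printf("batch %d: class %.0f conf %.3f box (%.3f %.3f %.3f %.3f)\n",
                       b, det[1], det[2], det[3], det[4], det[5], det[6]);
        }
    }
}
```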
Hey, I had the same problem and I finally solved it:
- In createConcatPlugin, set the second parameter to false (so it does not ignore the batch size).
- In cudaSoftmax (called from the enqueue function of the SoftmaxPlugin), change the first parameter to 1917 * 5 * batchSize (see the sketch after this list).
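In other words, something like this rough sketch. It assumes the legacy nvinfer1::plugin::createConcatPlugin(concatAxis, ignoreBatch) factory that this repo uses and a cudaSoftmax(count, channels, in, out)-style launcher; the exact prototypes in the repo may differ, so check them before copying:

```cpp
#include <NvInferPlugin.h>

// Assumed prototype of the repo's softmax kernel launcher; verify the
// actual declaration in the plugin source.
void cudaSoftmax(int count, int channels, float* input, float* output);

// 1) Concat plugin: pass ignoreBatch = false so the batch dimension
//    is kept instead of being collapsed.
nvinfer1::plugin::INvPlugin* makeConcat(int concatAxis)
{
    return nvinfer1::plugin::createConcatPlugin(concatAxis, /*ignoreBatch=*/false);
}

// 2) Softmax plugin: inside enqueue, scale the element count by batchSize
//    so the kernel covers every image in the batch, not only the first.
int softmaxEnqueue(int batchSize, const void* const* inputs, void** outputs)
{
    // was 1917 * 5 for a single image; the 5 matches the figure above,
    // adjust it to your model if it differs.
    cudaSoftmax(1917 * 5 * batchSize, 5,
                (float*)inputs[0], (float*)outputs[0]);
    return 0;
}
```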
I hope it helps!