ros_deep_learning

Increase net batch-size

Open r91andersson opened this issue 4 years ago • 2 comments

Is it planned in the near future to improve jetson-inference to support nets with batch sizes larger than 1 (the current maximum)? @dusty-nv How much work would it be to add such a feature? Could you point out where to start digging in the code? Or do you think that would be too complex?

The background for this is that we have an SSDMobileNetV2 that runs at approx. 65 fps (so it basically handles 2 camera streams at 30 fps). We would like to run 6 cameras, which means we would need a total of 3 Xavier AGX units for that setup.

But if it were possible to increase the batch size to 6, we could have all cameras capture images on a synced trigger event, collect those into a batch of 6 images, and run them on a single Xavier instead.

r91andersson avatar May 05 '21 06:05 r91andersson

I don't currently have plans to do it in jetson-inference - for a multi-camera setup, I would recommend looking at DeepStream, which has optimized performance for multi-camera streaming+inference, and you can also train your model with the Transfer Learning Toolkit (TLT), which can prune your model and further improve performance.
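For reference, in a deepstream-app configuration the stream muxer batches frames from multiple sources before inference. A sketch of the relevant sections for the six-camera case might look like the following (values such as the resolution and timeout are illustrative assumptions, not a tested setup):

```ini
# Illustrative fragment of a deepstream-app config (values are assumptions)
[streammux]
batch-size=6               # one batch = one frame from each of the 6 cameras
batched-push-timeout=33000 # microseconds; roughly one 30 fps frame interval
width=1280
height=720
live-source=1

[primary-gie]
batch-size=6               # engine is built/run with the same batch size
```

The muxer assembles one batched buffer per trigger interval, which matches the synced-capture scheme described above.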

dusty-nv avatar May 05 '21 13:05 dusty-nv

@dusty-nv Ok, thanks for info!

r91andersson avatar May 05 '21 14:05 r91andersson