
Intuition of workaround

Open eraserpencil opened this issue 5 years ago • 0 comments

Hi! Thanks for the C++ port of tf-pose-estimation.

I managed to get an average of 6-6.5 FPS on a video stream from a ZED camera, with both implementations running on ROS on a TX2 with TF 1.x. That compares to 5-5.5 FPS with the original Python implementation (I'm unsure how the original author managed to get ~10 FPS with the same setup).

I was wondering how the intuition behind the workaround came about. I'm new to deep learning and was hoping to squeeze out more performance. Would you know whether the workaround only slows down startup, or whether it affects runtime performance as well? How much of a difference would TensorRT make with this port compared to the Python implementation?
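
For context, my understanding is that the Python tf-pose-estimation gets its TensorRT speedup by running TF-TRT over the frozen graph before inference. Below is a minimal sketch of that conversion, assuming TF 1.x with contrib TensorRT available; the graph path, output node name, and tuning parameters are my assumptions for illustration, not taken from this repo:

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF-TRT lives in contrib in TF 1.x

# Load the frozen pose-estimation graph (the path is an assumption).
with tf.gfile.GFile('graph_opt.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Ask TF-TRT to replace supported subgraphs with TensorRT engines.
# The output node name and parameters here are illustrative guesses.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['Openpose/concat_stage7'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,  # ~1 GB of scratch space for TensorRT
    precision_mode='FP16')             # FP16 is usually the big win on a TX2

# Serialize the optimized graph so it can be loaded like the original one.
with tf.gfile.GFile('graph_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

If that picture is right, the converted graph could presumably be loaded by the C++ session the same way as the original frozen graph, so both ports would benefit from TensorRT in roughly the same way.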

eraserpencil · Aug 02 '19 07:08