
High Memory Usage on Jetson TX2

Open root1369 opened this issue 5 years ago • 5 comments

Hello, I just gave your project a try. I've been trying to set up the environment for a couple of days and finally got it running with a few changes to the configuration mentioned in your repo. My configuration is as follows, on a Jetson TX2:

  1. JetPack 3.3 (L4T 28.2.1)
  2. OpenCV 3.4.1 with the contrib modules
  3. TensorFlow 1.9 with GPU support

My original target was OpenCV 3.4.1 and TensorFlow 1.6, but there were hundreds of errors while installing TensorFlow. It's just a simple pip installation process, yet it took a lot of sweat.

Now I get an average of 20 FPS, which is absolutely breathtaking at this point. Yet there is a problem, a huge one I would say: while running either the camera stream or a video file, memory usage is about 7 GB with or without visualization, which is hard to accept. I was wondering whether it's caused by the TensorFlow version. What was your memory usage? Could it be implemented in Caffe or PyTorch instead? I would say it's the best performer so far, but at a huge resource cost. How do I optimize it, since I will be running a whole lot of other programs alongside the detector? Thanks
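For what it's worth, the only TensorFlow-side knobs I'm aware of are the session-level GPU options. A minimal sketch of how I understand they are set on TF 1.x (the fraction is just an example value, and since the TX2's GPU shares RAM with the CPU I'm not sure how much this really saves):

```python
import tensorflow as tf

# TF 1.x session options that limit how much memory the GPU allocator grabs.
# They do not shrink the footprint of the CUDA libraries themselves.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True                    # allocate memory on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # example cap, not a recommendation

with tf.Session(config=config) as sess:
    # load the frozen detection graph and run inference as usual
    pass
```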

root1369 avatar Jul 06 '19 01:07 root1369

Hi @magiccreator69,

Unfortunately, the ARM build of TensorFlow requires a large amount of memory when it loads the CUDA libraries, and this problem cannot be solved on the TensorFlow side. For both speed and memory usage, TensorRT/C++ is the best answer.

naisy avatar Jul 07 '19 04:07 naisy

Thanks for the prompt response. You mean I should optimize the models using TensorRT, right? Like what's described here: https://github.com/NVIDIA-AI-IOT/tf_trt_models ? So that less memory is used? No harm in trying, anyway. I'll give it a try and keep you posted.
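From my reading of that repo, the conversion would look roughly like this (a sketch only, I haven't run it yet; `frozen_graph_def` and `output_names` are placeholders for the frozen SSD graph and its output tensor names):

```python
import tensorflow.contrib.tensorrt as trt  # TF-TRT, shipped in the TF 1.x contrib builds

# Rewrite the frozen graph so that supported subgraphs run as TensorRT engines.
trt_graph_def = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,     # placeholder: frozen SSD/MobileNet GraphDef
    outputs=output_names,                  # placeholder: detection output tensor names
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,      # example workspace size (~32 MB)
    precision_mode='FP16')                 # TX2 supports fast FP16
```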

root1369 avatar Jul 08 '19 05:07 root1369

Hi @magiccreator69,

No. TF-TRT is Python and still uses TensorFlow. You can try https://github.com/Ghustwb/MobileNet-SSD-TensorRT instead. TensorRT/C++ is really different from TF-TRT.

naisy avatar Jul 08 '19 05:07 naisy

Well, in that case I don't need to install TensorFlow, right? Just the OpenCV and CUDA libraries are fine? If I don't need TensorFlow for this, do I need to use any other framework?

root1369 avatar Jul 08 '19 06:07 root1369

Hi @magiccreator69,

Yes, you don't need TensorFlow. The framework is TensorRT, which is included in JetPack 3.3: https://developer.nvidia.com/embedded/jetpack-3_3

naisy avatar Jul 08 '19 06:07 naisy