
Problem with Integrating a Model Trained with TLT with DeepStream

Open alpha-carinae29 opened this issue 4 years ago • 2 comments

Hi everyone. I have trained an SSD MobileNet V2 model with TLT V2.0 for pedestrian detection on the Oxford Town Center dataset, following the instructions provided by Nvidia here. The model trained successfully, and after pruning and retraining the inference results were acceptable: [screenshot of inference result] I then exported the model to .etlt and created a TensorRT engine from it on an x86 machine with an RTX 1070 GPU; after setting up TensorRT 7.0 and TensorRT OSS, the engine built successfully. Next I tried to integrate this engine with DeepStream 5.0. I modified some of the sample config files to run inference with my trained model and sink the output to an mp4 file. However, I got very poor results, as you can see: [Screenshot from 2020-05-12 23-37-54] [Screenshot from 2020-05-12 23-40-40]

alpha-carinae29 avatar May 12 '20 19:05 alpha-carinae29

@mdegans it seems you are a master of Nvidia's products. It would be my pleasure to get your feedback on this.

alpha-carinae29 avatar May 12 '20 19:05 alpha-carinae29

@alpha-carinae29 Eh. I know some stuff. GStreamer is a language of its own and I know maybe 20% of it. I know more of Nvidia's elements, since there aren't so many. I often ask for help on the DeepStream forum, but they may decide not to support you because you're using GeForce rather than Tesla/Jetson. If you can, you may wish to also test on a Jetson.

To address your issue:

  • To rule out the bounding box code: have you modified the bounding box drawing callback?
  • Have you modified anything besides the config .ini?
  • Have you tried letting DeepStream build the engine from the TLT-encoded model and key, instead of supplying the engine yourself?
  • Does it work fine with a screen sink (e.g. nvoverlaysink)?
  • Have you tried another video file? I had issues with Xvid. The sample is at: /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
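For the engine-building suggestion above, a minimal nvinfer config sketch that lets DeepStream build the engine from the .etlt directly. All paths, the key, and the class count are placeholders; the property names follow the DeepStream 5.0 TLT sample configs, and the parser function/library come from the deepstream_tlt_apps / TensorRT OSS custom parser, so adjust everything to your setup:

```ini
[property]
gpu-id=0
# DeepStream builds and caches the engine here on first run if it does not exist
tlt-encoded-model-file=ssd_mobilenet_v2_pruned.etlt
tlt-model-key=<your TLT encoding key>
model-engine-file=ssd_mobilenet_v2_pruned.etlt_b1_gpu0_fp16.engine
network-mode=2            # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=2    # placeholder; match your training config
# SSD output needs the custom parser built from TensorRT OSS / deepstream_tlt_apps
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=./libnvds_infercustomparser_tlt.so
```

The point of this route is that nvinfer serializes an engine tuned to the local GPU, which sidesteps any mismatch from an engine built elsewhere.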

You can attempt to build the engines yourself and load them, but engines aren't portable between machines, so nvinfer (see "inputs") is designed to build them for you. You can see how the engine is generated in the nvinfer source within /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer.

I've had an issue before with bounding boxes being drawn in the wrong area, but it turned out to be related to the position of the osd element in the pipeline. If you have only modified the config .ini, I think you can rule that out. LMK when you've tried the above things.
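A side note on that misplaced-box failure mode: if custom parsing or drawing code interprets detections in network-input coordinates instead of frame coordinates, every box lands in the wrong place and at the wrong size. A minimal sketch of the rescaling involved (both resolutions here are hypothetical; stock nvinfer/nvdsosd normally handle this for you):

```python
def scale_bbox(left, top, width, height, net_w, net_h, frame_w, frame_h):
    """Rescale a detection from network-input space to frame space."""
    sx = frame_w / net_w   # horizontal scale factor
    sy = frame_h / net_h   # vertical scale factor
    return (left * sx, top * sy, width * sx, height * sy)

# e.g. a box detected on a hypothetical 960x544 network input,
# drawn on a 1920x1080 output frame
print(scale_bbox(100, 50, 200, 100, 960, 544, 1920, 1080))
```

If boxes are consistently shifted or squashed by a constant factor, a missing or doubled rescale like this is a likely culprit.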

mdegans avatar May 12 '20 20:05 mdegans