
multi-stream RTSP support

Open · PythonImageDeveloper opened this issue 4 years ago · 1 comment

Hello,

1. Does this repo only work with JetPack 4.4, DeepStream 5.0, and TLT 2.0?
2. For DetectNet_v2, is it possible to run with multi-stream RTSP input? How?
3. I want to run other code alongside DeepStream: decode multiple RTSP streams with the Jetson Nano's hardware decoder, pass some of the decoded streams to this repo's DeepStream pipeline, and pass the rest to my own Python code for further processing. Is that possible? I want to do this with Docker.
4. In the models folder, only a bin file and an etlt file exist; is that enough to run? And if I train my own version of one of the six models, with a different input size and dataset, can I run it with this repo's code for that model?
5. DeepStream accepts both a TensorRT engine file and an etlt file, but the TensorRT engine file is hardware dependent. Which mode gives higher FPS at inference?

PythonImageDeveloper avatar May 30 '20 20:05 PythonImageDeveloper

  1. Yes.
  2. It is possible. Refer to https://forums.developer.nvidia.com/t/multi-stream-rtsp-on-jetson-nano/122357 (see the source-config sketch after this list).
  3. Please create a topic in the DeepStream forum: https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15
  4. For INT8, there should be cal.bin, the etlt model, and your API key. For FP16/FP32, only the etlt model and your API key are needed. Sure, you can run with your own etlt model; you need to specify it in the config files (see the nvinfer config sketch below). See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_model_deepstream
  5. When running with DeepStream, the etlt model is converted to a TensorRT engine, so the inference performance should be the same (see the engine-caching note below).
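As a concrete illustration of point 2, here is a minimal sketch of the source and streammux groups in a deepstream-app configuration file with two RTSP inputs. The camera URIs, resolution, and batch size are placeholders; adapt them to your own cameras and stream count:

```
# Hypothetical deepstream-app config fragment: two RTSP sources.
# type=4 selects an RTSP source in deepstream-app.
[source0]
enable=1
type=4
uri=rtsp://192.168.1.10:554/stream1
num-sources=1
gpu-id=0

[source1]
enable=1
type=4
uri=rtsp://192.168.1.11:554/stream1
num-sources=1
gpu-id=0

[streammux]
gpu-id=0
# RTSP cameras are live sources
live-source=1
# batch-size should match the total number of sources
batch-size=2
batched-push-timeout=40000
width=1280
height=720
```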
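For point 4, the nvinfer configuration for a retrained DetectNet_v2 etlt model might look like the sketch below. All paths, the model key, the input dimensions, and the class count are placeholders; uff-input-dims must match the input size the model was trained with, and int8-calib-file is only needed in INT8 mode:

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
labelfile-path=labels.txt
# Your retrained model and the key used when exporting it
tlt-encoded-model=../models/detectnet_v2/my_resnet18.etlt
tlt-model-key=<your-ngc-api-key>
# Only needed when network-mode=1 (INT8)
int8-calib-file=../models/detectnet_v2/cal.bin
# Channels;Height;Width;Order(0=CHW) -- must match the training input size
uff-input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
num-detected-classes=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
```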
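One practical note on point 5: the etlt-to-engine conversion happens at startup and can take several minutes on a Nano. nvinfer's model-engine-file property lets you reuse the engine serialized during the first run; the file name below follows DeepStream's usual naming convention and is only an example:

```
[property]
# Reuse the TensorRT engine generated on the first run so the
# etlt-to-engine conversion is not repeated at every startup.
model-engine-file=../models/detectnet_v2/my_resnet18.etlt_b1_gpu0_int8.engine
```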

morganh-nv avatar Jun 13 '20 05:06 morganh-nv