
Cannot run the dsl-test-app.exe

howso123 opened this issue 1 year ago • 7 comments

My system is a dGPU setup: Ubuntu 20.04, NVIDIA GeForce RTX 2060, NVIDIA Driver 525.116.03, CUDA 11.8, DeepStream 6.2. I followed the docs installing-dependencies.md and building-dsl.md and everything went smoothly, but when I run ./dsl-test-app.exe there are many errors. I don't know where the problem is. There are also some warnings when running sudo python3 make_caffemodel_engine_files.py. [screenshot]

Then running ./dsl-test-app.exe produces errors. [screenshot]

How can I run the dsl-test-app?

howso123 avatar May 06 '23 12:05 howso123

@howso123 there are additional dependencies required to run all test assets. Please skip this step. I've just removed it from the documentation.

The warnings you see when building the model engine files are expected.

rjhowell44 avatar May 06 '23 15:05 rjhowell44

Thank you for your reply. I skipped this step, but when I try to run the Python examples, it displays this: [screenshot]

I used this test code: [screenshot] It shows: [screenshot]

howso123 avatar May 08 '23 01:05 howso123

@howso123 you happened to pick a WebRTC Sink example which requires GStreamer 1.18. If you're on Ubuntu 20.04, then most likely you have GStreamer 1.16.

You can check your version with:

gst-launch-1.0 --gst-version

You can also check whether the WebRTC plugin is installed with:

gst-inspect-1.0 webrtcbin
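
Equivalently, here is a minimal sketch of the same checks from Python (assuming python3-gi / PyGObject is installed):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

# Initialize GStreamer and print the runtime version - the WebRTC Sink needs 1.18+
Gst.init(None)
print('GStreamer version:', Gst.version_string())

# webrtcbin is the element the WebRTC Sink is built on; None means the plugin is missing
print('webrtcbin available:', Gst.ElementFactory.find('webrtcbin') is not None)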

Were you looking to use the WebRTC Sink specifically, or were you just randomly trying test cases?


A couple of quick pointers.

  1. I would strongly suggest that you enable DSL logging when running the examples... or your own code. It may seem like a lot of information at first, but it will be extremely helpful once you get more familiar with DSL. The following statement will enable error-level logs for all GStreamer components and info-level logs for DSL.
export GST_DEBUG=1,DSL:4
  2. Most of the examples are set up to run on the Jetson Nano... meaning the paths to the inference config file and model engine file point to the Nano versions. A minor change is required to run on dGPU. I'm in the process of updating all examples to run on both platforms, which should be complete in the next release. For now, just look for the following variables at the top of each file (a quick platform-check sketch follows the code below)
# Filespecs for the Primary GIE
primary_infer_config_file = \
    '/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary_nano.txt'
primary_model_engine_file = \
    '/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine'

and change them to the following

# Filespecs for the Primary GIE
primary_infer_config_file = \
    '/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt'
primary_model_engine_file = \
    '/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b8_gpu0_int8.engine'
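
If you'd rather not edit each example by hand, here is a minimal sketch of a platform check that selects the filespecs automatically. The /etc/nv_tegra_release test is an assumption (Jetson images normally ship that file), not something the examples do today:

import os

# Sketch only: choose the sample filespecs based on platform.
# Assumption: Jetson/Nano images provide /etc/nv_tegra_release; anything else is treated as dGPU.
if os.path.exists('/etc/nv_tegra_release'):
    # Jetson Nano versions
    primary_infer_config_file = \
        '/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary_nano.txt'
    primary_model_engine_file = \
        '/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine'
else:
    # dGPU versions
    primary_infer_config_file = \
        '/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt'
    primary_model_engine_file = \
        '/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b8_gpu0_int8.engine'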

rjhowell44 avatar May 09 '23 02:05 rjhowell44

Thanks for your reply. I want to use the WebRTC Sink. I have already installed GStreamer 1.18 from source. [screenshot] [screenshot]

When I run "make -j 4", it shows: [screenshot]

howso123 avatar May 10 '23 10:05 howso123

@howso123 my apologies for the tardy response. I've been traveling and away from my desk. Unfortunately, you've run into a merge error that was missed by my CI (currently running on GStreamer 1.16 only). Something to address in the current release.

Please consider the WebRTC Sink implementation somewhat of a "work in progress". There are two additional open issues.

  1. #666
  2. #956

which I plan to focus on this week.

Sorry for the rough start out of the gate. All should be resolved shortly.

rjhowell44 avatar May 13 '23 17:05 rjhowell44

Thanks. I have already solved this problem. I'm trying to add multiple outputs now. When I use dsl_source_file_new, it can play 80 routes at the same time in both the browser and the window. [screenshot] [screenshot]

But when I use dsl_source_rtsp_new, only up to 24 channels can be played in the browser and the window. I used dsl_pipeline_streammux_batch_properties_set to configure the streammux, but it doesn't work. [screenshot] How can I set it up?

And when I refresh webrtm.html, the window stops and the browser shows a white page. [screenshot] [screenshot]

howso123 avatar May 22 '23 09:05 howso123

@howso123 re: But when I use dsl_source_rtsp_new, only up to 24 channels can be played in the browser and the window. I used dsl_pipeline_streammux_batch_properties_set to configure the streammux, but it doesn't work.

This seems to be more of an RTSP Source issue/limitation to me. First, the Pipeline will automatically set the batch-size to the number of sources, unless you override the value by calling dsl_pipeline_streammux_batch_properties_set. This does not seem to be a batch-size issue.
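
For reference, here is a minimal sketch of the explicit override, assuming the call takes the pipeline name, batch size, and batch timeout as in the stock examples ('pipeline' is a placeholder for your Pipeline's name):

from dsl import *

# Sketch only - call after all RTSP sources have been added to the Pipeline.
# Parameter meanings are assumptions based on the stock examples.
retval = dsl_pipeline_streammux_batch_properties_set('pipeline',
    30,       # batch_size - must be at least the number of sources added
    40000)    # batch_timeout - the value used throughout the stock examples
if retval != DSL_RETURN_SUCCESS:   # assumed success constant from dsl.py
    print('failed to set streammux batch properties:', retval)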

To look into this, I need logs. Please set the logging level as follows.

export GST_DEBUG=1,DSL:4

You can redirect the logs to a file with

export GST_DEBUG_FILE=./log.txt

I need to see both the passing case with fewer than 24 RTSP cameras and the failing case with more than 24... thanks.

re: And when I refresh webrtm.html , the window stopped and browse white page

Yes, this is one of the issues I mentioned above... please review https://github.com/prominenceai/deepstream-services-library/issues/955#issuecomment-1546717332

I'm just wrapping up a couple of items for one of my sponsors (top priority). These issues are next in the queue.

rjhowell44 avatar May 23 '23 06:05 rjhowell44