
Capture n_frame_c image with OpenCV

subzeromot opened this issue 9 months ago · 22 comments

I want to capture a frame and save it locally with OpenCV:

    frame = np.array(n_frame_c, copy=True, order='C')
    frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA)

But to do that, the frame must first be converted to RGBA. With deepstream-python-apps, I would need to create a caps-filter element and add it to the pipeline:

    caps = Gst.ElementFactory.make("capsfilter", "filter1")
    caps.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM),format=RGBA"))

How can I create this caps-filter in DSL? In DslSourceBintr.cpp, I tried to update m_bufferOutFormat, but it doesn't seem to work; I still get a crash when calling frame = np.array(n_frame_c, copy=True, order='C').

    std::wstring L_bufferOutFormat(DSL_VIDEO_FORMAT_RGBA);
    m_bufferOutFormat.assign(L_bufferOutFormat.begin(), L_bufferOutFormat.end());

subzeromot · Apr 29 '24

@subzeromot Please see the reference section on Video Buffer Conversion under the Source documentation...

...specifically dsl_source_video_buffer_out_format_set, to set the output format of your source to RGBA.
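
For example, here is a minimal sketch of that call from a Python script; it assumes the dsl.py bindings export the DSL_VIDEO_FORMAT_RGBA constant, and the source name 'rtsp-source' is purely illustrative:

    # Minimal sketch: set the Source component's buffer-out format to RGBA
    # so that pyds.get_nvds_buf_surface() can be used in a downstream
    # pad-probe-handler. 'rtsp-source' is an illustrative component name.
    from dsl import *

    retval = dsl_source_video_buffer_out_format_set('rtsp-source',
        DSL_VIDEO_FORMAT_RGBA)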

rjhowell44 · Apr 29 '24

Thanks, the video conversion seems OK, but I still cannot copy n_frame_c. I have no idea why; I don't see anything in the log file.

subzeromot · Apr 30 '24

(pipeline graph image) This is my pipeline. I put the PPH after the on-screen-display (OSD).

subzeromot · Apr 30 '24

I think I need to do the conversion in GstBin-osd. How can I do that?

subzeromot · Apr 30 '24

I set process-mode in nvdsosd to CPU, so the output on the sink pad should be RGBA, right? Why does the pipeline still show NV12?

subzeromot · Apr 30 '24

@subzeromot ... There is definitely a problem here. What's interesting is that it works fine with just one source,

(pipeline graph: "playing")

but fails with two sources... even with the same components downstream.

I'll be working on this tomorrow.

rjhowell44 · May 01 '24

@rjhowell44 Actually, it works with multiple video sources, but an RTSP source does not work, even with just one source.

subzeromot · May 01 '24

(pipeline graph image) This is my pipeline. I put the PPH after the on-screen-display (OSD).

Why are the elements in the sources bin not linked to each other?

subzeromot · May 01 '24

Because they are failing to link to the Streammuxer. If you have logging enabled, you will see the error messages.

It's definitely a race condition in the order in which the components are linked and how the caps are negotiated. The RTSP Source elements are always the last to link because of the dynamic stream (plugin) selection: H264 vs. H265.

I can fix this by adding more specific format control downstream.

rjhowell44 · May 01 '24

@subzeromot I take that all back. Everything is working fine for me. I had missed changing the format on one of my sources. The following file uses two HTTP URI sources and two RTSP sources... You should be able to run it if you update the RTSP URIs. Just strip off the .txt extension.

4_source_pgie_iou_tracker_tiler_osd_custom_pph_window.py.txt

...and I can see from the graph that the format is RGBA throughout the pipeline (graph: 4-rgba-sources).

I can't see your image above; it looks like you posted it from a private repo. But in your first image above I see 2 sources with an OSD, but no Tiler or Demuxer. The Streammux is batching the two streams from the sources, and the OSD can only handle a single, non-batched input buffer. You will need to tile the streams (see the attached script and the sketch below) or add a Demuxer with two branches, each with their own OSD and Sink.
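
For reference, a minimal sketch of tiling the batched streams, assuming the usual dsl_tiler_new and dsl_pipeline_component_add_many calls; all component names and the tile dimensions are illustrative:

    # Minimal sketch with illustrative names: the Tiler composites the
    # batched streams into a single frame so that the OSD receives one
    # non-batched buffer.
    retval = dsl_tiler_new('tiler', 1280, 720)

    # The order of the component list defines the link order; the Tiler
    # sits between the tracker and the OSD.
    retval = dsl_pipeline_component_add_many('pipeline',
        ['rtsp-source-1', 'rtsp-source-2', 'primary-gie', 'iou-tracker',
         'tiler', 'on-screen-display', 'window-sink', None])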

Please send me a log if you have any further issues.

    export GST_DEBUG=1,DSL:4

rjhowell44 · May 01 '24

Why does your graph look so different? I ran your script and it works perfectly. But if I remove the 2 URI sources (uri-source, uri-source-2) and play with only the RTSP sources, they still cannot convert to RGBA; if I add back one URI source, it works again. Did you use the main branch or a different one to run this script?

subzeromot · May 02 '24

Sorry @subzeromot, you're correct. I realized I was testing with my v0.30.alpha dev branch. I have not had a chance to retest with the master branch, but I have confirmed that I can run v0.30.alpha with just one or two RTSP sources.

I will test with master tomorrow. Please try with the v0.30.alpha release if you can. I've optimized a couple of components in this branch, but I'll be surprised if it fixed a bug I was unaware of.

rjhowell44 · May 03 '24

Thanks, let me try with v0.30.

subzeromot · May 03 '24

@subzeromot any update on this?

rjhowell44 · May 03 '24

It seems it's still not working. I ran with v0.30.alpha, and here is my pipeline when running with 2 RTSP sources.

subzeromot · May 03 '24

@subzeromot Please provide me with a log file so I can see where/why it is failing for you:

    export GST_DEBUG_FILE=./log.txt
    export GST_DEBUG=1,DSL:4

... and I'm unable to expand that image above; it says it's from a private repo.

rjhowell44 · May 03 '24

GST_DEBUG.log Here is my log file. The pipeline runs OK, but when I add the script to copy the frame from the buffer, it crashes with Segmentation fault (core dumped):

    frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
    n_frame_c = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)
    frame_org = np.array(n_frame_c, copy=True, order='C')
    frame_org = cv2.cvtColor(frame_org, cv2.COLOR_RGBA2BGRA)

subzeromot · May 03 '24

pipeline.zip Let me know if you can download this pipeline image file.

subzeromot · May 03 '24

@subzeromot This appears to be a different issue, and I believe the problem is caused by this statement:

    n_frame_c = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)

Please remove the hash() and try with:

    n_frame_c = pyds.get_nvds_buf_surface(buffer, frame_meta.batch_id)

I will try to do the same when I get a few moments.

Otherwise, your pipeline looks good.

rjhowell44 · May 03 '24

I removed hash(), but nothing changed. Anyway, I also tried setting the process mode for the OSD to GPU, and it's still the same...

subzeromot · May 03 '24

@subzeromot I've been able to get this to work. The key is that the memory type must be changed to DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED. See get-nvds-buf-surface

It is sufficient to do this at the Streammux with the call below, as long as you add your pad-probe-handler before the Tiler.

        retval = dsl_pipeline_streammux_nvbuf_mem_type_set('pipeline',
            DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED)

Here's my updated script. 4_source_pgie_iou_tracker_tiler_osd_custom_pph_window.py.txt

Updated pipeline graph ("new"): you can see the Tiler component converts the memory back to cuda-device.

The Tiler can be updated as well if for some reason you want the pad-probe-handler after the Tiler.
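
Putting the pieces together, here is a minimal sketch of a frame-capture handler body (not taken from the attached script). It assumes the source buffer-out format has been set to RGBA, the Streammux nvbuf-mem-type is DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED, and that DSL's custom pad-probe-handler callback receives the GstBuffer address directly and stays installed while it returns True:

    import cv2
    import numpy as np
    import pyds

    def capture_frame_pph(buffer, client_data):
        # DSL passes the GstBuffer address directly, so no hash() is needed.
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(buffer)
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

            # Requires RGBA buffers and CUDA unified memory (see above);
            # otherwise this call can crash with a segmentation fault.
            n_frame_c = pyds.get_nvds_buf_surface(buffer, frame_meta.batch_id)
            frame = np.array(n_frame_c, copy=True, order='C')
            frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA)

            # PNG keeps the alpha channel produced by the BGRA conversion.
            cv2.imwrite('frame_%d_%d.png' %
                (frame_meta.source_id, frame_meta.frame_num), frame)

            try:
                l_frame = l_frame.next
            except StopIteration:
                break

        return True  # assumption: True keeps the handler installed

Register the callback as a custom pad-probe-handler and add it to a component upstream of the Tiler, as in the attached script.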

rjhowell44 · May 08 '24

I plan to add this as an example and cover the requirements under a new section in the Overview called "Working with OpenCV".

rjhowell44 · May 08 '24

New examples and an Overview section have been added to the v0.30.alpha release, which has now been released.

rjhowell44 · May 29 '24