gstreamer: Could not use gstreamer.run_pipeline with MP4 videosrc
Description
I am trying to leverage the gstreamer.py pipeline definition to run inference on test MP4 data instead of via the /dev/video1 webcam.
Looking at the run_pipeline function, this appears to be supported by the else case on line 223, which uses qtdemux:
else:
demux = 'avidemux' if videosrc.endswith('avi') else 'qtdemux'
PIPELINE = """filesrc location=%s ! %s name=demux demux.video_0
! queue ! decodebin ! videorate
! videoconvert n-threads=4 ! videoscale n-threads=4
! {src_caps} ! {leaky_q} """ % (videosrc, demux)
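For context, the string above is expanded in two passes: old-style %-substitution fills in the file path and demuxer immediately, while {src_caps} and {leaky_q} are left for a later str.format() call elsewhere in run_pipeline. A minimal sketch (the file path and caps values here are illustrative, not what gstreamer.py actually computes):

```python
# First pass: %-substitution fills in the source path and demuxer choice.
videosrc = 'test.mp4'  # assumed sample path
demux = 'avidemux' if videosrc.endswith('avi') else 'qtdemux'
PIPELINE = """filesrc location=%s ! %s name=demux demux.video_0
    ! queue ! decodebin ! videorate
    ! videoconvert n-threads=4 ! videoscale n-threads=4
    ! {src_caps} ! {leaky_q} """ % (videosrc, demux)

# Second pass: str.format() resolves the remaining caps/queue placeholders
# (placeholder values below are illustrative only).
pipeline = PIPELINE.format(
    src_caps='video/x-raw,format=RGB,width=640,height=480',
    leaky_q='queue max-size-buffers=1 leaky=downstream')
```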
When initializing the gstreamer pipeline, the following error occurs:
Warning: gst_parse_error: Delayed linking failed. (7): ./grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0:
failed delayed linking some pad of GstDecodeBin named decodebin0 to some pad of GstVideoRate named videorate0
Error: gst-stream-error-quark: Internal data stream error. (1): qtdemux.c(6073): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstQTDemux:demux:
streaming stopped, reason not-linked (-1)
Issue Type
Bug, Support
Operating System
Mendel Linux
Coral Device
Dev Board Mini
Other Devices
No response
Programming Language
Python 3.7
Relevant Log Output
No response
Did you get to the bottom of this? The closest I've come to running my own video on the Dev Board Mini is converting it to MP4 using the following:
ffmpeg -i drop.avi -an -vf "scale=960:540,format=yuv420p" -colorspace bt470bg -color_range tv -color_primaries smpte170m -color_trc bt709 -brand mp42 -c:v libx264 -profile:v baseline drop.mp4
and then run it via:
gst-launch-1.0 filesrc location=drop.mp4 ! decodebin ! autovideoconvert ! autovideosink
I can't quite get this to work in the gstreamer.py script, however.
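If it helps, the ffmpeg conversion above can be driven from Python as an argv list (avoids shell quoting issues). This is a hypothetical helper, not part of gstreamer.py; the flag values are copied from the command, and since -vcodec and -c:v are synonyms only one is kept:

```python
def ffmpeg_h264_baseline_args(src, dst):
    """Build the argv list for re-encoding `src` to baseline-profile H.264
    MP4 at 960x540, matching the ffmpeg invocation quoted above."""
    return ['ffmpeg', '-i', src, '-an',
            '-vf', 'scale=960:540,format=yuv420p',
            '-colorspace', 'bt470bg',
            '-color_range', 'tv',
            '-color_primaries', 'smpte170m',
            '-color_trc', 'bt709',
            '-brand', 'mp42',
            '-c:v', 'libx264',
            '-profile:v', 'baseline',
            dst]

# Usage (not executed here):
#   subprocess.run(ffmpeg_h264_baseline_args('drop.avi', 'drop.mp4'), check=True)
```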
@thebruce87m: I was able to put something together with help from a RidgeRun example pipeline.
The solution required a few extra components to be installed:
- Install gstreamer1.0-libav (we were missing the avdec_h264 GStreamer plugin):
  - sudo apt install gstreamer1.0-libav
- Confirm the plugin is installed:
  - gst-inspect-1.0 avdec_h264
I was only targeting MP4 files, so I dropped the AVI support and ended up with the following pipeline definition:
PIPELINE = """filesrc location=%s ! qtdemux name=demux demux.video_0 !
h264parse ! avdec_h264 ! videorate ! videoconvert n-threads=4 !
videoscale n-threads=4 ! {src_caps} ! {leaky_q} """ % (videosrc)
Let me know if this works for you and whether there are any changes or optimizations to be made. I admittedly know only enough GStreamer to be dangerous, so I would welcome a second set of eyes!
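For anyone who still needs AVI input, a hypothetical generalization of the MP4-only definition above keeps the original avidemux/decodebin branch alongside the explicit h264parse/avdec_h264 path; {src_caps} and {leaky_q} are still resolved later by .format(), as in gstreamer.py:

```python
def file_src_pipeline(videosrc):
    """Build the file-source front half of the pipeline string.
    AVI files go through avidemux + decodebin; everything else is
    assumed to be an H.264 MP4 and uses qtdemux + avdec_h264."""
    if videosrc.endswith('avi'):
        head = ('filesrc location=%s ! avidemux name=demux demux.video_0 '
                '! queue ! decodebin' % videosrc)
    else:
        head = ('filesrc location=%s ! qtdemux name=demux demux.video_0 '
                '! h264parse ! avdec_h264' % videosrc)
    return head + (' ! videorate ! videoconvert n-threads=4'
                   ' ! videoscale n-threads=4 ! {src_caps} ! {leaky_q} ')
```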
@jleger12
Thanks! With this edit to gstreamer.py, along with changing to glimagesink, I am able to run the example:
diff --git a/gstreamer/gstreamer.py b/gstreamer/gstreamer.py
index 13dcd05..adff762 100755
--- a/gstreamer/gstreamer.py
+++ b/gstreamer/gstreamer.py
@@ -223,10 +223,11 @@ def run_pipeline(user_function,
PIPELINE = 'rtspsrc location=%s'%videosrc
else:
demux = 'avidemux' if videosrc.endswith('avi') else 'qtdemux'
- PIPELINE = """filesrc location=%s ! %s name=demux demux.video_0
- ! queue ! decodebin ! videorate
- ! videoconvert n-threads=4 ! videoscale n-threads=4
- ! {src_caps} ! {leaky_q} """ % (videosrc, demux)
+
+ PIPELINE = """filesrc location=%s ! qtdemux name=demux demux.video_0 !
+ h264parse ! avdec_h264 ! videorate ! videoconvert n-threads=4 !
+ videoscale n-threads=4 ! {src_caps} ! {leaky_q} """ % (videosrc)
+
''' Check for the object tracker.'''
if trackerName != None:
if trackerName == 'mediapipe':
@@ -254,7 +255,7 @@ def run_pipeline(user_function,
t. ! {leaky_q} ! videoconvert ! videoscale ! {scale_caps} ! videobox name=box autocrop=true
! {sink_caps} ! {sink_element}
t. ! {leaky_q} ! videoconvert
- ! rsvgoverlay name=overlay ! videoconvert ! ximagesink sync=false
+ ! rsvgoverlay name=overlay ! videoconvert ! glimagesink sync=false
"""
if objectOfTracker:
mot_tracker = objectOfTracker.trackerObject.mot_tracker
Now what I don't understand is that the reported inference time is 140 ms, but the reported frame rate is 0 fps, which I think really means "very slow". This is reflected in the bounding boxes updating far less often than the video output.
Surely if inference takes 140 ms I should be getting ~7 fps? Do you have any insights here? If not, I'll open a new issue.
Thanks for your help!