gstreamer-imx
Encoder hang when used with tee on newer GStreamer
With the new GStreamer, tee now partially implements the allocation query, though some hangs are now observed with the imx encoder. This was initially reported here:
https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/358#note_119769
But it was not reproduced without the IMX elements. The hang is encountered when a tee pad is dynamically added.
imxv4l2videosrc ! imxipuvideotransform ! capsfilter ! tee ! fakesink

That works until we add something new to the tee:

... queue ! videorate ! imxvpuenc_h264 ! capsfilter ! appsink
To help further: a tee with a single pad will negotiate allocation (rather than blocking it), but will prevent a downstream pool/allocator from being used.
The video source will then have to allocate the buffers. As they are IMX elements, they will likely negotiate something IMX-specific. When the new branch is added (in this case fakesink), a renegotiation should start, the allocation query will now fail (because of fakesink), and the source should turn off zero-copy. Using fakevideosink would have let the allocation query succeed, though the number of buffers should have been increased by 1, so the source should allocate one more buffer to avoid the stall.
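To illustrate the "one more buffer" part (a hypothetical sketch, not actual gstreamer-imx code): a GstBaseSrc subclass could compensate in its decide_allocation vfunc by reserving one extra buffer in the pool proposed downstream, roughly like this:

#include <gst/gst.h>
#include <gst/base/gstbasesrc.h>

/* Fragment of a hypothetical GstBaseSrc subclass; assumes it was
 * declared with G_DEFINE_TYPE (MySrc, my_src, GST_TYPE_BASE_SRC),
 * which provides my_src_parent_class. */
static gboolean
my_src_decide_allocation (GstBaseSrc *src, GstQuery *query)
{
  if (gst_query_get_n_allocation_pools (query) > 0) {
    GstBufferPool *pool = NULL;
    guint size, min, max;

    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);

    /* Reserve one extra buffer so a sink that holds on to the last
     * buffer (as fakevideosink does) cannot starve the pool and
     * stall the pipeline. */
    min += 1;
    if (max != 0 && max < min)
      max = min;

    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);

    if (pool != NULL)
      gst_object_unref (pool);
  }

  /* Chain up: the base class configures the chosen pool, or falls
   * back to its own allocation when no usable pool was proposed
   * (e.g. because the query failed due to fakesink). */
  return GST_BASE_SRC_CLASS (my_src_parent_class)->decide_allocation (src, query);
}

This only shows the mechanism; whether the imx source actually handles this case is exactly what would need checking.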
Because I don't have the required setup, I could not investigate further.
Cc @sascha.dierberg
Can you please test with the latest release and update the status here, @ndufresne?
@angolini I simply moved the reported issue from upstream GStreamer to here. I'll ping @sascha.dierberg on the GitLab issue, in the hope she'll give it a try. Otherwise, I think you can assume the issue is fixed; ideally we would point to the PR that fixes it, though.
Because we had a refactor of this package, I'm not sure we would be able to point to a single PR, but I understand (and would also prefer) if we could point to a set of PR links.
I will keep this issue open while I'm reviewing all the 2019/2020 issues. When I reach August 2021, I'll rethink how to handle it.
Thanks a lot for the update
I am still not sure about this issue, but I'm closing it. In case we need it, we can reopen it in the future, or maybe create a new one pointing to the newer releases.
Reopening this since some people still seem to have this problem (see https://github.com/Freescale/gstreamer-imx/issues/306 for example). @ndufresne, I wrote this example to attempt to recreate the issue. Can you take a look at it to see if it is correct?
Your example looks fine; the only strange thing is the use of appsink in the new branch. Since you are not pulling from it, it will eventually hang for other reasons, I think. Perhaps best to replace it with fakesink to make sure you are reproducing the same bug. Now, a backtrace of all threads is what we are looking for.
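In case it helps, one way to capture such a backtrace while the pipeline is hung (assuming gdb is available on the target, and assuming the test binary is named encoder-tee-test):

gdb --batch -p $(pidof encoder-tee-test) -ex "thread apply all bt" > tee-hang-backtrace.txt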
I kept the appsink to recreate what you wrote:
... queue ! videorate ! imxvpuenc_h264 ! capsfilter ! appsink
But yeah, swapping it with a fakesink is no big deal. However, I could not reproduce the problem. Note that I was using Yocto Dunfell and master, GStreamer 1.18.2, the latest gstreamer-imx master, and the latest libimxvpuapi2 master. Pinging @feelwa17, who reported a similar or the same issue - if possible, please retry with these versions.
Hi,
We have the same issue with the tee element and imx elements: when one branch is "late", the other one is also impacted.
Our pipeline is used to display video from a camera on a screen and send it via UDP at the same time:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=640,height=640,framerate=20/1 ! tee name=t \
! queue ! imxvpuenc_h264 bitrate=0 quantization=25 ! rtph264pay ! udpsink host=<ip> port=1234 t. \
! queue ! imxg2dvideotransform ! video/x-raw,width=128,height=128 ! fbdevsink device=/dev/fb0
This pipeline runs fine when the "udpsink" branch keeps up, but when the connection with the client is lagging (e.g. when moving away from the client), the stream is impacted (which is expected behaviour) BUT the branch rendering video on the framebuffer is also impacted!
For reference: board: i.MX8MMQ, gstreamer-imx version: 2.1.0, libimxvpuapi version: 2.2.1, libimxdmabuffer version: 1.1.2
@legerch Late reply, but your case is probably caused by backpressure. None of the queues are leaky, so when the branch that sends data through UDP experiences problems, its queue element might reach its capacity, and thus block. When the tee element then tries to push frames downstream, it eventually gets blocked by the filled queue.
Note that UDP sockets can block when sending. See here for a possible explanation.
I'd try to set both queues' leaky properties to upstream.
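For example (a sketch based on your pipeline; the leaky settings are the only change):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=640,height=640,framerate=20/1 ! tee name=t \
! queue leaky=upstream ! imxvpuenc_h264 bitrate=0 quantization=25 ! rtph264pay ! udpsink host=<ip> port=1234 t. \
! queue leaky=upstream ! imxg2dvideotransform ! video/x-raw,width=128,height=128 ! fbdevsink device=/dev/fb0

With leaky=upstream, a full queue drops newly arriving buffers instead of blocking the tee, so the framebuffer branch keeps running even when the UDP branch falls behind.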
I tried to reproduce this several times, and it just won't happen. I ran tests on an imx6 sabre SD machine, using Yocto Kirkstone and GStreamer 1.20.5. I took my encoder-tee-test.c example (linked further up) and modified it to use fakesink instead of appsink, just like @ndufresne suggested. I ran it with this command line:
GST_DEBUG=2,*imx*:5,*basesink*:9 ./encoder-tee-test
The logs indicate that the sinks do get data. Nothing freezes.
So, I have to close this issue due to lack of data. It might have already been fixed by other commits. If anyone sees problems with tee again, please reopen, and post the GStreamer version, the exact build environment (for example, which Yocto version), and logs (with GST_DEBUG set to "2,*tee*:7,*imx*:7").
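For example (assuming a gst-launch pipeline; GST_DEBUG_FILE is optional and just redirects the log output to a file of your choice):

GST_DEBUG="2,*tee*:7,*imx*:7" GST_DEBUG_FILE=/tmp/tee-debug.log gst-launch-1.0 <your pipeline here>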