ros_deep_learning
"No EGLDisplay to create EGLImage" on headless remote Jetson.
Hi,
I am trying to get the video_source node working on a headless remote Jetson. I am running `roslaunch ros_deep_learning video_source.ros1.launch input:=/inputs/test.mp4` and receiving the following output for each frame:
```
[gstreamer] gstBufferManager -- map buffer size was less than max size (1008 vs 3110400)
[gstreamer] gstBufferManager recieve caps: video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)20/1
[gstreamer] gstBufferManager -- recieved first frame, codec=h264 format=nv12 width=1920 height=1080 size=3110400
[gstreamer] gstBufferManager -- recieved NVMM memory
NvEGLImageFromFd: No EGLDisplay to create EGLImage
[gstreamer] gstBufferManager -- failed to map EGLImage from NVMM buffer
[gstreamer] gstDecoder -- failed to handle incoming buffer
```
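If I read the log correctly, the failure happens when the NVMM (DMA-BUF) buffer is mapped into an EGLImage. As a rough illustration of that call path (my own sketch based on the `NvEGLImageFromFd()` declaration in the Jetson Multimedia API's `nvbuf_utils.h`, not the actual jetson-utils code):

```cpp
// Sketch only: how an NVMM buffer fd gets mapped to an EGLImage.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include "nvbuf_utils.h"   // declares NvEGLImageFromFd() on JetPack 4.x

EGLImageKHR mapNvmmBuffer(EGLDisplay display, int dmabuf_fd)
{
    // With no reachable X server, 'display' is effectively EGL_NO_DISPLAY,
    // so the driver logs "No EGLDisplay to create EGLImage" and fails.
    return NvEGLImageFromFd(display, dmabuf_fd);
}
```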
I understand that an EGL display isn't available over SSH. We actually hit a similar error while testing DeepStream, and there `unset DISPLAY` was enough to fix it, but it didn't work here.
Do I need to do anything else so the node stops looking for a display? Is a display mandatory for converting the input into a ROS message?
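For what it's worth, my understanding is that EGL can be initialized entirely headlessly through the EGL_EXT_platform_device extension, which enumerates GPUs without any X server. Here is a minimal, self-contained sketch (function and enum names come from the EGL registry; whether the decode path here could actually use a display created this way is my assumption):

```cpp
// Minimal sketch: creating an EGLDisplay with no X server, via the
// EGL_EXT_platform_device extension. Whether jetson-utils can consume
// such a display is an open question -- this only tests availability.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cstdio>

int main()
{
    // The device-platform entry points are extensions, so load them at runtime.
    auto queryDevices = (PFNEGLQUERYDEVICESEXTPROC)
        eglGetProcAddress("eglQueryDevicesEXT");
    auto getPlatformDisplay = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
        eglGetProcAddress("eglGetPlatformDisplayEXT");

    if (!queryDevices || !getPlatformDisplay)
    {
        fprintf(stderr, "EGL device platform extensions not available\n");
        return 1;
    }

    // Enumerate EGL devices (GPUs) directly -- no $DISPLAY, no X involved.
    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    if (!queryDevices(8, devices, &numDevices) || numDevices == 0)
    {
        fprintf(stderr, "no EGL devices found\n");
        return 1;
    }

    // Build a display from the first device instead of a native X display.
    EGLDisplay display = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT,
                                            devices[0], nullptr);

    EGLint major = 0, minor = 0;
    if (display == EGL_NO_DISPLAY || !eglInitialize(display, &major, &minor))
    {
        fprintf(stderr, "failed to initialize a headless EGLDisplay\n");
        return 1;
    }

    printf("headless EGL %d.%d initialized\n", major, minor);
    eglTerminate(display);
    return 0;
}
```

Running something like this over SSH (compiled with `g++ -lEGL`) might at least confirm whether a headless EGLDisplay is obtainable inside the container.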
Some details about the setup:
- Platform: AGX Xavier
- JetPack: 3.8
- ROS distro: Noetic (Built from source)
Everything is deployed in an L4T-based Docker container.
Thank you very much for your help.
Edit: I managed to get it working by plugging in a monitor and setting `DISPLAY=:1` (i.e. `export DISPLAY=:1` before launching), but I would still need to run it headless if possible.
Thanks for posting the issue and the solution; I hit the same problem.