
RTSP source in detectnet-camera example

Open aprentis opened this issue 6 years ago • 18 comments

Hi everybody.

I want to get images from my Hikvision camera. The RTSP link is rtsp://192.168.196.243:554/Streaming/Channels/102. I have successfully used this pipeline in my OpenCV (3.3.1) app, but it does not work in the detectnet-camera example, which I've built on my x86_64 PC (CUDA 8.0, OpenCV 3.3.1, TensorRT 2.1, build 2102).

I have added this line to the code cloned from this repo:

ss<< "rtspsrc location=rtsp://192.168.196.243:554/Streaming/Channels/102 protocols=udp latency=0 ! decodebin ! videoconvert ! appsink name=mysink ";

These lines appeared in the output:

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string: rtspsrc location=rtsp://192.168.196.243:554/Streaming/Channels/102 protocols=udp latency=0 ! decodebin ! videoconvert ! appsink name=mysink

detectnet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  24 (bpp)

[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> typefind
[gstreamer] gstreamer changed state from NULL to READY ==> decodebin0
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> typefind
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0

detectnet-camera: camera open for streaming

detectnet-camera:  failed to capture frame
detectnet-camera:  failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda]  cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f), (float4*)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda]     invalid device pointer (error 17) (hex 0x11)
[cuda]     /home/aprentis/jetson-inference/detectnet-camera/detectnet-camera.cpp:247
[cuda]  registered 14745600 byte openGL texture for interop access (1280x720)

What's wrong? Thanks in advance!

aprentis avatar Nov 15 '17 10:11 aprentis

As it says, 'detectnet-camera: failed to convert from NV12 to RGBA'

The conversion to RGBAf format from NV12 or RGB happens in gstCamera.cpp, in the function ConvertRGBA. Either modify the onboard-camera check there, or change your pipeline to use 'nvvidconv' instead of videoconvert so that the output is delivered in 'NV12' format.
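
For example, something along these lines keeps decodebin but lets nvvidconv hand NV12 to the appsink (just a sketch, assuming the Jetson nvvidconv element is available and reusing your camera URL from above; the caps may need adjusting for your setup):

ss << "rtspsrc location=rtsp://192.168.196.243:554/Streaming/Channels/102 protocols=udp latency=0 ! "
      "decodebin ! nvvidconv ! video/x-raw, format=(string)NV12 ! "   // request NV12 in system memory
      "appsink name=mysink";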

Hope this works.

kanakiyab avatar Nov 17 '17 18:11 kanakiyab

I had the same issue and managed to fix it using @omaralvarez's Pull Request described here: https://github.com/dusty-nv/jetson-inference/issues/88

After pulling in the code from that PR, in detectnet-camera I replaced the line:

gstCamera* camera = gstCamera::Create(DEFAULT_CAMERA);

With:

gstPipeline* pipeline = gstPipeline::Create(
		"rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov ! queue ! rtph264depay ! h264parse ! queue ! omxh264dec ! appsink name=mysink",
		240,	// stream width
		160,	// stream height
		12	// depth in bits per pixel (NV12)
);

(Replace the RTSP address, width and height accordingly, and add #include "gstPipeline.h" at the top.)

alinabee avatar Nov 23 '17 01:11 alinabee

Yes, that will work fine because it gets rid of the onboard-camera check, so there is just one conversion method, RGBtoRGBAf, which is the one you should use.
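
For context, the dispatch that check controls looks roughly like this in gstCamera.cpp (paraphrased from that revision, not verbatim; kernel names are the ones from cudaYUV.h/cudaRGB.h, and the flag name may differ in your copy):

// paraphrased sketch of gstCamera::ConvertRGBA() from that era of the code
bool gstCamera::ConvertRGBA( void* input, void** output )
{
	if( !input || !output )
		return false;

	if( onboardCamera )	// MIPI CSI path: frames arrive as NV12
		CUDA(cudaNV12ToRGBAf((uint8_t*)input, (float4*)mRGBA, mWidth, mHeight));
	else			// appsink/V4L2 path: frames arrive as packed RGB
		CUDA(cudaRGBToRGBAf((uchar3*)input, (float4*)mRGBA, mWidth, mHeight));

	*output = mRGBA;
	return true;
}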

bkanaki avatar Nov 23 '17 04:11 bkanaki

Could you post the full code on how to use gstPipeline?

niccoloraspa avatar Jul 24 '18 08:07 niccoloraspa

Dear all, is it possible to see the full code for this? I'm new to GStreamer and have been looking for several days for a simple and clear example of how to play an RTSP stream from an IP camera with the imagenet-camera source code. Your help is much appreciated.

LSAMIJN avatar Oct 15 '19 12:10 LSAMIJN

The full code for gstPipeline is in the Pull Request: https://github.com/dusty-nv/jetson-inference/pull/93/commits/2717e8914dad03116641247ed2dd9ebc88379d4c
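
In short, the usage pattern looks roughly like this (a sketch only, assuming gstPipeline from that PR mirrors gstCamera's Open()/Capture()/ConvertRGBA() interface of that revision; replace the URL and dimensions with your stream's values):

#include "gstPipeline.h"

// sketch: drop-in replacement for gstCamera inside detectnet-camera.cpp's main()
gstPipeline* pipeline = gstPipeline::Create(
	"rtspsrc location=rtsp://<your-camera-url> ! queue ! rtph264depay ! h264parse ! queue ! omxh264dec ! appsink name=mysink",
	1280, 720, 12);	// width, height, depth (bpp)

if( !pipeline || !pipeline->Open() )
	return 0;

while( !signal_recieved )
{
	void* imgCPU  = NULL;
	void* imgCUDA = NULL;

	if( !pipeline->Capture(&imgCPU, &imgCUDA, 1000) )	// grab the next frame
		printf("failed to capture frame\n");

	void* imgRGBA = NULL;

	if( !pipeline->ConvertRGBA(imgCUDA, &imgRGBA) )		// NV12/RGB -> float4 RGBA
		printf("failed to convert frame to RGBA\n");

	// imgRGBA is then passed to detectNet::Detect() exactly as in detectnet-camera.cpp
}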

alinabee avatar Oct 15 '19 22:10 alinabee

Hi,

I managed to get an RTSP stream from an IP camera with a pipeline, but I don't understand how to make it work with the gstCamera that is used in the jetson-inference example. How should I make the pipeline the video source of the gstCamera? I'm really sorry if I ask stupid questions, but this is completely new to me. When I start the program now, I have my IP camera's live image on top of the onboard camera image from my Jetson...

Complete C++ code:

#include "gstCamera.h" #include "glDisplay.h" #include "detectNet.h" #include "commandLine.h" #include <signal.h>

bool signal_recieved = false;

void sig_handler(int signo)
{
	if( signo == SIGINT )
	{
		printf("received SIGINT\n");
		signal_recieved = true;
	}
}

int usage()
{
	printf("usage: detectnet-camera [-h] [--network NETWORK] [--threshold THRESHOLD]\n");
	printf("                        [--camera CAMERA] [--width WIDTH] [--height HEIGHT]\n\n");
	printf("Locate objects in a live camera stream using an object detection DNN.\n\n");
	printf("optional arguments:\n");
	printf("  --help              show this help message and exit\n");
	printf("  --network NETWORK   pre-trained model to load (see below for options)\n");
	printf("  --overlay OVERLAY   detection overlay flags (e.g. --overlay=box,labels,conf)\n");
	printf("                      valid combinations are:  'box', 'labels', 'conf', 'none'\n");
	printf("  --alpha ALPHA       overlay alpha blending value, range 0-255 (default: 120)\n");
	printf("  --camera CAMERA     index of the MIPI CSI camera to use (e.g. CSI camera 0),\n");
	printf("                      or for VL42 cameras the /dev/video device to use.\n");
	printf("                      by default, MIPI CSI camera 0 will be used.\n");
	printf("  --width WIDTH       desired width of camera stream (default is 1280 pixels)\n");
	printf("  --height HEIGHT     desired height of camera stream (default is 720 pixels)\n");
	printf("  --threshold VALUE   minimum threshold for detection (default is 0.5)\n\n");

printf("%s\n", detectNet::Usage());

return 0;

}

int main( int argc, char** argv )
{
	/*
	 * parse command line
	 */
	commandLine cmdLine(argc, argv);

if( cmdLine.GetFlag("help") )
	return usage();


/*
 * attach signal handler
 */
if( signal(SIGINT, sig_handler) == SIG_ERR )
	printf("\ncan't catch SIGINT\n");

    
    
    
    
    
    /* Added by me for RTSP streaming */

    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;

      /* Initialize GStreamer */
      gst_init (&argc, &argv);

      /* Build the pipeline */
      pipeline = gst_parse_launch ("playbin uri=rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=1&unicast=true&proto=Onvif",NULL);

      /* Start playing IP Camera */
      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      
      /*         
       How to set pipeline as gstCamera source instead of onboard cam of TX2?
       */
      
      
/* create the camera device */
gstCamera* camera = gstCamera::Create(cmdLine.GetInt("width", gstCamera::DefaultWidth),cmdLine.GetInt("height", gstCamera::DefaultHeight),0);    
    
if( !camera )
{
	printf("\ndetectnet-camera:  failed to initialize camera device\n");
	return 0;
}

printf("\ndetectnet-camera:  successfully initialized camera device\n");
printf("    width:  %u\n", camera->GetWidth());
printf("   height:  %u\n", camera->GetHeight());
printf("    depth:  %u (bpp)\n\n", camera->GetPixelDepth());


/*
 * create detection network
 */
detectNet* net = detectNet::Create(argc, argv);

if( !net )
{
	printf("detectnet-camera:   failed to load detectNet model\n");
	return 0;
}

// parse overlay flags
const uint32_t overlayFlags = detectNet::OverlayFlagsFromStr(cmdLine.GetString("overlay", "box,labels,conf"));


/*
 * create openGL window
 */
glDisplay* display = glDisplay::Create();

if( !display ) 
	printf("detectnet-camera:  failed to create openGL display\n");


/*
 * start streaming
 */
if( !camera->Open() )
{
	printf("detectnet-camera:  failed to open camera for streaming\n");
	return 0;
}

printf("detectnet-camera:  camera open for streaming\n");


/*
 * processing loop
 */
float confidence = 0.0f;

while( !signal_recieved )
{
	// capture RGBA image
	float* imgRGBA = NULL;
	
	if( !camera->CaptureRGBA(&imgRGBA, 1000) )
		printf("detectnet-camera:  failed to capture RGBA image from camera\n");

	// detect objects in the frame
	detectNet::Detection* detections = NULL;

	const int numDetections = net->Detect(imgRGBA, camera->GetWidth(), camera->GetHeight(), &detections, overlayFlags);
	
	if( numDetections > 0 )
	{
		printf("%i objects detected\n", numDetections);
	
		for( int n=0; n < numDetections; n++ )
		{
			printf("detected obj %i  class #%u (%s)  confidence=%f\n", n, detections[n].ClassID, net->GetClassDesc(detections[n].ClassID), detections[n].Confidence);
			printf("bounding box %i  (%f, %f)  (%f, %f)  w=%f  h=%f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height()); 
		}
	}	

	// update display
	if( display != NULL )
	{
		// render the image
		display->RenderOnce(imgRGBA, camera->GetWidth(), camera->GetHeight());

		// update the status bar
		char str[256];
		sprintf(str, "TensorRT %i.%i.%i | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, precisionTypeToStr(net->GetPrecision()), net->GetNetworkFPS());
		display->SetTitle(str);

		// check if the user quit
		if( display->IsClosed() )
			signal_recieved = true;
	}

	// print out timing info
	net->PrintProfilerTimes();
}


/*
 * destroy resources
 */
printf("detectnet-camera:  shutting down...\n");

SAFE_DELETE(camera);
SAFE_DELETE(display);
SAFE_DELETE(net);

printf("detectnet-camera:  shutdown complete.\n");
return 0;

}

LSAMIJN avatar Oct 16 '19 14:10 LSAMIJN

I've updated the jetson-inference code with alinabee's revisions; when I run sudo make I get the following error:

QMutex: No such file or directory
 #include <QMutex>
compilation terminated.

Any idea how to get this dependency installed on my Jetson Nano?

Thanks

sms720 avatar Oct 17 '19 11:10 sms720

I also get this:

QMutex: No such file or directory
 #include <QMutex>
compilation terminated.

Any ideas gratefully received!

jonwilliams84 avatar Nov 23 '19 16:11 jonwilliams84

Here is what I did to get my external Hikvision IP camera feed working with the detectnet example.

## Install v4l2loopback utilities and kernel driver
$ sudo apt install v4l2loopback-utils v4l2loopback-dkms

## Install ffmpeg
$ sudo apt install ffmpeg

## Load the v4l2loopback driver. This will create /dev/video0 device
$ sudo modprobe v4l2loopback

## Using ffmpeg pull rtsp stream from camera and push it to the video device created by
## v4l2loopback kernel module.
$ ffmpeg -thread_queue_size 512 \
    -i rtsp://camuser:[email protected]/Streaming/channels/502 \
    -vcodec rawvideo  -vf scale=640:480 -f v4l2 \
    -threads 0 -pix_fmt yuyv422 /dev/video0

## Now you can run the detectnet or imagenet example against the video device
## make sure to match the height and width specified in the ffmpeg command here.
## I was getting a gstreamer error when the sizes did not match.
$ detectnet-camera.py  \
   --network=ped-100 \
   --width=640 --height=360 \
   --camera=/dev/video0 \
   --threshold=1.8 --overlay=box

linusali avatar Apr 21 '20 17:04 linusali

I tried that also... I've been trying to get an IP camera to work for quite a while with no luck. I followed your patch and got the gstreamer error below. The stream runs great in a browser but nothing on my TX2. Any ideas? Any hints? Best,

sudo modprobe v4l2loopback

ffmpeg -thread_queue_size 512 -i http://192.168.1.9:8888/ir.mjpeg -vcodec rawvideo -vf scale=320:240 -f v4l2 -threads 0 -pix_fmt yuv420p /dev/video1

./detectnet-camera --width=320 --height=240 --camera=/dev/video1 --overlay=box

[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera onEOS
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
detectnet-camera:  camera open for streaming

flurpo avatar May 08 '20 05:05 flurpo

I use the stream from my Raspberry Pi running MotionEyeOS with linusali's method above. Thanks for the solution. I did tweak the ffmpeg command line slightly:

$ ffmpeg -thread_queue_size 512 \
    -i http://192.168.0.142:8081/ \
    -vcodec rawvideo -vf scale=1280:720 -f v4l2 \
    -threads 0 -pix_fmt yuyv422 /dev/video2

$ detectnet-camera.py \
   --network=ped-100 \
   --width=1280 --height=720 \
   --camera=/dev/video2 \
   --threshold=1.8 --overlay=box

neildotwilliams avatar Jun 21 '20 19:06 neildotwilliams

linusali's steps above worked fine for me, thanks! But I had some issues that were solved by wrapping the RTSP URL in quotes ("rtsp://..."). Here is a video of everything working together: https://youtu.be/PLBffle0CcQ

engineer1982 avatar Jun 25 '20 13:06 engineer1982

Hello, how can I use RTSP with "videoOutput"? I need to stream the detectnet output from my Jetson Nano over the LAN via RTSP.

niyazFattahov avatar Dec 07 '21 13:12 niyazFattahov

Hi @niyazFattahov, jetson-inference/jetson-utils doesn't support RTSP output, as it requires special RTSP server code; otherwise I would have added it, were it simple.

Note that DeepStream supports RTSP output and has support for the RTSP server if you need that.

dusty-nv avatar Dec 07 '21 20:12 dusty-nv

What about this: https://github.com/GStreamer/gst-rtsp-server/blob/1.14.5/examples/test-launch.c - can it be used somehow? I have only tested RTSP with GStreamer like this: ./test-launch "videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96" (https://forums.developer.nvidia.com/t/jetson-nano-faq/82953). Thank you.
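
For reference, the linked test-launch.c boils down to roughly this (a trimmed sketch of that example with the launch string I tested hard-coded; it is standalone gst-rtsp-server code, not part of jetson-inference):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[])
{
	gst_init(&argc, &argv);

	GMainLoop *loop = g_main_loop_new(NULL, FALSE);

	// create the RTSP server and attach a factory that runs the given launch string
	GstRTSPServer *server = gst_rtsp_server_new();
	GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);
	GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();

	gst_rtsp_media_factory_set_launch(factory,
		"( videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 )");
	gst_rtsp_media_factory_set_shared(factory, TRUE);

	gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
	g_object_unref(mounts);

	gst_rtsp_server_attach(server, NULL);	// serves on the default port 8554

	g_print("stream ready at rtsp://127.0.0.1:8554/test\n");
	g_main_loop_run(loop);

	return 0;
}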

niyazFattahov avatar Dec 07 '21 20:12 niyazFattahov

In theory, yes, some similar code / dependencies would need to be integrated into the videoOutput class in order to support RTSP output.

dusty-nv avatar Dec 07 '21 20:12 dusty-nv

@neildotwilliams @dusty-nv

I've tried linusali's steps above on a Jetson TX2, but I got this error:

aititx2-2@aititx22-desktop:~$ ffmpeg -thread_queue_size 2631 -i "rtsp://aititx2-2@[email protected]:1935/profile" -vcodec rawvideo -vf scale=1280:720 -f v4l2 -threads 0 -pix_fmt yuyv422 /dev/video0
ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
[rtsp @ 0x559b6b26b0] max delay reached. need to consume packet
[rtsp @ 0x559b6b26b0] RTP: missed 2 packets
[rtsp @ 0x559b6b26b0] max delay reached. need to consume packet
[rtsp @ 0x559b6b26b0] RTP: missed 2 packets
[rtsp @ 0x559b6b26b0] max delay reached. need to consume packet
[rtsp @ 0x559b6b26b0] RTP: missed 1 packets
Input #0, rtsp, from 'rtsp://aititx2-2@[email protected]:1935/profile':
  Metadata:
    title           : Unnamed
    comment         : N/A
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 720x1280, 90k tbr, 90k tbn, 180k tbc
    Stream #0:1: Audio: aac (LC), 32000 Hz, stereo, fltp
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
[v4l2 @ 0x559b7204e0] Frame rate very high for a muxer not efficiently supporting it.
Please consider specifying a lower framerate, a different muxer or -vsync 2
[v4l2 @ 0x559b7204e0] ioctl(VIDIOC_G_FMT): Invalid argument
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 -- 
Conversion failed!

Please help! I use RTSP to connect to my Android phone's camera.

e-mily avatar May 15 '22 09:05 e-mily