detection_with_yolo not available in latest dl_streamer Docker image
Running DL Streamer using the Docker option from https://dlstreamer.github.io/get_started/install/install_guide_ubuntu.html#id2 on a Serpent Canyon board with Ubuntu 22.04.
I can't see the "detection_with_yolo" folder under...
Please get the latest DL Streamer Dockerfile, as described here: "https://dlstreamer.github.io/get_started/install/install_guide_ubuntu.html#step-3-download-the-intel-dl-streamer-dockerfile", using:
wget $(wget -q -O - https://api.github.com/repos/dlstreamer/dlstreamer/releases/latest | \
  jq -r '.assets[] | select(.name | contains ("dlstreamer.Dockerfile")) | .browser_download_url')
And build and run it. In my case I called:
docker build -t dlstreamer -f dlstreamer.Dockerfile .
and
docker run -it --net=host --device=/dev/dri --device=/dev/video0 --device=/dev/video1 \
  --group-add=110 -v ~/.Xauthority:/home/dlstreamer/.Xauthority -v /tmp/.X11-unix \
  -e DISPLAY=$DISPLAY -v /dev/bus/usb --rm dlstreamer /bin/bash
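Side note: the group id in "--group-add=110" is host-specific (typically the "render" group, needed for GPU access via /dev/dri). You can look up the right id on your host, e.g. (assuming your render node is renderD128):
$> getent group render
$> stat -c '%g' /dev/dri/renderD128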
Under /opt/intel/dlstreamer/samples/gstreamer/gst_launch I can see the added Yolo samples inside the container:
$> ll
total 48
drwxrwxr-x 12 root root 4096 Apr 26 17:05 ./
drwxrwxr-x 7 root root 4096 Apr 26 17:05 ../
drwxrwxr-x 2 root root 4096 Apr 26 17:05 action_recognition/
drwxrwxr-x 3 root root 4096 Apr 26 17:05 audio_detect/
drwxrwxr-x 2 root root 4096 Apr 26 17:05 detection_with_yolo/
drwxrwxr-x 3 root root 4096 Apr 26 17:05 face_detection_and_classification/
drwxrwxr-x 2 root root 4096 Apr 26 17:05 geti_deployment/
drwxrwxr-x 3 root root 4096 Apr 26 17:05 gvapython/
drwxrwxr-x 3 root root 4096 Apr 26 17:05 human_pose_estimation/
drwxrwxr-x 2 root root 4096 Apr 26 17:05 instance_segmentation/
drwxrwxr-x 3 root root 4096 Apr 26 17:05 metapublish/
drwxrwxr-x 3 root root 4096 Apr 26 17:05 vehicle_pedestrian_tracking/
However, I haven't tested it yet.
So I tried the command from the link and also yours, and I got this... a proxy issue.
Are you within a corporate network, perhaps even behind an Intel proxy ;-)?
Then you probably need to exclude ".intel.com" via NO_PROXY/no_proxy.
EDIT: You have set http_proxy and https_proxy to "proxy-dmz.intel.com", right? Then you might need to set no_proxy as well.
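For reference, a minimal sketch of what that environment for the build could look like - the hostname, port and no_proxy list are placeholders for your network, not verified values:
$> export http_proxy=http://proxy-dmz.intel.com:912
$> export https_proxy=http://proxy-dmz.intel.com:912
$> export no_proxy=localhost,127.0.0.1,.intel.com
$> docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy \
   --build-arg no_proxy=$no_proxy -t dlstreamer -f dlstreamer.Dockerfile .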
Can you open the file in the browser and copy&paste it into a new file?
I was able to get past the proxy issue, but now when building the Docker image I am hitting this...
Is it reproducible? Maybe it was a short network problem. Have you tried again? It currently works for me; I also checked in a browser that "http://archive.ubuntu.com" is accessible.
What did you change "to pass the proxy issue"?
I was able to run the full docker build command and I can see the new YOLO additions. Question: where would the model be? I know where the model_proc is, but what about the yolov8.xml file?
Please have a look at the newly added samples from the latest release: "https://github.com/dlstreamer/dlstreamer/tree/master/samples/gstreamer/gst_launch/detection_with_yolo"
When installing ultralytics, it is not mentioned where these 2 are located.
What do you mean, which "these 2"? Do you mean the two scripts, "yolo_download.sh" and "yolo_detect.sh"?
You can find these two scripts under the folder "detection_with_yolo/" in the container, under "/opt/intel/dlstreamer/samples/gstreamer/gst_launch/".
They are coming from this repository, under "https://github.com/dlstreamer/dlstreamer/tree/master/samples/gstreamer/gst_launch/detection_with_yolo".
And with the latest Dockerfile from the last release there is the new folder "detection_with_yolo" in the container.
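You can verify this inside the container with e.g.:
$> ls /opt/intel/dlstreamer/samples/gstreamer/gst_launch/detection_with_yolo/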
OK, I see; we should add that to the README file as well. But now I am hitting the same issue I had before, where I can't create a folder within the Docker container due to permission issues.
I would need to reroute that to a different writable place... I'm following the steps to create the Docker image etc. and still hit these issues...
Hmm, the script could have been written the same way as e.g. the tutorials, i.e. allowing you to specify a path.
Instead, the script uses something like this: MODEL_PATH="$PWD/public/$MODEL_NAME/FP32/$MODEL_NAME.xml"
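For example, since MODEL_PATH is built from $PWD, calling the script with MODEL_NAME=yolov8s from /home/dlstreamer/temp/models/myYolo8 would make it resolve to /home/dlstreamer/temp/models/myYolo8/public/yolov8s/FP32/yolov8s.xml (the same path that shows up in the download log below).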
=> It looks like you could go to a writable folder - as before, e.g. a subfolder in your home directory - and then call the script from there via its absolute path:
mkdir -p /home/dlstreamer/temp/models/myYolo8
cd /home/dlstreamer/temp/models/myYolo8
/opt/intel/dlstreamer/samples/gstreamer/gst_launch/detection_with_yolo/yolo_download.sh yolov8s
=> Then the script should create new sub-folders and store the downloaded and converted files in them.
EDIT: I just tried it in the container:
$> mkdir -p /home/dlstreamer/temp/models/myYolo8
$> cd /home/dlstreamer/temp/models/myYolo8
$> pip install ultralytics
$> export PATH=$PATH:/home/dlstreamer/.local/bin
$> /opt/intel/dlstreamer/samples/gstreamer/gst_launch/detection_with_yolo/yolo_download.sh yolov8s
Downloading and converting: /home/dlstreamer/temp/models/myYolo8/public/yolov8s/FP32/yolov8s.xml
Downloading https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt to 'yolov8s.pt'...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 21.5M/21.5M [00:01<00:00, 12.1MB/s]
YOLOv8s summary: 225 layers, 11166560 parameters, 0 gradients, 28.8 GFLOPs
Ultralytics YOLOv8.2.6 🚀 Python-3.10.12 torch-2.3.0+cu121 CPU
YOLOv8s summary (fused): 168 layers, 11156544 parameters, 0 gradients, 28.6 GFLOPs
PyTorch: starting from 'yolov8s.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (21.5 MB)
OpenVINO: starting export with openvino 2024.0.0-14509-34caeefd078-releases/2024/0...
OpenVINO: export success ✅ 2.5s, saved as 'yolov8s_openvino_model/' (42.9 MB)
Export complete (4.4s)
Results saved to /home/dlstreamer/temp/models/myYolo8/public/yolov8s
Predict: yolo predict task=detect model=yolov8s_openvino_model imgsz=640
Validate: yolo val task=detect model=yolov8s_openvino_model imgsz=640 data=coco.yaml
Visualize: https://netron.app
YOLOv8s summary: 225 layers, 11166560 parameters, 0 gradients, 28.8 GFLOPs
Ultralytics YOLOv8.2.6 🚀 Python-3.10.12 torch-2.3.0+cu121 CPU (Intel Core(TM) i7-7567U 3.50GHz)
YOLOv8s summary (fused): 168 layers, 11156544 parameters, 0 gradients, 28.6 GFLOPs
PyTorch: starting from 'yolov8s.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (21.5 MB)
OpenVINO: starting export with openvino 2024.0.0-14509-34caeefd078-releases/2024/0...
OpenVINO: export success ✅ 2.3s, saved as 'yolov8s_openvino_model/' (21.6 MB)
Export complete (4.2s)
Results saved to /home/dlstreamer/temp/models/myYolo8/public/yolov8s
Predict: yolo predict task=detect model=yolov8s_openvino_model imgsz=640 half
Validate: yolo val task=detect model=yolov8s_openvino_model imgsz=640 data=coco.yaml half
Visualize: https://netron.app
$> ll /home/dlstreamer/temp/models/myYolo8/public/yolov8s
total 22064
drwxr-xr-x 4 dlstreamer dlstreamer 4096 May 1 16:43 ./
drwxr-xr-x 3 dlstreamer dlstreamer 4096 May 1 16:42 ../
drwxr-xr-x 2 dlstreamer dlstreamer 4096 May 1 16:43 FP16/
drwxr-xr-x 2 dlstreamer dlstreamer 4096 May 1 16:42 FP32/
-rw-r--r-- 1 dlstreamer dlstreamer 22573363 May 1 16:42 yolov8s.pt
Be prepared to see the same problem when running the script "yolo_detect.sh" - again, no write permissions to the (hard-coded) paths.
However, the script prints the pipeline to the console before trying (and failing) to execute it. So you can copy the printed pipeline and change the (hard-coded) paths to your writable (home) directories.
This is what I did - and see the pipeline working:
$> gst-launch-1.0 urisourcebin buffer-size=4096 uri=https://videos.pexels.com/video-files/1192116/1192116-sd_640_360_30fps.mp4 ! \
decodebin ! vaapipostproc ! 'video/x-raw(memory:VASurface)' ! \
gvadetect model=/home/dlstreamer/temp/models/myYolo8/public/yolov8s/FP32/yolov8s.xml \
model-proc=/opt/intel/dlstreamer/samples/gstreamer/model_proc/public/yolo-v8.json device=GPU \
pre-process-backend=vaapi-surface-sharing ! queue ! gvawatermark ! videoconvertscale ! gvafpscounter ! \
vaapih264enc ! avimux name=mux ! filesink location=/home/dlstreamer/temp/1192116-sd_640_360_30fps_output.avi
And I get this output:
Actually, I did not try the detect script, but used the following command with my camera...
...and it works!!!
Now the question is: how can I get 2 pipelines going at the same time, camera 4 and camera 10?
Now it's getting interesting!
Very often I ended up implementing the use-case in a "real application" instead of a gst-launch-1.0 command line... because of the many requirements: concurrent sub-use-cases, exchanging information between the two pipelines, de-coupling the processing, synchronizing the streams later, buffering, queuing, querying etc. Such an application also allows additional performance improvements, like combining frames from both pipelines to run inferences in batches, doing higher-level filtering, or implementing quality-of-service (like time-outs). Not sure what your use-case is... what you are going to do with "all those bottles"... doing inventory, storing them in a database, a check-out table with payment etc. It gets tricky to do all of that in one huge gst-launch-1.0 command line.
If, however, your two pipelines and the processing of the two streams are totally independent from each other, then you might just call two gst-launch-1.0 pipelines:
$> gst-launch-1.0 <all-the-parameters-of-pipeline #1> &
$> gst-launch-1.0 <all-the-parameters-of-pipeline #2> &
(using "&" to put each command line into the background and start the next one in parallel)
As usual, start simple... Either use two independent command lines, or get one gst-launch-1.0 to read from two cameras and just render them... then add object detection...
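For that simple two-camera rendering test, one command line with two independent branches could look like this - a sketch, assuming your cameras are /dev/video4 and /dev/video10 (based on "camera 4" and "camera 10" above):
$> gst-launch-1.0 \
   v4l2src device=/dev/video4 ! videoconvert ! xvimagesink \
   v4l2src device=/dev/video10 ! videoconvert ! xvimagesink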
So the plan is to get 3 pipelines in parallel (3 different cameras); I am looking to use SceneScape to aggregate them all into a single scene.
When using the "&" to try to run both, I don't get the prompt back, as the console keeps printing "Redistribute latency... 0:00:02.9 / 99:99:99. 0:00:04.0 / 99:99:99." over the prompt.
Are the streams and what you are going to process for each stream really totally independent?
If you added the "&" ampersand to each command line, then you do get the prompt, but the console gets "flooded" with output... just keep the ENTER key pressed and you can briefly see the prompt... You might want to use another fresh console window... or redirect each command line's output to "/dev/null" or into a file...
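For example (the pipeline placeholders and the log file name are just illustrative):
$> gst-launch-1.0 <all-the-parameters-of-pipeline #1> > /dev/null 2>&1 &
$> gst-launch-1.0 <all-the-parameters-of-pipeline #2> > pipeline2.log 2>&1 &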
Performance-wise it has advantages to use one gstreamer command line (one process), to allow the plugins to re-use resources (doing inferences in batches, re-using inference instances, loading the model only once into memory)... or to use one application (Python, C++, using e.g. OpenCV with gstreamer, doing inference via the OpenVINO API, using multiple threads, queues, synchronization, etc.).
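If you want to try that with gst-launch-1.0 first, a sketch could look like this - assuming /dev/video4 and /dev/video10 as above, and $MODEL / $MODEL_PROC pointing to the yolov8s.xml and yolo-v8.json files from the pipeline above; "model-instance-id" makes both gvadetect elements share one inference instance, and "batch-size" lets that instance batch frames:
$> gst-launch-1.0 \
   v4l2src device=/dev/video4 ! videoconvert ! \
   gvadetect model=$MODEL model-proc=$MODEL_PROC model-instance-id=yolo batch-size=2 ! \
   queue ! gvafpscounter ! fakesink sync=false \
   v4l2src device=/dev/video10 ! videoconvert ! \
   gvadetect model=$MODEL model-proc=$MODEL_PROC model-instance-id=yolo batch-size=2 ! \
   queue ! gvafpscounter ! fakesink sync=false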
Hmm, looks like only one XV port is available at a time.
What exactly have you done? Attached to the running Docker container twice to get two consoles...?
Following your suggestion:
$> gst-launch-1.0 <all-the-parameters-of-pipeline #1> &
$> gst-launch-1.0 <all-the-parameters-of-pipeline #2> &
and piping the output of #1 to a file so I can get the prompt back and run the 2nd one.
Hmm, this is working for me, getting two windows with the test-pattern and a bouncing ball:
gst-launch-1.0 videotestsrc ! xvimagesink > /dev/null 2>&1 &
gst-launch-1.0 videotestsrc pattern=ball ! xvimagesink > /dev/null 2>&1 &
Are you "directly" using your machine or remotely using screen-forwarding?
What is this "> /dev/null 2>&1" after xvimagesink? I am connected to my machines through Remote Desktop.
This redirects all of gstreamer's output: stdout goes to /dev/null, and "2>&1" then sends stderr to the same place, so all output is omitted. (The order matters: the redirection to /dev/null has to come first.)
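A minimal plain-shell illustration of why the order matters:
$> ls /nonexistent > /dev/null 2>&1    # silent: stdout goes to /dev/null, then stderr follows it
$> ls /nonexistent 2>&1 > /dev/null    # error still prints: stderr was pointed at the terminal before stdout was redirected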
Do these two command lines work?
gst-launch-1.0 videotestsrc ! xvimagesink > /dev/null 2>&1 &
gst-launch-1.0 videotestsrc pattern=ball ! xvimagesink > /dev/null 2>&1 &
I don't know what you meant with "to use SceneScape to aggregate all into a single scene".
You could place the two videos side by side, for example like this (there are multiple different ways):
gst-launch-1.0 videotestsrc ! video/x-raw,width=160,height=120 ! m.sink_0 \
videotestsrc ! video/x-raw,width=160,height=120 ! m.sink_1 \
videomixer name=m sink_1::xpos=160 ! video/x-raw,width=320,height=120 ! videoconvert ! xvimagesink
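On newer GStreamer versions you can also use the compositor element (the successor of videomixer) for the same side-by-side idea - a sketch, untested here:
$> gst-launch-1.0 videotestsrc ! video/x-raw,width=160,height=120 ! m.sink_0 \
   videotestsrc pattern=ball ! video/x-raw,width=160,height=120 ! m.sink_1 \
   compositor name=m sink_1::xpos=160 ! video/x-raw,width=320,height=120 ! videoconvert ! xvimagesink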
This would also be a good point to implement an application, where you could control what and where to render: compositing, z-order, transparency, framerates, foreground, background...
You will find several ways to use gstreamer to create a "video wall".
Here is what I see when I ran the 2 commands you sent:
