Intel® Deep Learning Streamer (Intel® DL Streamer)
Overview

Intel® Deep Learning Streamer (Intel® DL Streamer) is an open-source streaming media analytics framework, based on GStreamer* multimedia framework, for creating complex media analytics pipelines for the Cloud or at the Edge.
Media analytics is the analysis of audio and video streams to detect, classify, track, identify and count objects, events and people. The analyzed results can be used to take actions, coordinate events, identify patterns and gain insights across multiple domains: retail store and events facilities analytics, warehouse and parking management, industrial inspection, safety and regulatory compliance, security monitoring, and many others.
Backend libraries
Intel DL Streamer is optimized for performance and functional interoperability between GStreamer* plugins built on various backend libraries:
- Inference plugins use OpenVINO™ inference engine optimized for Intel CPU, GPU and VPU platforms
- Video decode and encode plugins utilize GPU-acceleration based on VA-API
- Image processing plugins based on OpenCV and DPC++
- Hundreds of other GStreamer* plugins built on various open-source libraries for media input and output, muxing and demuxing, decoding and encoding
This page contains a list of the Intel DL Streamer elements provided in this repository.
Installation
Please refer to the Install Guide for installation options:
- Install APT packages
- Run Docker image
- Compile from source code
- Build Docker image from source code
Samples
Samples are available for C/C++ and Python programming, and as gst-launch command lines and scripts.
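As an illustration, a simple detection pipeline can be expressed as a single gst-launch command line. The sketch below is a hypothetical example, not one of the shipped samples: the model path and input file are placeholders you would replace with your own, while `gvadetect` and `gvawatermark` are Intel DL Streamer inference and overlay elements.

```sh
# Hypothetical pipeline sketch: decode a video file, run object detection
# on CPU, draw bounding boxes, and render the result.
# "input.mp4" and the model .xml path are placeholders.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=/path/to/detection-model.xml device=CPU ! \
  gvawatermark ! videoconvert ! autovideosink
```

Running such a pipeline requires GStreamer and Intel DL Streamer to be installed and their plugin paths configured, as described in the Install Guide.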
NN models
Intel DL Streamer supports NN models in OpenVINO™ IR and ONNX* formats:
- Refer to the OpenVINO™ Model Optimizer documentation for how to convert a model into the OpenVINO™ IR format
- Refer to your training framework's documentation for how to export a model into the ONNX* format
Alternatively, you can start with one of over 70 pre-trained models in the OpenVINO™ Open Model Zoo and the corresponding model-proc files (pre- and post-processing specifications) in the /opt/intel/dlstreamer/samples/model_proc folder. These models cover object detection, object classification, human pose estimation, sound classification, semantic segmentation, and other use cases, built on SSD, MobileNet, YOLO, Tiny YOLO, EfficientDet, ResNet, Faster R-CNN, and other backbones.
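For reference, a model-proc file is a small JSON document that tells the inference elements how to pre-process input tensors and interpret output tensors. The fragment below is a minimal sketch of a classification post-processing spec; the schema version and the label list are illustrative placeholders, so consult the model-proc files shipped in the samples folder for the exact fields your release expects.

```json
{
  "json_schema_version": "2.2.0",
  "input_preproc": [],
  "output_postproc": [
    {
      "converter": "label",
      "method": "max",
      "labels": ["placeholder-class-0", "placeholder-class-1"]
    }
  ]
}
```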
Reporting Bugs and Feature Requests
Report bugs and feature requests on the issues page.
Other Useful Links
- Webinars:
  - Introduction to Intel DL Streamer: Ready, Steady, Stream: Introducing Intel® Distribution of OpenVINO™ toolkit Deep Learning Streamer
  - Audio event detection synchronized with video-based object detection using Intel DL Streamer: AI Beyond Computer Vision with the Intel® Distribution of OpenVINO™ toolkit
- YouTube Videos
- Reference media analytics applications, provided by Open Visual Cloud, that leverage Intel DL Streamer elements
- Try Intel DL Streamer with Intel® DevCloud:
  - Build, test, and optimize your pipeline for free. With an Intel® DevCloud account, you get 120 days of access to the latest Intel® hardware: CPUs, GPUs, VPUs.
  - No software downloads. No configuration steps. No installations. Check out the Tutorials on Intel® DevCloud.
- Intel® Edge Software Hub packages that include Intel DL Streamer
* Other names and brands may be claimed as the property of others.