
Optimized Inference at the Edge with Intel® Tools and Technologies

This workshop walks you through the workflow of using Intel® Distribution of OpenVINO™ toolkit to run inference on deep learning models that accelerate vision, automatic speech recognition, natural language processing, recommendation systems, and many other applications. You will learn how to optimize models and improve performance with or without external accelerators, and how to use the included tools to identify the best hardware configuration for your needs. The workshop also outlines the frameworks and topologies supported by Intel® Distribution of OpenVINO™ toolkit.

:warning: The labs in this workshop have been validated with Intel® Distribution of OpenVINO™ toolkit 2021.3 (openvino_toolkit_2021.3.394). Some of the videos shown below are based on OpenVINO 2021.2 and may differ slightly from the slides, but the content is largely the same. The FPGA plugin is no longer supported in the standard OpenVINO release; you can find the FPGA content in earlier branches of this repository.

Workshop Agenda

  • Intel® Distribution of OpenVINO™ toolkit Overview

    :warning: Please make sure you have gone through all the steps in the Lab Setup; all the labs below assume that the OpenVINO toolkit is correctly installed on your local development system.

  • Model Optimizer

    • Lab1 - Optimize a Caffe* Classification Model - SqueezeNet v1.1
    • Lab2 - Optimize a TensorFlow* Object Detection Model - SSD with MobileNet (both conversions are sketched below)
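
    For orientation, a minimal sketch of the two conversions follows. It assumes a default Linux install of OpenVINO 2021.3 under /opt/intel/openvino_2021; the model file names (squeezenet1.1.caffemodel, deploy.prototxt, frozen_inference_graph.pb, pipeline.config) are placeholders for the files downloaded in the labs, which give the exact commands.

      # Hedged sketch; assumes a default Linux install of OpenVINO 2021.3.
      source /opt/intel/openvino_2021/bin/setupvars.sh
      MO=/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py

      # Lab1: convert the SqueezeNet v1.1 Caffe model to IR (file names are placeholders).
      python3 "$MO" --input_model squeezenet1.1.caffemodel \
                    --input_proto deploy.prototxt \
                    --data_type FP32 --output_dir ir

      # Lab2: TensorFlow Object Detection API models also need the pipeline config
      # and a transformations config shipped with the Model Optimizer.
      python3 "$MO" --input_model frozen_inference_graph.pb \
                    --tensorflow_object_detection_api_pipeline_config pipeline.config \
                    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
                    --data_type FP32 --output_dir ir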
  • Inference Engine

    • Lab3 - Run Classification Sample application with the optimized SqueezeNet v1.1
    • Lab4 - Run Object Detection Sample application with the optimized SSD with MobileNet
    • Lab5 - Run Benchmark App with the HETERO plugin (sketched below)
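
    As a taste of Labs 3 and 5, the sketch below classifies an image with the optimized SqueezeNet IR and then benchmarks it with the HETERO plugin (GPU first, CPU fallback). Paths assume the default Linux layout of OpenVINO 2021.3, and squeezenet1.1.xml / car.png stand in for the IR and test image used in the labs.

      # Hedged sketch; paths assume a default Linux install of OpenVINO 2021.3.
      source /opt/intel/openvino_2021/bin/setupvars.sh
      DT=/opt/intel/openvino_2021/deployment_tools

      # Lab3: classify an image with the optimized SqueezeNet IR (placeholder file names).
      python3 "$DT"/inference_engine/samples/python/classification_sample_async/classification_sample_async.py \
              -m squeezenet1.1.xml -i car.png -d CPU

      # Lab5: benchmark the same IR with the HETERO plugin, preferring the GPU and
      # falling back to the CPU for layers the GPU plugin cannot run.
      python3 "$DT"/tools/benchmark_tool/benchmark_app.py \
              -m squeezenet1.1.xml -d HETERO:GPU,CPU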
  • Accelerators based on Intel® Movidius™ Vision Processing Unit

    • Lab6 - HW Acceleration with Intel® Movidius™ Neural Compute Stick 2
    • Lab7 - Run Benchmark App with the MULTI plugin (sketched below)
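
    The same Benchmark App accepts other device strings, which is all Labs 6 and 7 change: MYRIAD targets a Neural Compute Stick 2, and MULTI spreads inference requests across several devices at once. A hedged sketch, reusing the placeholder IR from above:

      # Hedged sketch; assumes a default Linux install of OpenVINO 2021.3.
      source /opt/intel/openvino_2021/bin/setupvars.sh
      BENCH=/opt/intel/openvino_2021/deployment_tools/tools/benchmark_tool/benchmark_app.py

      # Lab6: run on an Intel® Neural Compute Stick 2 (MYRIAD device).
      python3 "$BENCH" -m squeezenet1.1.xml -d MYRIAD

      # Lab7: let the MULTI plugin schedule requests across the NCS2 and the CPU.
      python3 "$BENCH" -m squeezenet1.1.xml -d MULTI:MYRIAD,CPU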
  • Multiple Models in One Application

    • Lab8 - Run Security Barrier Demo Application (sketched below)
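
    Lab8 chains three models in one pipeline: vehicle and license-plate detection, vehicle-attribute recognition, and license-plate recognition. The sketch below shows the general shape of the demo invocation with pretrained Open Model Zoo models; the binary location depends on where you built the demos, so treat paths and file names as placeholders.

      # Hedged sketch; assumes the Open Model Zoo demos are built and the three
      # pretrained models have been fetched with the OMZ model downloader.
      ./security_barrier_camera_demo -i car_1.bmp \
          -m vehicle-license-plate-detection-barrier-0106.xml \
          -m_va vehicle-attributes-recognition-barrier-0039.xml \
          -m_lpr license-plate-recognition-barrier-0001.xml \
          -d CPU -d_va CPU -d_lpr CPU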
  • Deep Learning Workbench

  • Deep Learning Streamer

  • Intel® DevCloud for the Edge

Further Reading Materials

  • Support for Microsoft ONNX Runtime in OpenVINO
    • Slides - ONNX Runtime and OpenVINO

Disclaimer

Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.