
SSD Demo on Android and iOS

Xreki opened this issue on Nov 7, 2017 · 1 comment

We are planning to build an SSD (Single Shot MultiBox Detector) demo running on Android and iOS. PaddlePaddle has integrated the SSD algorithm and published an example demonstrating how to use the SSD model for object detection: https://github.com/PaddlePaddle/models/tree/develop/ssd.

Goals

To showcase PaddlePaddle's capabilities on mobile, we choose to run inference of an SSD model on Android and iOS, with the following goals:

  • Build a demo application that uses the device's camera to capture images and shows the detected objects to users.
  • Run fast enough to show results in real time.

Tasks

  • Training an SSD model based on MobileNet, with input images of size 224 x 224 (@NHZlX , 2017-11-13)
  • Back-end work (@Xreki)
    • Basic demo running on Linux to show how to use the SSD model with the C-API (https://github.com/Xreki/Mobile/tree/add_ssd_linux_demo/Demo/linux/ssd).
    • Providing a merged model
    • Wrapping the C-API in a class ImageRecognizer with three interfaces: init(), infer(), release()
    • Resizing the input image
    • Rotating the input image (supporting 0, 90, 180, and 270 degrees)
    • Converting the HWC layout to CHW
  • A mobile demo application to show at Baidu World on 2017-11-16 (@nickyfantasy )
    • iOS has high priority
    • Using the camera to capture images in real time
    • Showing the rectangle, category, and score of each detected object
    • Ready for testing on 2017-11-14
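The rotation and HWC-to-CHW preprocessing steps above can be sketched in plain Python (resizing is omitted). The function names and pure-Python pixel lists are illustrative, not part of the PaddlePaddle C-API; the actual demo would implement the same logic in C/C++:

```python
# Illustrative sketch of the input preprocessing (not PaddlePaddle API code).
# An HWC image is represented as a list of rows, each row a list of [R, G, B] pixels.

def rotate_hwc(pixels, height, width, degrees):
    """Rotate an HWC image clockwise by 0, 90, 180, or 270 degrees."""
    assert degrees in (0, 90, 180, 270)
    if degrees == 0:
        return pixels
    if degrees == 90:
        # new[c][r] = old[height - 1 - r][c]
        return [[pixels[height - 1 - r][c] for r in range(height)]
                for c in range(width)]
    if degrees == 180:
        return [list(reversed(row)) for row in reversed(pixels)]
    # 270 degrees: new[c][r] = old[r][width - 1 - c]
    return [[pixels[r][width - 1 - c] for r in range(height)]
            for c in range(width)]

def hwc_to_chw(pixels):
    """Flatten an HWC image into the CHW layout the model expects:
    all R values first, then all G, then all B."""
    channels = len(pixels[0][0])
    return [pix[ch] for ch in range(channels) for row in pixels for pix in row]
```

For a 2 x 2 image, `hwc_to_chw` produces the `[RRRR][GGGG][BBBB]` ordering described in the Details section below.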

Details

  • Input: pixels of a color image

    • Shape: 300 x 300 for the current VGG-based model (224 x 224 for the MobileNet-based model)
    • Data type: float
    • Storage format: CHW order, that is, [RRRRRR][GGGGGG][BBBBBB]
  • Output

    The inference output is of type paddle_matrix. The height of the matrix is the number of detected objects, and its width is fixed to 7.

    • row[i][0]: the index within the minibatch. In our case the minibatch size is fixed to 1, so row[i][0] is always 0.0.
    • row[i][1]: the label of the object. The label list is at https://github.com/PaddlePaddle/models/blob/develop/ssd/data/label_list. For example, if row[i][1] is 15, the detected object is a person.
    • row[i][2]: the confidence score of the detected rectangle and object.
    • row[i][3] - row[i][6]: (xmin, ymin, xmax, ymax), the relative coordinates of the rectangle.
    $ ./build/vgg_ssd_demo 
    I1107 06:36:18.600690 16092 Util.cpp:166] commandline:  --use_gpu=False 
    Prob: 7 x 7
    row 0: 0.000000 5.000000 0.010291 0.605270 0.749781 0.668338 0.848811 
    row 1: 0.000000 12.000000 0.530176 0.078279 0.640581 0.721344 0.995839 
    row 2: 0.000000 12.000000 0.017214 0.069217 0.000000 1.000000 0.972674 
    row 3: 0.000000 15.000000 0.998061 0.091996 0.000000 0.995694 1.000000 
    row 4: 0.000000 15.000000 0.040476 0.835338 0.014217 1.000000 0.446740 
    row 5: 0.000000 15.000000 0.010271 0.718238 0.006743 0.993035 0.659929 
    row 6: 0.000000 18.000000 0.012227 0.069217 0.000000 1.000000 0.972674 
    
  • Show

    The rectangle, category, and score of each detected object should be displayed correctly, as shown in the attached screenshot.
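Post-processing of the 7-column output described above can be sketched as follows. The `parse_detections` name and the 0.5 score threshold are illustrative choices, not part of the C-API; the sample rows are taken from the log output above:

```python
# Illustrative sketch: turn output rows [batch, label, score, xmin, ymin, xmax, ymax]
# into pixel-space boxes, dropping low-confidence detections.

def parse_detections(rows, img_w, img_h, score_threshold=0.5):
    detections = []
    for batch, label, score, xmin, ymin, xmax, ymax in rows:
        if score < score_threshold:
            continue
        detections.append({
            "label": int(label),
            "score": score,
            # relative coordinates -> pixel coordinates
            "box": (xmin * img_w, ymin * img_h, xmax * img_w, ymax * img_h),
        })
    return detections

# Rows 1, 3, and 4 from the sample log above.
rows = [
    [0.0, 12.0, 0.530176, 0.078279, 0.640581, 0.721344, 0.995839],
    [0.0, 15.0, 0.998061, 0.091996, 0.000000, 0.995694, 1.000000],
    [0.0, 15.0, 0.040476, 0.835338, 0.014217, 1.000000, 0.446740],
]
dets = parse_detections(rows, img_w=300, img_h=300)
# Two rows survive the 0.5 threshold; label 15 corresponds to "person"
# in the label list linked above.
```

The demo's drawing code would then render each `box` with its label and score on top of the camera preview.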

Reference

  1. TensorFlow: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
  2. Caffe2: https://github.com/bwasti/AICamera
  3. Caffe2: https://caffe2.ai/docs/mobile-integration.html

Xreki avatar Nov 07 '17 11:11 Xreki

How to link PaddlePaddle in an iOS application

  • Copy the PaddlePaddle library to your project root

  • Add the include directory to Header Search Paths

  • Add Accelerate.framework (or vecLib.framework) to your project if your PaddlePaddle is built with IOS_USE_VECLIB_FOR_BLAS=ON

  • Add the paddle libraries, libpaddle_capi_layers.a and libpaddle_capi_engine.a, and all of the third-party libraries to your project

  • Set -force_load for libpaddle_capi_layers.a
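In Xcode, the -force_load step is typically entered in the target's Other Linker Flags build setting. The path below is illustrative and depends on where the library was copied in the first step:

```text
-force_load $(PROJECT_DIR)/paddle/lib/libpaddle_capi_layers.a
```

Without -force_load, the linker may strip the layer registration symbols from the static library, since nothing references them directly.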

Xreki avatar Nov 07 '17 11:11 Xreki