Embedded and Mobile Deployment
Mobile
This section describes how to deploy PaddlePaddle to mobile and embedded devices, along with deployment optimization methods and benchmarks.
Build PaddlePaddle
- Build PaddlePaddle for Android
- Build PaddlePaddle for iOS
- Build PaddlePaddle for Raspberry Pi 3
- Build PaddlePaddle for NVIDIA Driver PX2
Demo
- A command-line inference demo.
- iOS demo of PDCamera
Deployment optimization methods
Optimization for the library:
- How to build PaddlePaddle mobile inference library with minimum size.
Optimization for models:
- Merge batch normalization layers
- Compress the model based on rounding
- Merge model's config and parameters
- How to deploy int8 model in mobile inference with PaddlePaddle
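Merging batch-normalization layers, mentioned above, folds the BN scale and shift into the weights of the preceding convolution so the BN layer can be dropped at inference time. The sketch below illustrates the arithmetic with plain NumPy; the function name and array layout are illustrative assumptions, not part of PaddlePaddle's API.

```python
import numpy as np

def fold_batch_norm(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold a batch-norm layer into the preceding convolution's weights.

    conv_w: (out_ch, in_ch, kh, kw) convolution weights
    conv_b: (out_ch,) convolution bias (pass zeros if the conv had none)
    gamma, beta, mean, var: (out_ch,) batch-norm parameters
    """
    # BN computes scale * (y - mean) + beta per output channel; that
    # per-channel affine transform can be absorbed into conv weights/bias.
    scale = gamma / np.sqrt(var + eps)
    w_folded = conv_w * scale[:, None, None, None]
    b_folded = (conv_b - mean) * scale + beta
    return w_folded, b_folded
```

After folding, the network has one fewer layer per conv+BN pair and the same outputs, which reduces both model size and per-inference memory traffic on mobile devices.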
Model compression
- How to use pruning to train smaller model
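One common pruning scheme referred to by guides like the one above is magnitude pruning: weights with the smallest absolute values are zeroed, shrinking the effective model. A minimal NumPy sketch, assuming a simple one-shot threshold rather than PaddlePaddle's actual training-time pruning schedule:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)       # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

In practice pruning is applied gradually during training so the remaining weights can adapt, and the resulting sparse model is retrained to recover accuracy.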
PaddlePaddle mobile benchmark
- Benchmark of MobileNet
- Benchmark of ENet
- Benchmark of DepthwiseConvolution
This tutorial is contributed by PaddlePaddle and licensed under the Apache-2.0 license.