OnnxRuntime Plugin
Overview
A Flutter plugin for OnnxRuntime via dart:ffi that provides an easy, flexible, and fast Dart API to integrate ONNX models in Flutter apps across mobile and desktop platforms.
| Platform | Android | iOS | Linux | macOS | Windows |
|---|---|---|---|---|---|
| Compatibility | API level 21+ | * | * | * | * |
| Architecture | arm32/arm64 | * | * | * | * |
Key Features
- Multi-platform support for Android, iOS, Linux, macOS, Windows, and Web (coming soon).
- Flexibility to use any ONNX model.
- Acceleration using multi-threading.
- Similar structure to the OnnxRuntime Java and C# APIs.
- Inference is no slower than in native Android/iOS apps built with the Java/Objective-C APIs.
- Runs inference in separate isolates to prevent jank on the UI thread (see the sketch after this list).
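As a quick illustration of the last point, OrtSession exposes both a blocking run and a non-blocking runAsync (the latter is used in the Getting Started example below). A minimal sketch, assuming a session and an input tensor already exist; runBothWays and the input name 'input' are placeholders for illustration:

```dart
import 'package:onnxruntime/onnxruntime.dart';

// Contrasts the two entry points; releasing tensors and outputs is shown
// in the full example below.
Future<void> runBothWays(OrtSession session, OrtValueTensor inputOrt) async {
  final runOptions = OrtRunOptions();

  // Blocking call: fine inside a background isolate, but it can jank the
  // UI thread when called from there.
  final syncOutputs = session.run(runOptions, {'input': inputOrt});

  // Non-blocking call: keeps the UI thread responsive.
  final asyncOutputs = await session.runAsync(runOptions, {'input': inputOrt});

  runOptions.release();
}
```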
Getting Started
In your Flutter project, add the dependency:
```yaml
dependencies:
  ...
  onnxruntime: x.y.z
```
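If you ship the model as a Flutter asset (as in the example below, which loads assets/models/test.onnx via rootBundle), it also has to be declared under the assets section of pubspec.yaml. A minimal sketch, assuming that file path:

```yaml
flutter:
  assets:
    - assets/models/test.onnx
```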
Usage example
Import
```dart
import 'package:onnxruntime/onnxruntime.dart';
```
Initializing environment
```dart
OrtEnv.instance.init();
```
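The environment only needs to be initialized once, before any session is created. One possible place is main(); a minimal sketch, where MyApp is a placeholder for your app's root widget:

```dart
import 'package:flutter/material.dart';
import 'package:onnxruntime/onnxruntime.dart';

void main() {
  // Initialize the ONNX Runtime environment once, before any OrtSession is created.
  OrtEnv.instance.init();
  runApp(const MyApp()); // MyApp is a placeholder for your root widget
}
```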
Creating the Session
```dart
// rootBundle comes from package:flutter/services.dart.
final sessionOptions = OrtSessionOptions();
const assetFileName = 'assets/models/test.onnx';
final rawAssetFile = await rootBundle.load(assetFileName);
final bytes = rawAssetFile.buffer.asUint8List();
final session = OrtSession.fromBuffer(bytes, sessionOptions);
```
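These steps can be wrapped in a small helper if you load several models; a minimal sketch using only the calls shown above (the name loadSessionFromAsset is just for illustration):

```dart
import 'package:flutter/services.dart' show rootBundle;
import 'package:onnxruntime/onnxruntime.dart';

/// Loads an ONNX model bundled as a Flutter asset and creates a session for it.
Future<OrtSession> loadSessionFromAsset(String assetPath) async {
  final rawAssetFile = await rootBundle.load(assetPath);
  final bytes = rawAssetFile.buffer.asUint8List();
  return OrtSession.fromBuffer(bytes, OrtSessionOptions());
}
```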
Performing inference
```dart
// Float32List comes from dart:typed_data. The tensor data must match the
// shape: 1 * 2 * 3 = 6 elements.
final shape = [1, 2, 3];
final data = Float32List.fromList([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
final inputOrt = OrtValueTensor.createTensorWithDataList(data, shape);
// 'input' must match the input name defined in the model.
final inputs = {'input': inputOrt};
final runOptions = OrtRunOptions();
final outputs = await session.runAsync(runOptions, inputs);
// Release native resources once the outputs have been consumed.
inputOrt.release();
runOptions.release();
outputs?.forEach((element) {
  element?.release();
});
```
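Because the tensors and run options own native memory, it can be worth guarding the release calls with try/finally so they run even when inference throws. A minimal sketch using only the calls above (runGuarded is a hypothetical name, and the caller is still responsible for releasing the returned outputs):

```dart
import 'package:onnxruntime/onnxruntime.dart';

Future<List<OrtValue?>?> runGuarded(
    OrtSession session, OrtValueTensor inputOrt) async {
  final runOptions = OrtRunOptions();
  try {
    return await session.runAsync(runOptions, {'input': inputOrt});
  } finally {
    // Always free the native handles, even if runAsync throws.
    inputOrt.release();
    runOptions.release();
  }
}
```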
Releasing environment
```dart
OrtEnv.instance.release();
```
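In a Flutter app this typically happens when the screen (or app) that owns the session goes away; a sketch assuming the session is stored in a _session field and, like the other objects above, exposes a release() method:

```dart
@override
void dispose() {
  // Assumes _session is the OrtSession field owned by this State.
  _session?.release();
  OrtEnv.instance.release();
  super.dispose();
}
```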