# embedded-ai.bench

Benchmark for embedded-AI deep learning inference engines such as NCNN, TNN, MNN, and TensorFlow Lite.

17 embedded-ai.bench issues

Hi, I ran a benchmark with your repo. MNN under OpenGL (2 ms) is about 7x faster than under OpenCL (15 ms), so I wanted to print the tensor values to check whether this result is a bug. I used the following code to print the tensor, but after running nothing is printed. Do you know what might be wrong?

```cpp
// Warming up...
for (int i = 0; i < warmup; ++i) {
    input->copyFromHostTensor(givenTensor.get());
    net->runSession(session);
    outputTensor->copyToHostTensor(expectTensor.get());
}
for (int round = 0; round < loop; round++)...
```
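A minimal sketch of one way to inspect the output values after the loop above. It assumes `outputTensor` comes from `getSessionOutput()` and `expectTensor` is a host-side `MNN::Tensor`, as in the snippet; the `dumpTensor` helper is illustrative and not code from this repo.

```cpp
#include <cstdio>
#include <MNN/Tensor.hpp>

// Print the first few values of a host-side tensor, i.e. the tensor that was
// passed to copyToHostTensor(). host<T>() is only valid on host memory.
static void dumpTensor(const MNN::Tensor* hostTensor, int maxValues = 8) {
    const float* data = hostTensor->host<float>();
    const int count   = hostTensor->elementSize();
    for (int i = 0; i < count && i < maxValues; ++i) {
        printf("output[%d] = %f\n", i, data[i]);
    }
    fflush(stdout);  // make sure the output is actually flushed
}
```

Call it as `dumpTensor(expectTensor.get());` right after `outputTensor->copyToHostTensor(expectTensor.get());`. Note that if the benchmark runs inside an Android app rather than as a binary launched from `adb shell`, `printf` output does not reach logcat by default, which could explain why nothing appears.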

- commit id: 2020.9.16, master-00802a87a
- build targets: armv7-android-gpu build failed, armv8-android-gpu build succeeded
- test models: caffe_mobilenetv1, caffe_mobilenetv2, tf_mobilenetv1, tf_mobilenetv2
- devices (SoCs): 855 / 845 / 835 / 990 / 980 / 970

![perf-conv3x3s1-arm64-SDM855](https://media.githubusercontent.com/media/mapnn/mapnn/master/doc/perf-conv3x3s1-arm64-SDM855.jpg) ![perf-conv3x3s2-arm64-SDM855](https://media.githubusercontent.com/media/mapnn/mapnn/master/doc/perf-conv3x3s2-arm64-SDM855.jpg)

- [x] cpu
- [x] xnnpack
- [x] gpu
- [ ] snapdragon-dsp

# Reducing variance between runs on Android

Most modern Android phones use the ARM big.LITTLE architecture, where some cores are more power hungry but faster than the others. When running benchmarks...
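A common mitigation is to pin the benchmark thread to a fixed set of cores so the scheduler cannot migrate it between the big and LITTLE clusters mid-run. The sketch below uses `sched_setaffinity`; the `pinToCores` helper and the assumption that cores 4-7 form the big cluster (typical for a 4+4 SoC) are illustrative, not code from this repo.

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <cstdio>

// Restrict the calling thread to the given CPU indices.
static bool pinToCores(const int* cores, int numCores) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int i = 0; i < numCores; ++i) {
        CPU_SET(cores[i], &mask);
    }
    // pid 0 = calling thread; returns 0 on success.
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return false;
    }
    return true;
}

int main() {
    const int bigCores[] = {4, 5, 6, 7};  // assumption: big cluster on a 4+4 SoC
    if (!pinToCores(bigCores, 4)) {
        return 1;
    }
    // ... run warm-up and timed benchmark loops here ...
    return 0;
}
```

On many devices the same effect can be had from the shell with `adb shell taskset`, and locking CPU frequencies (where the device allows it) reduces run-to-run variance further.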

wontfix

https://github.com/huawei-noah/bolt

enhancement
new framework

https://github.com/PaddlePaddle/Paddle-Lite

enhancement
new framework