linghusmile
### Search before asking
- [X] I have searched the [issues](https://github.com/PaddlePaddle/PaddleDetection/issues) and found no similar bug report.

### Bug Component
Deploy

### Describe the Bug
1.2 Of the two ways to obtain the prebuilt inference library...
### Before Asking
- [X] I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully.
- [ ] I want to train my custom dataset, and I have read the [tutorials for training...
Building the Paddle-Lite Android library with

```bash
./lite/tools/build_android.sh --arch=armv8 --toolchain=clang --with_cv=ON --with_extra=ON --with_opencl=ON --with_arm82_fp16=ON
```

fails with the following error:

```
In file included from /******/ndk-845-r18b/sources/cxx-stl/llvm-libc++/include/memory:659:
/******/ndk-845-r18b/sources/cxx-stl/llvm-libc++/include/limits:189:59: error: invalid operands to binary expression ('float' and 'int')
static _LIBCPP_CONSTEXPR const _Tp value = _Tp(_Tp(1)
```
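One plausible cause, not confirmed in this thread: the error comes from instantiating `std::numeric_limits` on the half-precision type pulled in by `--with_arm82_fp16=ON`, and the libc++ shipped with NDK r18b may simply be too old for it. A minimal sketch of the workaround, assuming Paddle-Lite's build script reads the NDK location from `NDK_ROOT` and that a newer NDK's libc++ handles the FP16 type (the r20b version and its path are illustrative):

```bash
# Assumption: pointing the build at a newer NDK avoids the numeric_limits
# error seen with ndk r18b; the install path below is hypothetical.
export NDK_ROOT=/opt/android-ndk-r20b
./lite/tools/build_android.sh --arch=armv8 --toolchain=clang \
  --with_cv=ON --with_extra=ON --with_opencl=ON --with_arm82_fp16=ON
```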
### Describe the Bug
While converting the picodet_m_416 model to a model.nb that can be deployed on a Qualcomm platform, I wanted to use OpenCL acceleration because running on the CPU alone was too slow. I built a Paddle-Lite inference library with OpenCL support using:

```bash
./lite/tools/build_android.sh --arch=armv8 --toolchain=gcc --with_cv=ON --with_extra=ON --with_opencl=ON
```

However, inference was still slow. The README pointed out that the model itself also needs to be converted, so I ran:

```bash
./opt --model_file=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdmodel \
      --param_file=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdiparams \
      --optimize_out=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model \
      --valid_targets=opencl
```

which produced the following output:

```
Loading topology data from /home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdmodel
Loading params data from /home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdiparams
Model is successfully loaded!
[W...
```
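As a side note, a minimal sketch of a commonly suggested variant of this conversion: listing `arm` after `opencl` in `--valid_targets` (my reading of Paddle-Lite's opt documentation, so treat it as an assumption) lets operators without OpenCL kernels fall back to CPU kernels instead of failing during optimization.

```bash
# Sketch only: same model paths as above; the opencl,arm fallback ordering is
# an assumption based on Paddle-Lite's documented opt usage.
./opt --model_file=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdmodel \
      --param_file=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdiparams \
      --optimize_out=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model \
      --valid_targets=opencl,arm
```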
I wanted to reproduce your results, so I downloaded the model from the link you provided and used the PicoDet M 416 model with Paddle-Lite at FP16 precision. On a Snapdragon 845 platform my total latency is 86 ms, and the inference-only portion (excluding pre- and post-processing) takes 66 ms, which is far from the numbers you reported. Could you tell me what acceleration method you used?
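For anyone hitting the same gap: one possibility (an assumption, not something confirmed in this thread) is that FP16 only pays off when both the library build and the converted model enable it. A minimal sketch under that assumption, using Paddle-Lite's documented `--with_arm82_fp16` build flag and the opt tool's `--enable_fp16` option; the `model_fp16` output name is hypothetical:

```bash
# 1) Build the library with armv8.2 FP16 kernels enabled (clang toolchain).
./lite/tools/build_android.sh --arch=armv8 --toolchain=clang \
  --with_cv=ON --with_extra=ON --with_arm82_fp16=ON

# 2) Convert the model with FP16 enabled so the FP16 kernels are actually
#    selected at runtime; paths reuse the ones from the report above.
./opt --model_file=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdmodel \
      --param_file=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model.pdiparams \
      --optimize_out=/home/xxxxx/PaddleDetection-release-2.7/output_inference/picodet_m_416_coco_lcnet/model_fp16 \
      --valid_targets=arm \
      --enable_fp16=true
```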