MACE does not recognize the PReLU activation when running MTCNN's PNet
Before you open an issue, please make sure you have tried the following steps:
- Make sure your environment matches the requirements at https://mace.readthedocs.io/en/latest/installation/env_requirement.html.
- Have you read the documentation for your use case?
- Check whether your issue appears in HOW-TO-DEBUG or the FAQ.
- The form below must be filled in.
System information
- OS Platform and Distribution (Linux Mint 19 Cinnamon):
- NDK version(r16b):
- GCC version(gcc 7.3.1):
- MACE version: v0.10.0-37-ga74002cb-20190127:
- Python version(3.6.5):
- Bazel version (0.13.0):
Model deploy file (*.yml)
library_name: mtcnn
target_abis: [arm64-v8a, armeabi-v7a]
model_graph_format: file
model_data_format: file
models:
  mtcnn_pnet:
    platform: caffe
    model_file_path: /idata/workspace/mace-models/det1.prototxt
    weight_file_path: /idata/workspace/mace-models/det1.caffemodel
    # sha256_checksum of your model's graph and data files.
    # get the sha256_checksum: sha256sum path/to/your/file
    model_sha256_checksum: 897f48ddfea3f6ae49e1ffa5e1d8db439e7fb44cdcc67bb05e94753064c7afd9
    weight_sha256_checksum: d6085e7f48ba7e6b6f1b58964595f6bce5b97bcc4866751f7b4bdc98f920c096
    # define your model's interface
    subgraphs:
      - input_tensors:
          - data
        input_shapes:
          - 1,12,12,3
        input_ranges:
          - -1.0,1.0
        output_tensors:
          - prob1
          - conv4-2
        output_shapes:
          - 1,1,1,1
          - 1,1,1,4
    runtime: gpu
    limit_opencl_kernel_time: 0
    nnlib_graph_mode: 0
    obfuscate: 0
    winograd: 0
Describe the problem
I tried converting MTCNN's PNet to MACE. The model conversion succeeded, but when I ran the benchmark it crashed with an error saying, roughly, that the runtime library does not recognize the PReLU activation type. Looking at the code, MACE does support PReLU. Is there something wrong in my model deployment yml, or is the problem in how I built the MACE runtime library?
To Reproduce
Steps to reproduce the problem:
1. cd /path/to/mace
2. python tools/converter.py benchmark --config_file=/path/to/your/model_deployment_file
Error information / logs
Please include the full log and/or traceback here.
WARNING: linker: /data/local/tmp/mace_run/benchmark_model_static: unused DT entry: type 0xf arg 0x67f
WARNING: linker: /data/local/tmp/mace_run/benchmark_model_static: unsupported flags DT_FLAGS_1=0x8000000
I benchmark_model.cc:199 Model name: [mtcnn_pnet]
I benchmark_model.cc:200 Model_file: /data/local/tmp/mace_run/mtcnn_pnet.pb
I benchmark_model.cc:201 Device: [GPU]
I benchmark_model.cc:202 gpu_perf_hint: [3]
I benchmark_model.cc:203 gpu_priority_hint: [3]
I benchmark_model.cc:204 omp_num_threads: [-1]
I benchmark_model.cc:205 cpu_affinity_policy: [1]
I benchmark_model.cc:206 Input node: [data]
I benchmark_model.cc:207 Input shapes: [1,12,12,3]
I benchmark_model.cc:208 Output node: [prob1,conv4-2]
I benchmark_model.cc:209 output shapes: [1,1,1,1:1,1,1,4]
I benchmark_model.cc:210 Warmup runs: [1]
I benchmark_model.cc:211 Num runs: [100]
I benchmark_model.cc:212 Max run time: [10.0]
I mace.cc:785 Create MaceEngine from model graph proto and weights data
I mace.cc:432 Creating MaceEngine, MACE version: v0.10.0-37-ga74002cb-20190127
I mace.cc:464 Initializing MaceEngine
F conv_2d_3x3.cc:118 Unknown activation type: 3
Aborted (core dumped)
Traceback (most recent call last):
  File "tools/converter.py", line 1278, in <module>
    flags.func(flags)
  File "tools/converter.py", line 1067, in benchmark_model
    device.bm_specific_target(flags, configs, target_abi)
  File "/idata/workspace/mace/tools/device.py", line 943, in bm_specific_target
    link_dynamic=link_dynamic
  File "/idata/workspace/mace/tools/device.py", line 851, in benchmark_model
    _fg=True)
  File "/home/df/anaconda3/lib/python3.6/site-packages/sh.py", line 1413, in __call__
    raise exc
sh.ErrorReturnCode_134:

  RAN: /usr/bin/adb -s d29a4ba7 shell sh /data/local/tmp/mace_run/cmd_file-mtcnn_pnet-1548678856.661709
STDOUT:
STDERR:
Additional context
Add any other context about the problem here, e.g., what you have modified about the code.
I changed output_shapes: - 1,1,1,1 - 1,1,1,4 to output_shapes: - 1,1,1,2 - 1,1,1,4; I'm not sure whether that first shape was the mistake.
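The corrected shapes can be sanity-checked by hand. A small sketch, assuming the standard MTCNN PNet geometry (conv1 3x3, pool1 2x2 stride 2, conv2 3x3, conv3 3x3, then two 1x1 heads): prob1 is a 2-class face/non-face softmax and conv4-2 regresses 4 bounding-box offsets, so for a 1,12,12,3 input the outputs should indeed be 1,1,1,2 and 1,1,1,4.

```python
def conv_out(size: int, kernel: int, stride: int = 1) -> int:
    """Spatial output size of a VALID (unpadded) conv or pool layer."""
    return (size - kernel) // stride + 1

s = 12
s = conv_out(s, 3)      # conv1: 12 -> 10
s = conv_out(s, 2, 2)   # pool1: 10 -> 5
s = conv_out(s, 3)      # conv2: 5 -> 3
s = conv_out(s, 3)      # conv3: 3 -> 1 (the 1x1 heads keep it at 1)

prob1_shape = [1, s, s, 2]    # face / non-face softmax
conv4_2_shape = [1, s, s, 4]  # bounding-box regression
print(prob1_shape, conv4_2_shape)
```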
Which version of MACE are you using?
@nolanliou v0.10.0-37. It is a fully convolutional network; I don't know whether MACE supports that.
@kuaikuaikim Would it be convenient for you to provide the model files?
@nolanliou Of course. I tested the standard MTCNN PNet here: MTCNN Model
I checked the code: the converter fuses the activation operator into the convolution, but the fused activations do not include PReLU, while the standalone activation operator does support PReLU. Maybe you could update the code to support PReLU in the fused path as well.
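The fix described above amounts to a guard in the graph optimizer: only fold an activation into the preceding convolution when the fused conv kernel actually supports it, and otherwise leave it as a standalone activation op. A minimal sketch of that decision, with hypothetical names (`CONV_FUSABLE_ACTIVATIONS`, `should_fuse_activation` are illustrative, not MACE's actual transformer code):

```python
# Activations the fused conv path handles -- an assumed set for illustration.
CONV_FUSABLE_ACTIVATIONS = {"RELU", "RELUX", "TANH", "SIGMOID"}

def should_fuse_activation(op_type: str, activation: str) -> bool:
    """Fold the activation into the conv only if the fused kernel supports it;
    unsupported ones (e.g. PRELU) stay as standalone activation operators."""
    return op_type == "Conv2D" and activation.upper() in CONV_FUSABLE_ACTIVATIONS

print(should_fuse_activation("Conv2D", "RELU"))   # True: fold into conv
print(should_fuse_activation("Conv2D", "PRELU"))  # False: keep standalone
```

With such a guard in place, PReLU would reach the runtime as a standalone activation op, which the error log shows is the path the runtime does support.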
Hi @kuaikuaikim, have you solved this problem?