embarc_mli
Do you have a plan to optimize the leaky_relu and tanh ops for TFLM?
We are using a Himax board to run our custom model, which uses leaky_relu and tanh ops, on an ARC processor. Currently we are running on TFLM with the C reference code, and inference takes a lot of cycles, so could you please accelerate these ops in TFLM?
We are in the process of enabling the kernel below for TFLM on the HiMax WE1 board, as one of our NN models uses the Tanh kernel heavily, so we would like to accelerate Tanh on ARC processors:

mli_status mli_krn_tanh_fx8(const mli_tensor * in, mli_tensor * out);

Do you plan to support this kernel for TFLM? If not, please give us guidance on accelerating this kernel in TFLM for the ARC EM9 processor (HiMax WE1 board).
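For reference, here is a minimal sketch of what calling this kernel directly could look like. It assumes the embARC MLI 1.x mli_tensor field layout (data, capacity, shape, rank, el_type, el_params.fx.frac_bits) and is not the actual TFLM ARC kernel wrapper, just an illustration; please check the field names against your MLI headers.

```c
#include <stdint.h>
#include "mli_api.h"  // embARC MLI API header

// Sketch only: run mli_krn_tanh_fx8 on a flat int8 buffer.
static mli_status run_tanh_fx8(const int8_t* in_data, int8_t* out_data,
                               uint32_t len, uint8_t in_frac_bits) {
  mli_tensor in = {0};
  in.data = (void*)in_data;
  in.capacity = len * sizeof(int8_t);
  in.shape[0] = len;
  in.rank = 1;
  in.el_type = MLI_EL_FX_8;
  in.el_params.fx.frac_bits = in_frac_bits;  // Qm.n format of the input

  mli_tensor out = {0};
  out.data = out_data;
  out.capacity = len * sizeof(int8_t);
  // The kernel is expected to fill in the output shape/rank/el_params;
  // tanh output should come back in Q0.7 since its range is [-1, 1].
  return mli_krn_tanh_fx8(&in, &out);
}
```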
@mfarag13, @JaccovG, @Hakim7267 Please comment on this. I have actually ported mli_krn_leaky_relu_fx8(const mli_tensor * in, mli_tensor * slope_coeffs, mli_tensor * out) to TFLM; however, it is not producing the expected result. If you are open to providing feedback, I can share the patch that adds leaky_relu acceleration for the ARC processor in TFLM.
Hi, feel free to share the patch; I can review it.
@JaccovG Please refer to the attached patch along with the changed leaky_relu files: https://drive.google.com/drive/folders/1lzRuglfxr4QXm_H2NRj3bwYyux_ZL42t?usp=sharing
Here is the output of one test from the TFLM leaky_relu_test.cc:
Entering to prepare of is_mli_applicable params->alpha 1.0*2^-1
fixed 8: 0x40
tensor->data.int8:0x40
Exiting to prepare of is_mli_applicable
Inside LeakyReluEval params->alpha:1.0*2^-1
Converted to Q7 fixed point tensor->data.int8:0x40
Entering EvalMLI
params->alpha:1.0*2^-1 mli tensor of slope coeffs Q7 fixed point :0x40 res:0
Exiting EvalMLI
expected_data[i] (1.0*2^0) near output_data[i] (1.5999993*2^1) failed at examples/kernel_add_test/add_test.cc:103
expected_data[i] (1.4999999*2^1) near output_data[i] (1.1499994*2^3) failed at examples/kernel_add_test/add_test.cc:103
expected_data[i] (1.0*2^0) near output_data[i] (1.5999993*2^-3) failed at examples/kernel_add_test/add_test.cc:103
expected_data[i] (-1.0*2^-1) near output_data[i] (1.5999993*2^1) failed at examples/kernel_add_test/add_test.cc:103
expected_data[i] (-1.0*2^0) near output_data[i] (1.0*2^-127) failed at examples/kernel_add_test/add_test.cc:103
Testing QuantizedActivationsOpTestLeakyReluInt8_2
@JaccovG Could you please review the patch and let us know your thoughts?
@JaccovG Gentle reminder!
I'm not able to access the Google Drive folder. Could you share it as a GitHub commit or as a PR?
@JaccovG https://github.com/usefulsensors/for_synopsys_review/blob/main/0001-UsefulSensors-Leanky_relu-optimization-for-ARC.patch Please review leaky_relu.cc, leaky_relu_common.cc and leaky_relu.h: https://github.com/usefulsensors/for_synopsys_review
@JaccovG Gentle reminder!
@JaccovG Gentle reminder!
Sorry for my late reply, I was very busy. I had a look at your code, and when you set the slope tensor, you force the exponent to 7: https://github.com/usefulsensors/for_synopsys_review/blob/main/leaky_relu.cc#L91 I don't know the reason for setting it to 7, but maybe the problem is related to how the slope tensor is constructed.
@JaccovG I have created the slope tensor in Q7 format; if that is wrong, could you please suggest the correct implementation?
I couldn't quickly find how you did the conversion. It is fine to use Q7 format as long as you shift the mantissa to match the exponent of 7. So what you need to check is whether the slope value is correctly converted to the fixed-point value.
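For what it's worth, a hedged sketch of the conversion being described (float_to_q7 is a hypothetical helper, not part of MLI or TFLM): with the exponent (frac_bits) forced to 7, the stored mantissa has to be round(alpha * 2^7), saturated to the int8 range. For alpha = 0.5 that gives 64 (0x40), which matches the log output earlier in the thread.

```c
#include <math.h>
#include <stdint.h>

// Hypothetical helper: convert a float slope value to Q0.7 (frac_bits == 7).
static int8_t float_to_q7(float value) {
  int32_t fixed = (int32_t)lroundf(value * (float)(1 << 7));  // scale by 2^7
  if (fixed > INT8_MAX) fixed = INT8_MAX;  // saturate to the int8 range
  if (fixed < INT8_MIN) fixed = INT8_MIN;
  return (int8_t)fixed;
}
```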
@JaccovG Please refer here for the conversion part:
https://github.com/usefulsensors/for_synopsys_review/blob/main/0001-UsefulSensors-Leanky_relu-optimization-for-ARC.patch#L983
@JaccovG Also, please refer to the code below for constructing the slope_tensor: https://github.com/usefulsensors/for_synopsys_review/blob/main/0001-UsefulSensors-Leanky_relu-optimization-for-ARC.patch#L998
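For comparison, here is a hedged sketch of how a single-element slope tensor could be built, again assuming the embARC MLI 1.x mli_tensor field names; it mirrors the approach described in this thread rather than the exact patch contents.

```c
#include <math.h>
#include <stdint.h>
#include "mli_api.h"

static int8_t slope_q7;          // backing storage for the Q7 slope value
static mli_tensor slope_tensor;  // single-element fx8 tensor

// Sketch only: build the slope tensor for mli_krn_leaky_relu_fx8.
static void init_slope_tensor(float alpha) {
  // Same conversion as the float_to_q7 sketch above (saturation omitted here).
  slope_q7 = (int8_t)lroundf(alpha * 128.0f);  // e.g. 0.5f -> 0x40
  slope_tensor.data = &slope_q7;
  slope_tensor.capacity = sizeof(slope_q7);
  slope_tensor.shape[0] = 1;
  slope_tensor.rank = 1;
  slope_tensor.el_type = MLI_EL_FX_8;
  slope_tensor.el_params.fx.frac_bits = 7;  // exponent of 7, i.e. Q0.7
}
```

The tensor would then be passed as the second argument to mli_krn_leaky_relu_fx8, alongside the input and output tensors.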