
GATHER warnings raised when running a tflite model

liamsun2019 opened this issue 2 years ago

Hi author, I am running a tflite model with CpuAcc as the backend on a Cortex-A55. The following warnings are output:

WARNING: TRANSPOSE: not supported by armnn: in validate_arguments src/cpu/kernels/CpuPermuteKernel.cpp:113: Objects have different dimensions
WARNING: GATHER: not supported by armnn: in validate_arguments src/core/NEON/kernels/NEGatherKernel.cpp:60: input->num_dimensions() > 4

Such warnings disappear when I switch to the C++ parser mode. Meanwhile, the inference times for parser and delegate modes are 170 ms vs. 220 ms per frame, i.e. parser mode is faster than delegate mode.
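For clarity, by "parser mode" I mean loading the .tflite through armnnTfLiteParser and running it on the Arm NN runtime directly, roughly like this (a simplified sketch; the model path is a placeholder, and input/output binding is elided):

```cpp
// Simplified sketch of "parser mode": armnnTfLiteParser + Arm NN runtime.
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

int main()
{
    // Parse the .tflite file straight into an Arm NN network.
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Optimize the whole graph for CpuAcc; no tflite runtime is involved.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, { armnn::Compute::CpuAcc }, runtime->GetDeviceSpec());

    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    // ... bind input/output tensors and call runtime->EnqueueWorkload(...).
    return 0;
}
```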

My questions are: a. In delegate mode, what happens if the gather op does not support input with dim > 4? b. As far as this use case is concerned, why is parser mode so much faster than delegate mode? For common cases, delegate mode is supposed to be faster than parser mode, right?

Thanks for your time.

liamsun2019 avatar Jul 11 '22 07:07 liamsun2019

Hi @liamsun2019,

Presuming you are running on CpuAcc in both cases, I would have thought src/core/NEON/kernels/NEGatherKernel.cpp:60 would output warnings in both parser and delegate cases if Gather num_dimensions > 4.

  1. For delegate, if CpuAcc is being used and num_dimensions > 4, I believe Arm NN will try to fall back to CpuRef (if enabled) and then to the tflite runtime for that operator. If fallback is not supported in the latter either, an informative error should be thrown telling the user, and the program will exit without the model being run. I believe that in Arm NN the supported number of operator dimensions depends on the backend rather than on parser vs. delegate; see the sketch after this list.
  2. Without knowing a huge amount about the model, I am making guesses here. From the warnings received when running the delegate, it seems there has been a fair amount of fallback to CpuRef and/or the tflite runtime, which is going to incur some time cost. In the case of the parser there is no indication of fallback (surprising if you are running CpuAcc in both cases), which means the parser never incurs any fallback time cost (in parser mode the only possible fallback is to CpuRef; there is no tflite runtime to fall back on).
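A sketch of that backend ordering with the delegate (assuming the armnnDelegate C++ API; the model path is a placeholder): listing CpuRef after CpuAcc lets Arm NN re-assign rejected operators before anything is handed back to the tflite runtime.

```cpp
// Sketch: Arm NN TFLite delegate with CpuRef listed as a fallback backend.
#include <memory>
#include <vector>

#include <armnn_delegate.hpp>
#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <tensorflow/lite/model.h>

int main()
{
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Preference order: CpuAcc first, then CpuRef for operators CpuAcc
    // rejects (e.g. Gather with num_dimensions() > 4). Operators neither
    // backend supports stay on the tflite runtime.
    std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc,
                                               armnn::Compute::CpuRef };
    armnnDelegate::DelegateOptions delegateOptions(backends);
    TfLiteDelegate* delegate = armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions);
    interpreter->ModifyGraphWithDelegate(delegate);

    interpreter->AllocateTensors();
    interpreter->Invoke();

    interpreter.reset();  // destroy the interpreter before the delegate
    armnnDelegate::TfLiteArmnnDelegateDelete(delegate);
    return 0;
}
```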

Kind Regards, Cathal.

catcor01 avatar Jul 12 '22 12:07 catcor01

Hi @catcor01, big thanks for your comment. The board on which I conduct my tests is supposed to have no Mali GPU; it uses Imagination instead.

a. Both cases (parser and delegate) show the following warning messages, as expected:

Can't load libOpenCL.so: libOpenCL.so: cannot open shared object file: No such file or directory
Can't load libGLES_mali.so: libGLES_mali.so: cannot open shared object file: No such file or directory
Can't load libmali.so: libmali.so: cannot open shared object file: No such file or directory
Couldn't find any OpenCL library.

I think that explains why CpuAcc is applied.

b. The gather op warnings only exist in the delegate case. I think your analysis makes sense, since the fallback incurs some overhead.

c. Please refer to the attachment for the model details.

BTW, is there any explicit way to set multithreading (e.g., threads=4) for inference? I think it's implemented internally in the Compute Library, and I just want to try multithreading to check the performance. test.zip

liamsun2019 avatar Jul 13 '22 00:07 liamsun2019

Hello @liamsun2019,

If you are using ExecuteNetwork, multi-threading can be controlled with the --number-of-threads option (adjust the count as needed), like so:

./ExecuteNetwork -m u8l.tflite -f tflite-binary --tflite-executor delegate -c CpuAcc -i X.1 -o 2180 --number-of-threads 1 --iterations 10
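If you are calling Arm NN from your own code rather than through ExecuteNetwork, I believe the CpuAcc thread count can also be requested via the "NumberOfThreads" backend option. A sketch (the DelegateOptions overload that accepts OptimizerOptions is assumed from recent Arm NN releases):

```cpp
// Sketch: requesting 4 CpuAcc threads via the "NumberOfThreads" backend option.
#include <vector>

#include <armnn/BackendOptions.hpp>
#include <armnn/INetwork.hpp>
#include <armnn_delegate.hpp>

armnnDelegate::DelegateOptions MakeDelegateOptions()
{
    // A value of 0 would let the Compute Library choose the count itself.
    armnn::BackendOptions cpuAcc("CpuAcc",
                                 { { "NumberOfThreads", static_cast<unsigned int>(4) } });

    armnn::OptimizerOptions optimizerOptions;
    optimizerOptions.m_ModelOptions.push_back(cpuAcc);

    // The same OptimizerOptions can be passed to armnn::Optimize(...) in
    // parser mode, or to the delegate as below.
    std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc };
    return armnnDelegate::DelegateOptions(backends, optimizerOptions);
}
```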

Also see #636.

Kind Regards, Cathal.

catcor01 avatar Jul 19 '22 15:07 catcor01

Hi @liamsun2019, has enough information been provided in this ticket? I will close this at the end of the week if I do not hear back from you. Thank you very much.

Keith

keidav01 avatar Sep 20 '22 14:09 keidav01

Hi @keidav01 ,

I have not yet conducted further tests on this issue. I think you can just close it, and I will give feedback ASAP. Thanks for your help.

Liam

liamsun2019 avatar Sep 21 '22 00:09 liamsun2019

Sure, thank you @liamsun2019. Closing.

keidav01 avatar Sep 21 '22 08:09 keidav01