Leaky_relu -> tf.maximum(x, tf.multiply(0.1, x))
As presented in #165, I tried to implement leaky_relu as in the title:
x = tf.maximum(x, tf.multiply(0.1, x))
The conversion to TFLite works fine. In the resulting model, I can clearly see quantize ops BEFORE the Maximum operation.
When trying to compile with Edge TPU Compiler version 14.1.317412892, I get a simple error:
Internal compiler error. Aborting!
Is it because of the quantize ops? Maybe I cannot use 0.1 as a constant in the multiply?
Can you provide a complete, reproducible example?
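For illustration, here is a minimal sketch of what such a reproduction might look like; the toy Conv2D model, the random calibration data, and the file name are assumptions, since the actual 8 MB model is not attached:

import numpy as np
import tensorflow as tf

def leaky_relu(x):
    # The activation from the issue: max(x, 0.1 * x).
    return tf.maximum(x, tf.multiply(0.1, x))

# Hypothetical toy model standing in for the real one.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(8, 3, padding="same")(inputs)
outputs = tf.keras.layers.Lambda(leaky_relu)(x)
model = tf.keras.Model(inputs, outputs)

def representative_dataset():
    # Random calibration data; real samples would be used in practice.
    for _ in range(16):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("leaky_relu_test.tflite", "wb") as f:
    f.write(converter.convert())

# Then, on the command line: edgetpu_compiler leaky_relu_test.tflite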
I have no idea how the compiler works internally, but in my experience it does not like quantize ops. You should try to convert so that no such op appears in the resulting TFLite model. However, seeing that your TFLite model is 8 MB, I suspect the model size is another thing the compiler will not like. Most of my models end up at a maximum of about 0.6 MB, and beyond that the compiler refuses to compile them because it is unable to lay out the intermediate tensors during computation. The limit, however, depends greatly on the model. So I would recommend starting with very small models and slowly scaling them up until you hit the limits; a sketch of that approach follows below.
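A sketch of that scale-up approach, assuming progressively wider Conv2D stacks and the edgetpu_compiler CLI on PATH (the widths and file names are made up):

import subprocess
import numpy as np
import tensorflow as tf

def build_model(width):
    # A plain stack of convolutions; width controls the parameter count.
    inputs = tf.keras.Input(shape=(64, 64, 3))
    x = inputs
    for _ in range(4):
        x = tf.keras.layers.Conv2D(width, 3, padding="same", activation="relu")(x)
    return tf.keras.Model(inputs, x)

def convert(model):
    def rep_data():
        for _ in range(8):
            yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = rep_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    return converter.convert()

for width in (8, 16, 32, 64, 128):
    path = f"test_w{width}.tflite"
    with open(path, "wb") as f:
        f.write(convert(build_model(width)))
    # edgetpu_compiler exits non-zero when it cannot compile the model.
    result = subprocess.run(["edgetpu_compiler", path])
    print(width, "compiled" if result.returncode == 0 else "failed")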
@ppershing it isn't just the model size; the input size of each layer also matters. It's an issue with the limited RAM on the TPU, unfortunately.
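To make that point concrete, per-layer activation sizes can be listed with the TFLite interpreter; a sketch, assuming the quantized model file from above:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="leaky_relu_test.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    shape = detail["shape"]
    # For an int8-quantized tensor, roughly one byte per element.
    size_bytes = int(np.prod(shape)) if len(shape) else 0
    print(detail["name"], list(shape), f"~{size_bytes} B")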
Hi @Namburger, I am currently exploring YOLO on the Edge TPU, and I have read the Q&A regarding the YOLO issues on the Edge TPU. One thing I don't understand is whether the SRAM on the TPU is used only for storing model parameters, or for storing both the input feature maps and the model parameters. According to the description on Coral's website, if I understand correctly, the SRAM is only used for storing model parameters, and the input feature map of each layer is read from DRAM. In that case, I don't understand why the input size of each layer also matters.
Closing the issue, as the model compiles with the latest compiler. Thanks!