lightweight-neural-architecture-search

Converting a mixed-precision quantization model for deployment on MCU

Open erectbranch opened this issue 2 years ago • 0 comments

Thanks for this amazing repo. I'm currently working on training an efficient low-precision backbone and deploying it on an ARM Cortex-M7 MCU with limited resources (512 KB RAM, 2 MB Flash). I believe I need to convert the mixed-precision quantization model to a TFLite model to achieve this.

Could you please give some guidance on how to perform this conversion and deployment? Thanks.
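For reference, this is roughly the conversion step I have in mind, in case it helps frame the question. It is only a minimal sketch using the standard TFLite post-training full-integer quantization path, and it assumes the searched backbone has already been exported to a TensorFlow SavedModel (the path `exported_backbone_savedmodel`, the input shape, and the calibration data are all placeholders). Note that this path produces uniform int8 weights and activations rather than a true mixed-precision layout, so it may only approximate the mixed-precision scheme from this repo:

```python
import numpy as np
import tensorflow as tf

# Hypothetical path to a SavedModel export of the searched backbone.
saved_model_dir = "exported_backbone_savedmodel"

def representative_dataset():
    # Placeholder calibration data; replace with real preprocessed input samples.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so the model can run on integer-only MCU kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("backbone_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

If something along these lines is the intended route, I'd also appreciate any pointers on how the repo's mixed-precision bit-width assignments are meant to survive (or be approximated by) such a conversion, and on running the resulting `.tflite` file on the MCU (e.g. with TensorFlow Lite for Microcontrollers).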

erectbranch · Jun 02 '23 04:06