dpinthinker
> I don't have an STM32F4xx board to test, so I won't be able to do it. If someone does the conversion I can probably get it running in QEMU...
> So someone will need to edit the USART code and then test it (if that's the correct solution). You would want to test it on one of the STM32F4xx...
@alistair23 I just got my new board, an STM32 Nucleo-F446RE. What can I do next?
> Awesome! So the next step is to convert the [USART driver](https://github.com/tock/tock/blob/master/chips/stm32f4xx/src/usart.rs#L366) to use interrupts instead of DMA. Then test that on the board. Is that something you think you...
> I can try to convert the stm32f4 to non-DMA; I think the hardware is similar to the stm32f3. The question is how do you want to do this?...
> What about having a special stm32f4qemu crate where we swap out the uart with a non-DMA uart?

That sounds fine to me.
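To make the proposed change concrete, here is a minimal host-side sketch of the interrupt-driven transmit approach: instead of handing a whole buffer to a DMA engine, the driver keeps an index into the buffer and a "transmit register empty" interrupt handler pushes one byte at a time. The names (`Usart`, `transmit_buffer`, `handle_interrupt`) and the structure are illustrative assumptions, not the actual Tock `usart.rs` API or register layout.

```rust
// Host-side model of interrupt-driven UART transmit (no DMA).
// The `wire` Vec stands in for bytes the hardware has shifted out.
struct Usart {
    tx_buffer: Vec<u8>,
    tx_index: usize,
    wire: Vec<u8>,
}

impl Usart {
    fn new() -> Self {
        Usart { tx_buffer: Vec::new(), tx_index: 0, wire: Vec::new() }
    }

    // Store the buffer and "enable" the TXE interrupt; in this model the
    // caller simply starts invoking handle_interrupt() repeatedly.
    fn transmit_buffer(&mut self, buf: &[u8]) {
        self.tx_buffer = buf.to_vec();
        self.tx_index = 0;
    }

    // TXE interrupt handler: write the next byte into the data register,
    // or report completion once the buffer is drained. Returns true while
    // data is still pending.
    fn handle_interrupt(&mut self) -> bool {
        if self.tx_index < self.tx_buffer.len() {
            self.wire.push(self.tx_buffer[self.tx_index]);
            self.tx_index += 1;
            true
        } else {
            // In a real driver this is where the transmit-complete client
            // callback would fire and the TXE interrupt would be masked.
            false
        }
    }
}

fn main() {
    let mut uart = Usart::new();
    uart.transmit_buffer(b"hello");
    while uart.handle_interrupt() {}
    assert_eq!(uart.wire, b"hello");
    println!("transmitted {} bytes", uart.wire.len());
}
```

The byte-at-a-time loop is exactly what QEMU can emulate without a DMA model, which is the motivation for the conversion.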
@burui11087 Could you tell me under which conditions the quantized size will be bigger than the original one?
> Post-training quantization is a conversion technique that can **reduce model size** while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. From https://www.tensorflow.org/lite/performance/post_training_quantization
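As a rough illustration of where the size reduction comes from, here is a sketch of the affine (asymmetric) int8 mapping commonly used for weights in post-training quantization: each float is stored as `round(x / scale) + zero_point` in one byte instead of four, plus a small per-tensor overhead (scale and zero-point). The helper names are hypothetical, and the exact TFLite formula and metadata layout may differ.

```rust
// Affine int8 quantization sketch: map float32 weights into [-128, 127].
fn quantize(weights: &[f32]) -> (Vec<i8>, f32, i32) {
    // Include 0.0 in the range so zero is exactly representable.
    let min = weights.iter().cloned().fold(f32::INFINITY, f32::min).min(0.0);
    let max = weights.iter().cloned().fold(f32::NEG_INFINITY, f32::max).max(0.0);
    let scale = (max - min) / 255.0;
    let zero_point = (-128.0 - min / scale).round() as i32;
    let q = weights
        .iter()
        .map(|&x| ((x / scale).round() as i32 + zero_point).clamp(-128, 127) as i8)
        .collect();
    (q, scale, zero_point)
}

// Recover approximate floats; the error per value is at most one step (scale).
fn dequantize(q: &[i8], scale: f32, zero_point: i32) -> Vec<f32> {
    q.iter().map(|&v| (v as i32 - zero_point) as f32 * scale).collect()
}

fn main() {
    let weights = [0.0_f32, 0.5, -1.0, 2.0, 1.25];
    let (q, scale, zp) = quantize(&weights);
    let restored = dequantize(&q, scale, zp);
    for (orig, rest) in weights.iter().zip(restored.iter()) {
        assert!((orig - rest).abs() <= scale);
    }
    // 1 byte per weight vs 4 bytes for float32, plus scale/zero-point.
    println!("scale = {}, zero_point = {}", scale, zp);
}
```

This also hints at the size question above: the per-weight storage always shrinks roughly 4x, but each quantized tensor carries fixed metadata (scale, zero-point, etc.), so for a model with very few weights that overhead can dominate.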
@burui11087 Yes, I was just trying to talk about post-training quantization here. Thank you for your careful review. I am a beginner in tflite. If you have found any issue...
@snowkylin maybe we need to add quantization-aware training content later.