Vikram Dattu
@WrinkLeeTR Currently, esp-nn works only with the tflite-micro model format. Convert your model to the tflite format and use it with: https://github.com/espressif/esp-tflite-micro `esp-nn` optimisations would then be applied automatically...
In the `main_functions.cc` file, you can increase the tensor arena size and allocate the buffer in SPIRAM. The S3-EYE has 4MB of SPIRAM.
@AIWintermuteAI yes, the optimizations will be included in esp-nn for the esp32-p4. Once added, they will be picked up transparently by esp-tflite-micro. Unfortunately, I do not have any ETA at the moment...
@nicklasb we have just pushed the first-cut support for ESP32-P4:
- Enabled generic optimisations from esp-nn for the esp32p4 chip
- Optimised the convolution function using inline assembly

With these optimisations...
Hi @nicklasb you're right. This is just a start: the optimizations are at an early stage, and only the conv function is optimized right now. This will definitely get better...
> Great, if you report on what operations are being optimized I can help out with performance numbers and profiling here. Because from what I've gathered from the marketing...
@nicklasb thanks for the clarification. Actually, the ESP32-S3 also has AI/vector instructions; the set included in the P4 is a somewhat refined version, however. Please find a patch to optimise the convolution...
Hi @nicklasb I used an existing model, already exported by a GitHub user, from [here](https://github.com/muhammedakyuzlu/yolov5-tflite-cpp), which fits our requirement. I used the guide [here](https://docs.ultralytics.com/modes/export/#usage-examples) to produce the 2.7MB model with the official...
Hello @AIWintermuteAI thanks for reporting yet another issue. Please find the patch attached and let me know if this fixes the issue. [memory_write_overflow.patch](https://github.com/espressif/esp-nn/files/12831488/memory_write_overflow.patch)
Hi @AIWintermuteAI the fix has been pushed to the `esp-nn` repo.