AI-on-the-edge-device

Distributed compute

Open stefanh12 opened this issue 1 year ago • 10 comments

The Feature

I've been using AI-on-the-edge-device for a year now and it's really great! Since processing an image takes quite a long time, it would be great if we could offload it, either to multiple ESP32 devices or to a Docker container on HA or similar. Multiple ESP32 devices could then split the digit / analog work between them and really speed up the handling. As a container on HA I guess it would only take seconds to process, too.
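Purely as an illustration of what such offloading could look like (this is not an existing feature of the firmware): the ESP32 would POST pre-cropped ROIs to a small service running in a container, which evaluates them with a TFLite model and returns the class. The endpoint name, port, and model file below are hypothetical, and the sketch assumes Flask and tflite-runtime are installed.

```python
# Hypothetical container-side classifier for offloaded ROIs.
# Not part of AI-on-the-edge-device; a sketch of the requested feature only.
import numpy as np
from flask import Flask, request, jsonify
from tflite_runtime.interpreter import Interpreter

app = Flask(__name__)
interpreter = Interpreter(model_path="dig-class100_q.tflite")  # placeholder model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

@app.route("/classify", methods=["POST"])
def classify():
    # The ESP32 posts one pre-cropped ROI as raw pixels in the model's input dtype/shape.
    roi = np.frombuffer(request.data, dtype=inp["dtype"]).reshape(inp["shape"])
    interpreter.set_tensor(inp["index"], roi)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return jsonify({"class": int(np.argmax(scores))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```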

stefanh12 avatar Nov 22 '23 09:11 stefanh12

But what will you save with this?

My devices run with an interval of 2 minutes, which is already far more often than effectively needed. It also will not save you any power, as the ESP32-CAM does not support power saving.

A goal of this project is to make usage as easy as possible for non-techy people. Having to learn about Docker is way beyond what most users will want to look into.

BTW, @jomjol originally had the processing offloaded to Docker but moved it all to the ESP.

caco3 avatar Nov 22 '23 22:11 caco3

For me, running at 240 MHz, each round takes 3 minutes with analysis of 4 digits and 4 analog dials. I would like to speed this up, since a 3-4 minute delay equals around 70-90 liters of hot water when both kids shower. If we could distribute the compute between 2-4 ESP devices, it would drop to 1.5 minutes or less, resulting in better resolution and the possibility to take action before the hot water is all spent.

Power management on an ESP not running on battery is not really an issue.

stefanh12 avatar Nov 23 '23 05:11 stefanh12

@stefanh12: If you'd like to speed up the process, it is possible to optimize this without adding additional computing power / complexity.

The following things affect the processing time (a config sketch follows the list):

  • Take image / flash time (default: 5 s) -> can be reduced to save a few seconds
  • Alignment time -> EITHER switch off the alignment algorithm and use only the preset angle correction (alignment algo: off), OR reduce the expert parameters 'search field X / Y' (the time saving is quadratic with the field area), OR use smaller alignment markers (the smaller the marker, the quicker the processing)
  • Image evaluation time: use modern quantized tflite models (e.g. dig-class100_...q.tflite, ana-class100..._q.tflite)
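These settings live in the device's config.ini (also reachable through the web UI configuration page). A minimal sketch of the relevant fragment is below; the section and parameter names reflect my understanding of current firmware and may differ between versions, and the model file names are placeholders:

```
[TakeImage]
; flash/wait time before the picture is taken (default 5 s); lower it to save a few seconds
WaitBeforeTakingPicture = 2

[Alignment]
; set to off to skip the alignment algorithm and rely on the preset angle correction
AlignmentAlgo = off
; smaller search fields shrink the search area; the time saving is roughly quadratic
SearchFieldX = 20
SearchFieldY = 20

[Digits]
; quantized (_q) models evaluate noticeably faster; file name is a placeholder
Model = /config/dig-class100_q.tflite

[Analog]
Model = /config/ana-class100_q.tflite
```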

Just for reference:

  • I have 3 digits and 2 analogs + activated alignment (160 MHz) -> Processing time: 16 s


Slider0007 avatar Nov 23 '23 13:11 Slider0007

@Slider0007: In my opinion it would be good to add this to the wiki, so that anyone who needs to speed things up can find this parameter-tuning scenario quickly.

Edit: I have not applied any of the recommendations above and my Round Completed time is on average 35 s.

friedpa avatar Nov 23 '23 14:11 friedpa

That helped a lot, processing time is now way lower!

stefanh12 avatar Nov 23 '23 14:11 stefanh12

To improve it further you can set the CPU frequency to 240 MHz.
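If the firmware exposes this in config.ini (newer versions do, in the system section), the change would look roughly like this; the exact parameter name is my assumption and may differ between versions:

```
[System]
; run the SoC at 240 MHz instead of 160 MHz to shorten the evaluation round
CPUFrequency = 240
```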

friedpa avatar Nov 23 '23 15:11 friedpa

In my opinion it would be good to add this to the wiki.

We are always happy to get support. I am also willing to show you how to extend the documentation!

I have a device which processes 3 digits and 4 pointers, taking 50 s at 160 MHz.

caco3 avatar Nov 23 '23 17:11 caco3

Ok, let me know how it works.

friedpa avatar Nov 23 '23 17:11 friedpa

Can you have a look at https://github.com/jomjol/AI-on-the-edge-device-docs/blob/main/README.md for an introduction? Afterwards we could do an online meeting to clarify open questions.

caco3 avatar Nov 23 '23 17:11 caco3

Ok, I'll do it...

friedpa avatar Nov 23 '23 19:11 friedpa