Mycodo
Distributed/consolidated services: is `mycododaemon` the only service that actually needs to run on the Pi?
Is your feature request related to a problem? Please describe.
This question arises from performance bottlenecks I'm experiencing on the individual Pis (e.g. a slow UI, roughly proportional to the number of inputs/outputs and how heavily they're used).
Describe the solution you'd like
It's quite obvious that `mycodoinfluxdb` can run on a separate machine, and indeed that's how I have my cluster set up: the handful of Raspberry Pis around my warehouse all talk to one `influxdb` instance on my desktop machine.
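Concretely, the per-Pi override I have in mind looks something like the sketch below. The service name, image tag, and the `INFLUXDB_HOST`/`INFLUXDB_PORT` variables are illustrative placeholders, not necessarily what Mycodo's own docker-compose file uses:

```yaml
# Sketch of a per-Pi compose override: no local influxdb service; the
# daemon points at the shared instance on the desktop. Service name,
# image, and variables are illustrative, not Mycodo's actual keys.
services:
  mycododaemon:
    image: mycodo/mycodo-daemon:latest   # hypothetical image tag
    environment:
      INFLUXDB_HOST: "192.168.1.50"      # the one shared influxdb on the desktop
      INFLUXDB_PORT: "8086"
    restart: always
  # deliberately no influxdb service defined on the Pi at all
```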
For now I have `mycodoflask`, `mycodonginx`, and `mycododaemon` running on each Pi, but this seems redundant. I'm assuming that `mycododaemon` is the only one that actually interfaces with the hardware, and therefore the only one that needs to be on the individual Pi. Is this correct?
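My reasoning is that only the daemon touches the Pi's buses and GPIO, while flask and nginx are plain web services. Sketched in compose terms; the device paths below are the usual Raspberry Pi nodes, and whether Mycodo's own compose passes them exactly this way is an assumption on my part:

```yaml
# Only the daemon needs the Pi's hardware exposed; flask and nginx have
# no device access at all. Device paths are the standard Raspberry Pi
# nodes; the exact mapping Mycodo uses is an assumption.
services:
  mycododaemon:
    image: mycodo/mycodo-daemon:latest   # hypothetical image tag
    devices:
      - /dev/gpiomem                     # GPIO
      - /dev/i2c-1                       # I2C bus
      - /dev/spidev0.0                   # SPI
    volumes:
      - /sys/bus/w1:/sys/bus/w1          # 1-Wire sensors (e.g. DS18B20)
    restart: always
```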
I'm not sure of the specific implications or details of running `nginx` and `flask` elsewhere on the network (e.g. on my desktop, in a public cloud, etc.). For instance, one `nginx` instance might be sufficient, but perhaps you still need a 1:1 pairing of a `daemon` and a `flask` (even if the `flask` can live elsewhere).
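What I'm picturing, very roughly, is a single nginx on the desktop reverse-proxying to the flask instance that still runs next to each daemon. This is generic nginx-in-compose, not anything Mycodo ships, and the hostnames and port are placeholders:

```yaml
# One nginx on the desktop, reverse-proxying to the flask that still
# runs 1:1 with the daemon on each Pi. proxy.conf would just contain
# server blocks with e.g. "proxy_pass http://pi-01.local:8080;" per Pi
# (hostnames and port are placeholders).
services:
  nginx:
    image: nginx:stable
    ports:
      - "443:443"
    volumes:
      - ./proxy.conf:/etc/nginx/conf.d/default.conf:ro
    restart: always
```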
Additional context
All of this conjecture is specifically in the context of my on-prem k3s cluster of Mycodo instances, but I imagine it could apply just the same to "normal" (non-clustered) dockerized installs.
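For concreteness, in k3s terms the shape I'm aiming for is the daemon as a DaemonSet pinned to the Pi nodes, with InfluxDB (and possibly the frontend) running as ordinary Deployments elsewhere. A rough sketch, with the node label, image name, and environment variable all illustrative:

```yaml
# Sketch: run only the Mycodo daemon on the Pi nodes of the k3s cluster.
# The node label, image name, and env var are illustrative placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mycodo-daemon
spec:
  selector:
    matchLabels:
      app: mycodo-daemon
  template:
    metadata:
      labels:
        app: mycodo-daemon
    spec:
      nodeSelector:
        hardware: raspberry-pi                # label applied to the Pi nodes
      containers:
        - name: mycodo-daemon
          image: mycodo/mycodo-daemon:latest  # hypothetical image
          securityContext:
            privileged: true                  # simplest way to reach GPIO/I2C from a pod
          env:
            - name: INFLUXDB_HOST             # hypothetical; wherever the shared influxdb lives
              value: influxdb.default.svc.cluster.local
          volumeMounts:
            - name: dev
              mountPath: /dev
      volumes:
        - name: dev
          hostPath:
            path: /dev
```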
Cheers!
Sorry for the late reply. Yes, the daemon is the only service that needs to run on the Pi; the frontend is merely there to change settings in the database that the daemon queries. I haven't attempted to separate the frontend from the daemon onto different systems in a long time. There is a rudimentary, experimental remote-admin system (not normally visible to users) that allows one frontend to show data from multiple backends, but no control aspects are implemented.