feat: stat_middleware
Prometheus is good, but it performs file operations on every middleware hook, which might become a bottleneck.
The suggested middleware exposes similar metrics, but it keeps statistics in memory, and they can be queried by a simple task when needed.
Since every worker process starts its own middleware instance, requesting stats means gathering them from each worker's middleware instance.
I suggest starting an additional worker task inside the middleware on a special pub-sub stat broker, so that kiqing its task leads to execution on every main worker process and gathers all the results together.
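To illustrate the in-memory approach, here is a minimal sketch (not the code from this PR) of such a middleware. It assumes taskiq's standard `TaskiqMiddleware` hooks (`pre_execute`, `post_execute`, `on_error`); the class and method names are illustrative only.

```python
# Minimal sketch: keep per-task counters in process memory instead of on disk.
from collections import Counter

from taskiq import TaskiqMiddleware, TaskiqMessage, TaskiqResult


class InMemoryStatsMiddleware(TaskiqMiddleware):
    """Collects execution counters in the worker process memory."""

    def __init__(self) -> None:
        super().__init__()
        self.started: Counter[str] = Counter()
        self.finished: Counter[str] = Counter()
        self.errors: Counter[str] = Counter()

    def pre_execute(self, message: TaskiqMessage) -> TaskiqMessage:
        # Count task starts per task name.
        self.started[message.task_name] += 1
        return message

    def post_execute(self, message: TaskiqMessage, result: TaskiqResult) -> None:
        # Count completed executions.
        self.finished[message.task_name] += 1

    def on_error(
        self,
        message: TaskiqMessage,
        result: TaskiqResult,
        exception: BaseException,
    ) -> None:
        # Count failures separately.
        self.errors[message.task_name] += 1

    def get_stats(self) -> dict:
        # Snapshot that a stats task can return to the caller.
        return {
            "started": dict(self.started),
            "finished": dict(self.finished),
            "errors": dict(self.errors),
        }
```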
This pull request is just a proof-of-concept implementation of the idea and includes:
- metrics classes
- a middleware that uses the metrics
- tests for the metrics
- a demo stats script
I had to bump the minimal required mypy and black versions to support the modern generic syntax.
In short, I'm just sharing the idea of request-oriented statistics gathering via a pub-sub broker.
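For context, a request for stats could look like the hypothetical sketch below. It reuses the `InMemoryStatsMiddleware` from the sketch above; the task names and the use of `InMemoryBroker` are assumptions for a standalone example, not this PR's actual API. In the PR's idea, the stats task would be kiq'ed over the special pub-sub stat broker so that every worker process answers and the snapshots get merged.

```python
# Hypothetical usage sketch: a stats task returning the middleware's in-memory
# snapshot. InMemoryBroker is used only so the example runs standalone; the
# PR's idea is a pub-sub stat broker that fans the task out to every worker.
import asyncio

from taskiq import InMemoryBroker

stats_middleware = InMemoryStatsMiddleware()  # from the sketch above
broker = InMemoryBroker().with_middlewares(stats_middleware)


@broker.task
async def get_stats() -> dict:
    # Runs inside a worker process, so it returns that worker's counters.
    return stats_middleware.get_stats()


@broker.task
async def do_work(x: int) -> int:
    return x * 2


async def main() -> None:
    await broker.startup()
    await (await do_work.kiq(21)).wait_result()
    # With the pub-sub stat broker this kiq would reach every worker process
    # and the caller would gather one snapshot per worker.
    stats = await (await get_stats.kiq()).wait_result()
    print(stats.return_value)
    await broker.shutdown()


if __name__ == "__main__":
    asyncio.run(main())
```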