
FEATURE: ML model + tools for analyzing Indy production issues

jdcasey opened this issue on Feb 13, 2020

Implement an ML model to identify performance issues in running Indy services, based on aggregated logs and metrics in ElasticSearch. Use the ML model to:

- Predict performance during performance-testing executions (?)
- Identify performance bottlenecks and recommend areas to optimize
- Trigger automated investigations when Indy SLOs are breached (see the sketch after this list)
- Provide a Jupyter notebook containing initial investigation results, linked to the datasets as appropriate
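
As a rough illustration of the "trigger automated investigations" item above, here is a minimal sketch that flags outlying request latencies with a stock scikit-learn Isolation Forest. The feature layout, contamination rate, and synthetic data are placeholder assumptions, not a commitment to any particular model or to the Indy data schema.

```python
# Illustrative sketch only: flag anomalous requests so an SLO breach
# could trigger a deeper investigation. All feature names, shapes, and
# thresholds below are assumptions, not part of the Indy schema.
import numpy as np
from sklearn.ensemble import IsolationForest

# Placeholder feature matrix: one row per request, columns such as
# total duration and per-subsystem timings (values here are synthetic).
rng = np.random.default_rng(42)
features = rng.lognormal(mean=3.0, sigma=0.4, size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# predict() returns -1 for points the model considers anomalous.
labels = model.predict(features)
suspect_requests = np.where(labels == -1)[0]
print(f"{len(suspect_requests)} requests flagged for investigation")
```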

We will provide aggregated log events and/or metric events. Metric events will contain opentracing.io-compatible spans, with IDs that tie the events together in context. Aggregated log events also contain some contextual information, but there is a lot of overlap with the metric data. Span data in our metric events will contain measurements from subsystems (and threaded-off sub-processes) for each request. We may also be able to provide aggregated metrics (think Prometheus, not OpenTracing) for system-level metrics such as memory usage.
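
To make the data shape concrete, below is a minimal sketch of how span-bearing metric events could be pulled from Elasticsearch and flattened into per-request feature rows, assuming an elasticsearch-py 8.x client. The index pattern, field names (`traceId`, `operationName`, `duration`), and host are illustrative assumptions, not the actual Indy schema.

```python
# Minimal sketch, assuming an elasticsearch-py 8.x client and
# OpenTracing-style span documents; all index and field names below
# are placeholders, not a documented Indy/ElasticSearch schema.
from elasticsearch import Elasticsearch
import pandas as pd

es = Elasticsearch("http://elasticsearch.example.com:9200")  # placeholder URL

resp = es.search(
    index="indy-metrics-*",                     # assumed index pattern
    query={"range": {"@timestamp": {"gte": "now-1h"}}},
    size=10_000,
)

spans = pd.json_normalize([hit["_source"] for hit in resp["hits"]["hits"]])

# Tie spans back together by trace/request ID and build one feature row
# per request: summed duration for each subsystem (span operation name).
per_request = spans.pivot_table(
    index="traceId",                            # assumed correlation-ID field
    columns="operationName",                    # assumed subsystem/span name
    values="duration",                          # assumed span duration field
    aggfunc="sum",
).fillna(0)

print(per_request.head())
```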

You may use the programming language of your choice, but the chosen processing framework must run in an OpenShift environment (Kubernetes with restricted container access / privileges).
