indy
Implement an ML model to identify performance issues in running Indy services, based on aggregated logs and metrics in Elasticsearch. Use the ML model to address any or all of the following:
- Predict performance during performance testing executions
- Identify performance bottlenecks and recommend areas to optimize
- Trigger automated investigations when Indy SLOs are breached (see the sketch after this list)
- Provide a Jupyter notebook containing initial investigation results, linked to the datasets as appropriate
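To make the SLO-triggered investigation idea concrete, here is a minimal sketch assuming a simple per-request latency SLO over a window of request metrics. The `SLO_P95_MS` threshold, the `duration_ms` field, and the `open_investigation()` hook are hypothetical placeholders; the real trigger would plug into whatever SLO definitions and automation Indy already uses.

```python
# Sketch only: a trivial SLO check over one window of per-request latencies.
# SLO_P95_MS, the duration_ms field, and open_investigation() are hypothetical.
import pandas as pd

SLO_P95_MS = 500.0  # assumed SLO: p95 request latency <= 500 ms

def check_slo(durations_ms: pd.Series) -> bool:
    """True when observed p95 latency breaches the assumed SLO."""
    return durations_ms.quantile(0.95) > SLO_P95_MS

def open_investigation(window: pd.DataFrame) -> None:
    """Hypothetical hook: kick off an automated investigation for this window,
    e.g. parameterize and run the analysis notebook against the same data."""
    p95 = window["duration_ms"].quantile(0.95)
    print(f"SLO breach: p95={p95:.0f} ms over {len(window)} requests")

def evaluate_window(window: pd.DataFrame) -> None:
    """Evaluate one aggregation window of request-level metrics."""
    if check_slo(window["duration_ms"]):
        open_investigation(window)

# Example with synthetic data:
evaluate_window(pd.DataFrame({"duration_ms": [120, 240, 180, 900, 1500]}))
```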
We will provide aggregated log events and/or metric events. Metric events will contain opentracing.io-compatible spans, with IDs that tie the events together in context. Aggregated log events also contain some contextual information, but there is a lot of overlap with the metric data. Span data in our metric events will contain measurements from subsystems (and threaded-off sub-processes) for each request. We may also be able to provide aggregated metrics (think Prometheus, not OpenTracing) for system-level metrics like memory usage.
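As a rough illustration of how the span and log events could be tied together by their IDs, here is a notebook-style sketch that pulls both event types from Elasticsearch, joins them on a trace ID, and aggregates per-request features. The index names (`indy-metrics`, `indy-logs`) and field names (`trace_id`, `span_id`, `duration_ms`, `level`, `@timestamp`) are assumptions about the eventual mappings, and the IsolationForest at the end is just one illustrative choice of unsupervised model, not a prescribed approach.

```python
# Sketch only: join span (metric) events with log events by trace ID and build
# per-request features. Index and field names below are assumptions.
from elasticsearch import Elasticsearch
from sklearn.ensemble import IsolationForest
import pandas as pd

es = Elasticsearch("http://localhost:9200")  # assumed dev endpoint

def fetch(index: str, query: dict, size: int = 10_000) -> pd.DataFrame:
    """Pull matching documents from one index into a DataFrame."""
    resp = es.search(index=index, body={"query": query, "size": size})
    return pd.DataFrame([hit["_source"] for hit in resp["hits"]["hits"]])

# Span events: one row per measured subsystem / threaded-off sub-process per request.
spans = fetch("indy-metrics", {"range": {"@timestamp": {"gte": "now-1h"}}})

# Log events: keep warnings/errors, since much of the rest overlaps the span data.
logs = fetch("indy-logs", {"terms": {"level": ["WARN", "ERROR"]}})

# Tie both event streams together in request context via the shared trace ID.
log_counts = (logs.groupby("trace_id").size()
                  .rename("error_log_count").reset_index())
joined = (spans.merge(log_counts, on="trace_id", how="left")
               .fillna({"error_log_count": 0}))

# Per-request feature vector for the model.
features = joined.groupby("trace_id").agg(
    total_duration_ms=("duration_ms", "sum"),
    slowest_span_ms=("duration_ms", "max"),
    span_count=("span_id", "nunique"),
    error_log_count=("error_log_count", "max"),
)

# One possible unsupervised starting point: flag anomalous requests for review.
features["anomaly"] = IsolationForest(
    contamination=0.01, random_state=0
).fit_predict(features)
```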