
Add support for tracing


Trace each inference step within MLServer. These traces can be pushed to Jaeger or similar OpenTracing backends.
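The OpenTracing API has since been superseded by OpenTelemetry, but the idea carries over directly. As a rough sketch (not MLServer's actual implementation), wrapping an inference call in a span and exporting it to Jaeger over OTLP could look like the following — the endpoint, service name, and `run_model` helper are all assumptions:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Jaeger (>= 1.35) accepts OTLP natively on port 4317; this endpoint assumes
# a local all-in-one Jaeger deployment.
provider = TracerProvider(
    resource=Resource.create({"service.name": "mlserver"})
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)


def infer(payload):
    # One span per inference step; attributes make it searchable in Jaeger.
    with tracer.start_as_current_span("inference") as span:
        span.set_attribute("model.name", "my-model")  # hypothetical model name
        return run_model(payload)  # hypothetical inference call
```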

adriangonz · Oct 14 '20 09:10

This could be a good chance to explore OpenTelemetry for tracing (and metrics?)
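On the metrics side, a hedged sketch of what OpenTelemetry could cover (e.g. an inference-latency histogram), assuming an OTLP-compatible collector on localhost:4317 — the metric name and attribute key are illustrative, not MLServer's API:

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# Periodically push metrics to an OTLP collector (endpoint is an assumption).
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="localhost:4317", insecure=True)
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter(__name__)
latency_ms = meter.create_histogram(
    "inference.latency", unit="ms", description="End-to-end inference latency"
)

# Recorded once per request; the attribute key is hypothetical.
latency_ms.record(12.3, {"model.name": "my-model"})
```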

adriangonz · Apr 01 '21 14:04

Hi @adriangonz!! I was checking whether there was anything about tracing when I came across this issue… and I would like to know the status of this task. Do you intend to address it soon, or is it something more long-term?

It would be cool to support this feature 😁.

Thank you!!

pablobgar · Nov 17 '22 08:11

Hey @pablobgar,

Yeah, definitely! It's currently prioritised to go ahead for the next release of MLServer, i.e. 1.3.x 🙂

adriangonz · Nov 17 '22 11:11

That's great news!! Thanks for your reply @adriangonz

pablobgar · Nov 29 '22 09:11

Hi @adriangonz,

I'll share some of the stats that I found myself needing during my experiments, which could be useful to keep in mind when implementing this feature (see the sketch after this list):

1. Model latency
2. Preprocessing latency (maybe we can add a separate code path for that in the custom ML model class, so it can be handled separately)
3. Decode and encode latency
4. Queuing latency
5. As mentioned in the closed issue, the stats exposed by the Triton Inference Server could also be useful: https://github.com/triton-inference-server/server/blob/main/docs/user_guide/trace.md
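To make those per-stage latencies visible in a trace, one rough sketch (not MLServer's actual API; `decode`, `preprocess`, `run_model`, and `encode` are hypothetical helpers) is a parent span with one child span per stage:

```python
from opentelemetry import trace

tracer = trace.get_tracer("mlserver.inference")


async def traced_predict(payload, enqueued_at: float, started_at: float):
    # The parent span covers the whole request; each child span surfaces one
    # of the latencies listed above, so Jaeger shows a per-stage breakdown.
    with tracer.start_as_current_span("inference") as span:
        # Queuing latency happened before this span started, so it is
        # recorded as an attribute rather than a child span.
        span.set_attribute("queue.wait_ms", (started_at - enqueued_at) * 1000)
        with tracer.start_as_current_span("decode"):
            inputs = decode(payload)        # hypothetical helper
        with tracer.start_as_current_span("preprocess"):
            features = preprocess(inputs)   # hypothetical helper
        with tracer.start_as_current_span("model"):
            outputs = run_model(features)   # hypothetical model call
        with tracer.start_as_current_span("encode"):
            return encode(outputs)          # hypothetical helper
```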

saeid93 · Jan 31 '23 00:01