
v0.0.4

Open peterbourgon opened this issue 2 years ago • 7 comments

peterbourgon avatar Sep 29 '23 16:09 peterbourgon

Hey Peter! I'm playing with trc locally and am pretty happy with it so far. Do you intend to keep working on this PR or have you come up with a different solution for this type of problem?

mdlayher avatar May 08 '24 18:05 mdlayher

Wow! My first user! 🥺

This is basically where I left things when I was last working on it with serious attention. Your timing isn't bad; I've recently started to poke around (locally) again. When you say "this type of problem", do you mean in-process request tracing in general, or something more specific?

My mental model of the domain and its concepts has evolved a bit since the initial commit. I have a vision for a next version, which would be functionally very similar, but would probably use a more industry-standard definition of a trace, prefer e.g. "span" over "event", and so on. I'm also not sure where the distribution part would fit. But this is all still quite speculative; I've just been noodling on it so far.

peterbourgon avatar May 08 '24 20:05 peterbourgon

When you say "this type of problem" do you mean in-process request tracing in general, or something more specific?

Yep exactly. I have a bit of a monolithic service that I'd like to instrument and trc (+ web interface) fits the bill nicely without having to deploy an otel collector or similar.
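For context, the kind of minimal instrumentation I have in mind is roughly this, going from memory of the eztrc examples, so the names and signatures here (eztrc.Middleware, eztrc.Handler, eztrc.Tracef) are approximate rather than copied from the current source:

```go
// Sketch only: eztrc function names and signatures are assumptions from
// memory of the README-style examples, not a verified copy of the API.
package main

import (
	"fmt"
	"net/http"

	"github.com/peterbourgon/trc/eztrc"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/work", handleWork)

	// Wrap the service handler so every request gets a trace,
	// categorized by method and path.
	traced := eztrc.Middleware(func(r *http.Request) string {
		return r.Method + " " + r.URL.Path
	})(mux)

	// Serve the trc web UI alongside the service itself.
	http.Handle("/trc", eztrc.Handler())
	http.Handle("/", traced)

	http.ListenAndServe(":8080", nil)
}

func handleWork(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	eztrc.Tracef(ctx, "starting work for %s", r.RemoteAddr) // events attach to this request's trace
	fmt.Fprintln(w, "done")
}
```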

I'll be following along and am happy to try out whatever improvements you have in mind.

mdlayher avatar May 08 '24 23:05 mdlayher

Great, thanks for the context. Do you run multiple instances of that service?

peterbourgon avatar May 10 '24 00:05 peterbourgon

Yep, I do. A Kubernetes Service sits in front of the deployment and determines which pod receives which request. If a request is valid and no distributed locks are held, it continues to be handled by the same pod until the RPC either succeeds or fails.

I can look at logs to see which pod was used for a given RPC, then pull up the trc web view for more details.

mdlayher avatar May 10 '24 14:05 mdlayher

Gotcha. Just FYI: in theory, it's possible for the trc UI on any instance to show trace data from every instance. Look at the trc-complex example for more detail.

Oversimplifying: the HTTP server wraps an abstract Searcher, which is usually implemented by a single in-process Collector instance. That server serves the HTML UI, but also a JSON API. The API can be consumed by a client, which also implements the Searcher interface, just like a Collector does. And there is a MultiSearcher (like an io.MultiReader) which implements Searcher over one or more other Searchers.

Putting all the pieces together: if you can find a way for each instance to stay up to date about its peers, and maintain a trc client for each peer instance, then every instance can create and maintain a MultiSearcher containing all of those peer clients, and mount e.g. a /trc-all endpoint using an HTTP server that wraps that continuously updated MultiSearcher. Hitting /trc-all on any instance would then give you the same user experience, querying all traces from all instances. (whew)

Hopefully that makes sense. Whether it's useful to you is a separate question :) which I'd be curious to learn more about.
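Very roughly, the wiring could look like the sketch below. Treat it as illustrative only: the constructor names (trc.NewDefaultCollector, trcweb.NewServer, trcweb.NewClient), the trcweb import path, and the static peer list are assumptions, not a verified copy of the current API.

```go
// Sketch of one instance exposing /trc (local traces) and /trc-all (all peers).
// Constructor names and import paths are assumptions; check the real packages.
package main

import (
	"net/http"

	"github.com/peterbourgon/trc"
	"github.com/peterbourgon/trc/trcweb" // assumed home of the HTTP server and client
)

func main() {
	// The local Collector gathers traces created by this instance and
	// implements the Searcher interface.
	collector := trc.NewDefaultCollector()

	// /trc serves the HTML UI and JSON API for this instance only.
	http.Handle("/trc", trcweb.NewServer(collector))

	// One client per peer instance; each client also implements Searcher by
	// calling that peer's JSON API. In practice you'd discover peers
	// dynamically (e.g. from Kubernetes endpoints) and rebuild this set as
	// pods come and go.
	peerAddrs := []string{"http://pod-1:8080/trc", "http://pod-2:8080/trc"} // placeholders
	searchers := trc.MultiSearcher{collector}
	for _, addr := range peerAddrs {
		searchers = append(searchers, trcweb.NewClient(http.DefaultClient, addr))
	}

	// MultiSearcher fans each search out to all of its members, like an
	// io.MultiReader, so /trc-all queries every instance.
	http.Handle("/trc-all", trcweb.NewServer(searchers))

	http.ListenAndServe(":8080", nil)
}
```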

peterbourgon avatar May 10 '24 17:05 peterbourgon

Nice! That would certainly be useful for my use case. I'll have to play around with the packages outside of eztrc.

mdlayher avatar May 10 '24 17:05 mdlayher