
AggregatorRegistry assumes all workers will have metrics setup

orestis opened this issue Mar 26 '18 · 5 comments

Hello,

thanks for prom-client, it's a huge time saver for us!

I have run into an issue with the cluster aggregator support. It seems that unless all the workers of a node cluster are set up with prom-client, the aggregator won't work.

The issue seems to be in https://github.com/siimon/prom-client/blob/bb09c6dea7941f54aa7a62fcfa5f052af9d00d5a/lib/cluster.js#L41-L74

We have workers in our node cluster that do not import prom-client for various reasons, hence I need a way to instruct prom-client to not bother contacting them.
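
To make the failure mode concrete, here is a minimal sketch (the worker roles and the timing are made up for illustration; clusterMetrics is callback-based in the prom-client versions of this era and promise-based in later releases):

```js
const cluster = require('cluster');

if (cluster.isMaster) {
  const { AggregatorRegistry } = require('prom-client');
  const aggregatorRegistry = new AggregatorRegistry();

  cluster.fork({ WORKER_ROLE: 'api' });   // this worker requires prom-client
  cluster.fork({ WORKER_ROLE: 'batch' }); // this one never does

  setTimeout(() => {
    // The aggregator sends GET_METRICS_REQ to *every* worker. The 'batch'
    // worker has no listener, so its request times out and the whole
    // aggregation fails, even though the 'api' worker responded fine.
    aggregatorRegistry.clusterMetrics((err, metrics) => {
      if (err) return console.error('aggregation failed:', err);
      console.log(metrics);
    });
  }, 1000);
} else if (process.env.WORKER_ROLE === 'api') {
  const client = require('prom-client'); // requiring it installs the message listener
  client.collectDefaultMetrics();
} else {
  // 'batch' worker: deliberately does not touch prom-client
  setInterval(() => {}, 1000); // pretend to do unrelated work
}
```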

I'm happy to provide a PR for this, but wanted to present possible solutions first:

  1. Instead of rejecting the promise when a timeout happens, silently ignore the timeout (a rough sketch of (1) and (2) follows this list).
  2. Same as (1) but report non-responsive workers as a separate metric for monitoring.
  3. Instead of sending the GET_METRICS_REQ to all the workers, maintain a list of workers that have set up the listeners and send only to those.
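
A rough sketch of what (1) and (2) could look like inside the aggregation logic; this is not the actual lib/cluster.js code, the response message shape is assumed, and the request-id matching the real implementation does is omitted:

```js
function requestWorkerMetrics(worker, timeoutMs) {
  return new Promise(resolve => {
    const onMessage = message => {
      if (message && message.type === 'GET_METRICS_RES') { // response type assumed
        cleanup();
        resolve(message.metrics);
      }
    };
    const cleanup = () => {
      clearTimeout(timer);
      worker.removeListener('message', onMessage);
    };
    const timer = setTimeout(() => { cleanup(); resolve(null); }, timeoutMs); // (1): swallow the timeout
    worker.on('message', onMessage);
    worker.send({ type: 'GET_METRICS_REQ' });
  });
}

async function clusterMetricsIgnoringTimeouts(workers, timeoutMs = 5000) {
  const responses = await Promise.all(workers.map(w => requestWorkerMetrics(w, timeoutMs)));
  const missing = responses.filter(r => r === null).length; // (2): could feed a separate metric
  return { metrics: responses.filter(Boolean), missing };
}
```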

Thoughts?

orestis · Mar 26 '18 15:03

@zbjornson thoughts?

SimenB · Mar 26 '18 15:03

Interesting situation...

Re: your possible solutions:

  1. This would silently create wrong metric values where you were expecting workers to respond. E.g. if you were summing CPU or memory usage, you'd see artificial dips that are actually because a worker didn't report.

  2. Seems like a hassle to monitor this auxiliary metric...

  3. Possibly, depending on how it's done. If you did this so that on boot the workers message the master to indicate that they opt in to metrics, users of this lib would have to ensure that they construct the AggregatorRegistry before forking workers. The alternative would be a discovery phase (defaulting to, say, one minute?) during which the master polls workers to see if they've set up listeners; workers that never respond during the discovery phase wouldn't be polled in the future. (A minimal sketch of the boot-time opt-in variant follows.)
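
A minimal sketch of that opt-in variant, assuming a made-up 'prom-client:listening' message name; nothing like this exists in the lib today:

```js
const cluster = require('cluster');

if (cluster.isMaster) {
  const metricsWorkers = new Set();

  // Must be registered *before* cluster.fork(), otherwise an early
  // announcement can be missed -- this is the ordering constraint above.
  cluster.on('message', (worker, message) => {
    if (message && message.type === 'prom-client:listening') {
      metricsWorkers.add(worker.id);
    }
  });
  cluster.on('exit', worker => metricsWorkers.delete(worker.id));

  cluster.fork();
  cluster.fork();
  // ...aggregation would send GET_METRICS_REQ only to workers in metricsWorkers.
} else {
  const client = require('prom-client');           // worker-side setup
  client.collectDefaultMetrics();
  process.send({ type: 'prom-client:listening' }); // opt-in announcement
}
```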

I'll add another option:

  4. Same as (1), but opt-in: something like registry.clusterMetrics({allowPartialAggregation: true}) (sketched below).
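
From the caller's side that could look roughly like this; the allowPartialAggregation option and the options-plus-callback signature are the proposal, not something prom-client supports today:

```js
const client = require('prom-client');
const aggregatorRegistry = new client.AggregatorRegistry();

aggregatorRegistry.clusterMetrics({ allowPartialAggregation: true }, (err, metrics) => {
  if (err) return console.error(err); // only "real" failures; silent workers are skipped
  console.log(metrics);               // aggregated from the workers that did respond
});
```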

zbjornson · Mar 26 '18 17:03

I've done a quick implementation for (3) here: https://github.com/siimon/prom-client/pull/182 -- not ready to merge yet, but just to foster discussion.

I didn't think of the ordering issues (create registry then fork vs opposite). I'll think about it.

orestis · Mar 26 '18 17:03

I'm trying to think how this can be done in a way that gives the consumer of the library control over the initialisation process, so I'm thinking aloud a bit here:

  • The current implementation that contacts all workers has no ordering issue, but suffers when workers are non-responsive by design. It also depends on cluster.js being required in all the workers; this currently happens via the top-level index.js, so a plain require('prom-client') is enough.

  • The implementation at #182 expects the AggregatorRegistry to be created in the master before forking, and also that workers require prom-client.

  • Since workers interested in metrics must require prom-client anyway, I think it's safe for the cluster support in workers to remain implicit, as it is now. This way worker code doesn't have to worry about whether it's running in a cluster or not.

  • The master process of a cluster must explicitly opt in to using the AggregatorRegistry and exposing the metrics via a web endpoint, so the master code is already aware of the cluster in play. In our codebase we try to abstract this away, but we do have to check for cluster.isMaster and switch to a different behaviour (see the sketch after this list).

  • The simple approach is to require users of the library to create the AggregatorRegistry before forking (as mentioned already). However, I understand that this will break existing code that expected things to be automatic.

  • Having a discovery phase etc. might be a good long-term solution that would also deal with workers dying. I've already added some code that tries to do the right thing, but I haven't tested it thoroughly.

  • Perhaps in the interests of backwards compatibility, this new functionality could be put behind an option or a different class altogether, e.g. CoordinatedAggregatorRegistry. Users of the current AggregatorRegistry would still get the previous behaviour.

  • I will test a bit more against our codebase and report back.
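
To make the master/worker split concrete, here is one way the pieces above could be wired together (a sketch, not something prescribed by prom-client; the port and worker count are arbitrary, and clusterMetrics is callback-based in the versions of this era):

```js
const cluster = require('cluster');
const http = require('http');

if (cluster.isMaster) {
  const { AggregatorRegistry } = require('prom-client');
  const aggregatorRegistry = new AggregatorRegistry();

  for (let i = 0; i < 4; i++) cluster.fork();

  // The master explicitly opts in: it owns the aggregate /metrics endpoint.
  http.createServer((req, res) => {
    if (req.url !== '/metrics') {
      res.statusCode = 404;
      return res.end();
    }
    aggregatorRegistry.clusterMetrics((err, metrics) => {
      if (err) {
        res.statusCode = 500;
        return res.end(err.message);
      }
      res.writeHead(200, { 'Content-Type': aggregatorRegistry.contentType });
      res.end(metrics);
    });
  }).listen(3001);
} else {
  // Workers stay oblivious to the cluster: requiring prom-client is enough
  // to install the message listener that answers the master's requests.
  const client = require('prom-client');
  client.collectDefaultMetrics();
}
```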

orestis · Mar 27 '18 08:03

I've updated the PR; it's now deployed in our staging environment and seems to be ticking along nicely. If the general approach is accepted, I can write some tests covering this new option.

orestis · Mar 27 '18 10:03