gostatsd
Support multiple backends of same type
I'd like to be able to forward metrics to multiple endpoints of the statsdaemon
type. Right now this doesn't appear to be possible.
I was thinking about how it might be achieved while maintaining backwards compatibility. The config file could be changed to look like the following:
[backend.statsd1]
type = "statsdaemon"
address = "docker.local:8125"

[backend.statsd2]
type = "statsdaemon"
address = "docker.local:8126"

[backend.aws]
type = "aws"
max_retries = 4

# we'd still support the old-style keys
[graphite]
address = "localhost:2003"
What do you think of this proposal? I'd be happy to submit a PR implementing this if you like it. Also happy to make changes to the design as per your feedback.
I think it makes sense; I'm kinda curious about the use case, though. An alternative in this specific case would be to skip the statsdaemon backend type and implement a new handler instead. The problem with statsdaemon as a backend is that you get a weird mix: timers have all their values sent, but counters and gauges are aggregated. All the metrics are also delayed, because they're only sent after the flush interval. Overall there's some loss of fidelity, although that may be a benefit in some cases.
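To make that mix concrete, here's a toy illustration (not gostatsd code) of what one flush interval does to each metric type:

package main

import "fmt"

func main() {
	// Datagrams received during one flush interval:
	//   hits:1|c  hits:2|c  hits:3|c  latency:10|ms  latency:20|ms
	counterSum := 0
	for _, inc := range []int{1, 2, 3} {
		counterSum += inc // counters collapse into a single sum
	}
	timerValues := []float64{10, 20} // timers keep every sample

	// At flush time, the statsdaemon backend emits the aggregate for the
	// counter but every individual value for the timer, all of it delayed
	// until the interval ends.
	fmt.Printf("hits:%d|c\n", counterSum)
	for _, v := range timerValues {
		fmt.Printf("latency:%g|ms\n", v)
	}
}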
An alternative to this, which I've been considering for data capture, is to "tee" the output at the Handler level, rather than after aggregation. This would be a new Handler that sits either before the BackendHandler (if you want the pipeline-added tags) or after the Parser (if you don't).
Because it's inline, it would allow for the data to be pushed immediately, rather than after flushing. As it's a new feature it wouldn't have any backwards compatibility concerns.
It could have a file output, statsd output, log output, etc.
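A minimal sketch of that tee, assuming a pipeline interface shaped like the DispatchMetric call discussed below; the Handler and Metric types here are stand-ins for illustration, not the actual gostatsd types:

package tee

import "context"

type Metric struct { // stand-in for gostatsd's metric type
	Name  string
	Value float64
}

type Handler interface { // assumed pipeline interface
	DispatchMetric(ctx context.Context, m *Metric)
}

// teeHandler pushes each metric to a side output immediately, with no
// aggregation and no flush delay, then hands it to the next stage.
type teeHandler struct {
	side Handler // file, statsd, or log writer
	next Handler // e.g. the BackendHandler
}

func (t *teeHandler) DispatchMetric(ctx context.Context, m *Metric) {
	t.side.DispatchMetric(ctx, m)
	t.next.DispatchMetric(ctx, m)
}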
The biggest hazard is that you get individual metrics through the DispatchMetric pipeline, so 1 datagram with N metrics turns into N datagrams with 1 metric each (unless you pay a lock penalty). That can probably be handled by changing the entire DispatchMetric pipeline to take a []*Metric. Combined with the BatchReader, that means N datagrams with 1 metric each could potentially turn into 1 datagram with N metrics.
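Building on the sketch above (and reusing its Metric type), the side output could then serialize a whole batch into one datagram. The []*Metric signature follows the change described in the previous paragraph, and the gauge-only formatting is a simplification:

import (
	"bytes"
	"fmt"
	"io"
)

// writeBatch serializes N metrics into a single datagram and writes it
// in one send, instead of N sends of one metric each.
func writeBatch(w io.Writer, ms []*Metric) error {
	var buf bytes.Buffer
	for _, m := range ms {
		fmt.Fprintf(&buf, "%s:%g|g\n", m.Name, m.Value) // one statsd line per metric
	}
	_, err := w.Write(buf.Bytes())
	return err
}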
I see no problems with your design though, if it's easier and suits your needs, then go for it! I'm likely to add the "tee" in the next couple weeks either way.
Just a note, @aidansteele: please see the Contributors section of the README and follow the bit about signing the CLA. It's something we have to check off before we can merge a PR.
https://github.com/atlassian/gostatsd#contributors
Thanks for the interest. We're happy to hear people are using this and want to get involved :)