client_java
Redesign of Collector architecture.
Redesign of collector-related concepts to simplify the library.
Tries to overcome the problems described in #901.
The PR tries to stay as compatible with the old API as possible, so many things can still be improved.
What has been done:
- a single `Collector` interface, with the registry itself implementing `Collector`
- removed name-based pre-filtering
- introduced composite collectors and collector wrappers
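To illustrate the first and last points, here is a minimal sketch of the single-interface idea. The class names and the `List<String>` sample representation are illustrative stand-ins, not the actual client_java API:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in: the snapshot types from prometheus-metrics-model
// are reduced to plain Strings so the example stays self-contained.
interface Collector {
    List<String> collect();
}

// The registry is itself just a composite Collector, so anything that
// consumes a Collector can consume a whole registry transparently.
class Registry implements Collector {
    private final List<Collector> collectors = new ArrayList<>();

    void register(Collector c) {
        collectors.add(c);
    }

    @Override
    public List<String> collect() {
        // no name-based pre-filtering: just concatenate the children's output
        List<String> out = new ArrayList<>();
        for (Collector c : collectors) {
            out.addAll(c.collect());
        }
        return out;
    }
}
```

Since the registry is a `Collector`, registries can also be nested inside other registries or wrapped like any other collector.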
What is to be done:
- I'm not a Java programmer; this is the first piece of Java code I've written in years
- so I'm most probably doing some things non-idiomatically, e.g. not using the right collection types
- I also probably shouldn't rely so much on recent Java features
- added a few tests, but more are needed
- JvmMetrics and the other subprojects should be reviewed to utilize the new features; I only updated JvmBufferPool as an example
- find a good place for collector builders and transformations (in Scala I'd put them in Collector's companion object; I don't know how it's usually done in Java)
- expose transformations directly on the collector instance
- make sure all the consumers (samples, HTTP server interfaces, adapters) use the generic `Collector` rather than `PrometheusRegistry`
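The last two points could look roughly like this. All names here (`Collector`, `map`, `TextExporter`) are illustrative, not the actual client_java API, and samples are again reduced to Strings for brevity:

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

interface Collector {
    List<String> collect();

    // A transformation exposed directly on the collector instance:
    // returns a wrapper that rewrites each sample line on collect().
    default Collector map(UnaryOperator<String> f) {
        return () -> collect().stream().map(f).collect(Collectors.toList());
    }
}

// A consumer (e.g. an HTTP exporter) that depends only on the generic
// Collector: it works equally well with a single metric, a composite,
// a wrapped/transformed collector, or a whole registry.
class TextExporter {
    String scrape(Collector source) {
        return String.join("\n", source.collect());
    }
}
```

Because the exporter takes `Collector` rather than a concrete registry type, transformations compose freely before the data ever reaches the output format.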
What I'd consider to improve in future:
- merge the implementation paths for the different types of counters; the current split forces huge, unnecessary code duplication
- get rid of unnecessary copying and validation (e.g. in all the DataPoint collection constructors)
- implement shortcut paths for identity transformations
- have common builder for Counter and CounterWithCallback
- simplify builders
- simplify Metrics/MetricsWithFixedMetadata (the Metrics interface is not really used or needed)
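For the "common builder" item, one hypothetical shape would be a single builder that can produce either variant, so the shared name/metadata plumbing is written once. Everything below is a sketch under that assumption; none of these names are the real client_java API:

```java
import java.util.function.DoubleSupplier;

class Counter {
    private final String name;
    private final DoubleSupplier callback; // null => mutable counter
    private double count;

    private Counter(String name, DoubleSupplier callback) {
        this.name = name;
        this.callback = callback;
    }

    void inc() {
        count++;
    }

    double get() {
        // a callback-backed counter reads its value lazily; a mutable
        // one returns the internally accumulated count
        return callback != null ? callback.getAsDouble() : count;
    }

    String getName() {
        return name;
    }

    static Builder builder(String name) {
        return new Builder(name);
    }

    // One builder for both variants: the common configuration (name,
    // and in a real library help text and labels) lives here once.
    static class Builder {
        private final String name;

        Builder(String name) {
            this.name = name;
        }

        Counter build() {
            return new Counter(name, null);
        }

        Counter buildWithCallback(DoubleSupplier cb) {
            return new Counter(name, cb);
        }
    }
}
```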
The main obstacle is that since we released the 1.x version we should maintain backwards compatibility.
I think it will be hard to keep these updates backwards compatible. It's possible to some extent, but you would still end up with something in between. Instead, I'd just start a 2.x (or 1.5, or whatever) line and maintain the two in parallel for some time. You said 1.1 is young, and I believe many users have not even started to migrate yet; they will simply begin with the new one, so backwards compatibility with 1.1 is not an issue for them.
See how softwaremill does it: https://mvnrepository.com/artifact/com.softwaremill.sttp.client https://mvnrepository.com/artifact/com.softwaremill.sttp.client3 https://mvnrepository.com/artifact/com.softwaremill.sttp.client4
And notice: while binary compatibility is somewhat problematic, source compatibility is already quite good. It's just a matter of some minor tweaks to migrate 1.1 code to the post-PR API.
We can discuss some details, but I wouldn't try to split this into separate PRs first, mainly because some of the features are impossible (or hard) to implement with the current 1.1 design.
m.
So I guess depending on the scenario there are different ways of implementing merge(), and I'm not sure if one of them should be in the Prometheus data model.
I was just trying to solve my own problem, which required even stricter merging that fails on any duplicate name. Actually, I believe most users would be happy with the approach described in OpenMetrics, and I think this is what Prometheus actually does when scraping (although it's not part of the spec). This is why everything worked fine with 0.16, which didn't bother validating and just passed multiple metrics to Prometheus, which merged everything nicely, and which is impossible with 1.1.
If this depends on the destination format, maybe the name validation could be a) made as weak as possible (just validate name + type, which seems quite reasonable), and b) additionally enforced at the very end by the exporter, which knows the destination format and can make it a bit stricter if really needed.
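The "weak" validation in a) could be sketched like this: a duplicate metric name passes as long as its declared type matches what was seen before, which mirrors OpenMetrics-style merging. The class and method names are illustrative, and types are plain Strings here:

```java
import java.util.HashMap;
import java.util.Map;

class NameTypeValidator {
    private final Map<String, String> seenTypes = new HashMap<>();

    // Returns true if the (name, type) pair is acceptable. Rejects
    // only a name that is re-registered with a conflicting type;
    // duplicates with a matching type pass through for later merging.
    boolean accept(String name, String type) {
        String previous = seenTypes.putIfAbsent(name, type);
        return previous == null || previous.equals(type);
    }
}
```

An exporter that targets a format forbidding duplicates entirely could then layer its own stricter check on top of this at output time.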