vmware_exporter
use global metric registry
Use the global REGISTRY, in order to:
- add exporter runtime metrics (fixes https://github.com/pryorda/vmware_exporter/issues/9)
- add more metrics later (e.g. collection time for each object type)
This patch is intended to expose the internal metrics on /metrics only for the default section, so that people using multiple sections via ?section=something don't get duplicate metrics.
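For illustration, here is a minimal sketch of the idea (metric and function names are illustrative only, not the actual diff): the exporter's own runtime metrics live on prometheus_client's global REGISTRY, and that registry is only rendered for the default section.

```python
# Sketch only: VMware metrics stay in a per-request CollectorRegistry,
# while the process-wide REGISTRY (python_*, process_*, plus exporter
# runtime metrics) is appended only when no ?section=... override is used.
from prometheus_client import REGISTRY, CollectorRegistry, Summary, generate_latest

# Example exporter runtime metric, registered on the global REGISTRY.
COLLECTION_TIME = Summary(
    'vmware_exporter_collection_seconds',
    'Time spent collecting metrics from vCenter',
    ['object_type'],
)

def render_metrics(vmware_registry: CollectorRegistry, section: str) -> bytes:
    """Render the scrape response for one request."""
    output = generate_latest(vmware_registry)
    if section == 'default':
        # Only the default section gets the exporter's own metrics, so
        # users scraping ?section=something do not see duplicates.
        output += generate_latest(REGISTRY)
    return output
```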
First off, thanks for your contribution. Is there any way you could add tests for this?
Codecov Report
Merging #128 (8912ac8) into master (8cb2284) will increase coverage by 93.62%. The diff coverage is 75.00%.
@@ Coverage Diff @@
## master #128 +/- ##
===========================================
+ Coverage 0 93.62% +93.62%
===========================================
Files 0 4 +4
Lines 0 549 +549
===========================================
+ Hits 0 514 +514
- Misses 0 35 +35
| Impacted Files | Coverage Δ |
|---|---|
| vmware_exporter/vmware_exporter.py | 92.84% <75.00%> (ø) |
| vmware_exporter/__init__.py | 100.00% <0.00%> (ø) |
| vmware_exporter/defer.py | 97.22% <0.00%> (ø) |
| vmware_exporter/helpers.py | 100.00% <0.00%> (ø) |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.
Powered by Codecov. Last update 8cb2284...1b5af5e.
@pryorda ,
The prometheus client library adds many metrics (plus process metrics on Linux).
I was about to create a unit test that checks python_info only (value and labels).
I was also considering adding a test to ensure those metrics are published on the default section only.
Do you think we need more tests?
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 633.0
python_gc_objects_collected_total{generation="1"} 301.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 191.0
python_gc_collections_total{generation="1"} 17.0
python_gc_collections_total{generation="2"} 1.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="7",patchlevel="2",version="3.7.2"} 1.0
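For example, the python_info test could look roughly like this (a sketch assuming prometheus_client's REGISTRY.get_sample_value; the test name is illustrative):

```python
import platform
import sys

from prometheus_client import REGISTRY


def test_python_info_in_global_registry():
    # python_info is registered by prometheus_client itself on the global
    # REGISTRY; check its value and its labels for the running interpreter.
    value = REGISTRY.get_sample_value(
        'python_info',
        {
            'implementation': platform.python_implementation(),
            'major': str(sys.version_info.major),
            'minor': str(sys.version_info.minor),
            'patchlevel': str(sys.version_info.micro),
            'version': platform.python_version(),
        },
    )
    assert value == 1.0
```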
That should be fine.
Codecov Report
Merging #128 (a509e07) into main (802240c) will increase coverage by 1.34%. The diff coverage is 86.04%.
@@ Coverage Diff @@
## main #128 +/- ##
==========================================
+ Coverage 80.29% 81.64% +1.34%
==========================================
Files 4 4
Lines 873 888 +15
==========================================
+ Hits 701 725 +24
+ Misses 172 163 -9
| Impacted Files | Coverage Δ | |
|---|---|---|
| vmware_exporter/defer.py | 97.22% <ø> (ø) | |
| vmware_exporter/helpers.py | 72.72% <20.00%> (+11.53%) | :arrow_up: |
| vmware_exporter/vmware_exporter.py | 81.48% <94.59%> (+0.39%) | :arrow_up: |
| vmware_exporter/__init__.py | 100.00% <100.00%> (ø) | |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.
Powered by Codecov. Last update a202941...a509e07.
Hello Daniel,
Does this look good to you, @pryorda?
Hey @finkr,
Appreciate the work on this. I'll work on getting it tested and updated.
Thanks for your patience on this.
Hello Daniel,
I looked at other issues and pull requests (making the default section optional).
Also, the exporter should always return a page with the exporter's own metrics and avoid HTTP 500 errors with an empty page. That's not easy to do with a single page that carries both the VMware metrics and the exporter's own metrics.
I am seriously considering moving the exporter's own metrics to a separate page.
This means users would have to configure two or more targets in Prometheus.
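For illustration, a rough sketch of what that separate page could look like with Twisted, which the exporter already uses (the route name and port are illustrative, this is not a proposed patch):

```python
# Sketch only: a dedicated resource serving the exporter's own metrics from
# the global REGISTRY, independent of the vCenter scraping path, so it can
# always answer 200 even when a vCenter scrape raises an exception.
from prometheus_client import CONTENT_TYPE_LATEST, REGISTRY, generate_latest
from twisted.internet import reactor
from twisted.web import resource, server


class ExporterMetricsResource(resource.Resource):
    """Serve only the exporter's own runtime metrics."""
    isLeaf = True

    def render_GET(self, request):
        request.setHeader(b'Content-Type', CONTENT_TYPE_LATEST.encode())
        return generate_latest(REGISTRY)


root = resource.Resource()
root.putChild(b'exporter_metrics', ExporterMetricsResource())  # exporter's own metrics
# root.putChild(b'metrics', ...)  # the existing VMware metrics page stays as-is

if __name__ == '__main__':
    reactor.listenTCP(9272, server.Site(root))
    reactor.run()
```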
Sorry for the delay. What would happen if you hit the endpoint multiple times quickly?
@pryorda, handling a single Prometheus registry containing both the VMware metrics and the exporter's own metrics proves to be quite challenging, especially when it comes to ensuring that we still provide the technical metrics when an exception is raised while scraping the vCenter.
I have some work in progress to split the exporter's own metrics into a separate endpoint (which means users will have to configure two Prometheus jobs, and they may unfortunately forget to do so).
I'll try to submit that work soon.
Sorry for the delay. I'm definitely not a fan of moving to two endpoints. Do you know if there is a best practice for service metrics vs. VMware metrics?
For exporters that are designed to expose metrics from multiple targets, it seems common to have one endpoint for the exporter's own metrics and another endpoint for the targets:
| Exporter name | own metrics | target metrics |
|---|---|---|
| snmp_exporter | /metrics | /snmp |
| blackbox_exporter | /metrics | /probe |
| json_exporter | /metrics | /probe |
| influxdb_exporter | /metrics/exporter | /metrics |
| openstack-exporter | /metrics | /probe |
| modbus exporter | /metrics | /modbus |
Some exceptions I found, where all metrics are exposed at once: statsd_exporter, pushgateway.
ipmi_exporter is a bit different: local IPMI interfaces are exposed under /metrics (along with the exporter's own metrics), whereas remote IPMI targets are exposed under /ipmi?target=.