Fix/prometheus metadata sorting
Description
This PR adds Prometheus metadata to all metric types, so that Prometheus can include the TYPE and HELP information when scraping the endpoint.
See more details in #12110.
Fixes: #12110
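For context, a minimal sketch of the kind of rendering this enables: emitting the `# HELP` and `# TYPE` comment lines immediately before each sample. The class and method names below (`PrometheusSample`, `toTextFormat`) are illustrative assumptions, not the actual `PrometheusExporterImpl` code.

```java
// Hypothetical sketch: renders one metric with Prometheus metadata.
// Names are illustrative, not taken from the CloudStack code base.
public class PrometheusSample {
    private final String name;   // e.g. "cloudstack_domain_limit_cpu_cores_total"
    private final String help;   // human-readable description for # HELP
    private final String type;   // "gauge", "counter", ...
    private final String labels; // pre-rendered label set, may be empty
    private final double value;

    public PrometheusSample(String name, String help, String type, String labels, double value) {
        this.name = name;
        this.help = help;
        this.type = type;
        this.labels = labels;
        this.value = value;
    }

    // Emits the metadata comments directly before the sample line, which is
    // what lets Prometheus pick up TYPE and HELP when scraping the endpoint.
    public String toTextFormat() {
        StringBuilder sb = new StringBuilder();
        sb.append("# HELP ").append(name).append(' ').append(help).append('\n');
        sb.append("# TYPE ").append(name).append(' ').append(type).append('\n');
        sb.append(name);
        if (labels != null && !labels.isEmpty()) {
            sb.append('{').append(labels).append('}');
        }
        sb.append(' ').append(value).append('\n');
        return sb.toString();
    }
}
```

Grouping samples by metric name before rendering keeps the HELP/TYPE pair emitted once per metric, which matches the sorted output shown further down.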
Types of changes
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
- [ ] Build/CI
- [ ] Test (unit or integration test code)
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
- [ ] Major
- [x] Minor
Bug Severity
- [ ] BLOCKER
- [ ] Critical
- [ ] Major
- [x] Minor
- [ ] Trivial
Screenshots (if appropriate):
How Has This Been Tested?
I've tested the code change with a simple setup (I didn't get everything running; @abh1sar, could you test this more fully with your extra Prometheus metrics, as my dev environment is still not working correctly?). I've confirmed that the TYPE and HELP information is added to the nicely sorted list of metrics the exporter provides.
How did you try to break this feature and the system with this change?
I've enabled the Prometheus exporter and queried the endpoint multiple times to verify that the data comes out like this:
# Cloudstack Prometheus Metrics
# HELP cloudstack_domain_limit_cpu_cores_total Total CPU core limit across all domains
# TYPE cloudstack_domain_limit_cpu_cores_total gauge
cloudstack_domain_limit_cpu_cores_total 0
# HELP cloudstack_domain_limit_memory_mibs_total Total memory limit in MiB across all domains
# TYPE cloudstack_domain_limit_memory_mibs_total gauge
cloudstack_domain_limit_memory_mibs_total 0
# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="memory"} 0
cloudstack_domain_resource_count{domain="/", type="cpu"} 0
cloudstack_domain_resource_count{domain="/", type="gpu"} 0
cloudstack_domain_resource_count{domain="/", type="primary_storage"} 0
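For anyone who wants to repeat this check programmatically, here is a hedged sketch that scrapes the exporter and verifies every sample line has matching HELP and TYPE metadata; the endpoint URL and port are assumptions and may need adjusting to your setup.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HashSet;
import java.util.Set;

// Hypothetical verification helper, not part of this PR.
public class MetadataCheck {
    public static void main(String[] args) throws Exception {
        // Endpoint is an assumption for a local test setup; adjust as needed.
        String url = "http://localhost:9595/metrics";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        Set<String> helped = new HashSet<>();
        Set<String> typed = new HashSet<>();
        for (String line : resp.body().split("\n")) {
            if (line.startsWith("# HELP ")) {
                helped.add(line.split(" ")[2]);
            } else if (line.startsWith("# TYPE ")) {
                typed.add(line.split(" ")[2]);
            } else if (!line.isBlank() && !line.startsWith("#")) {
                // The metric name is everything before the first '{' or space.
                String name = line.split("[ {]")[0];
                if (!helped.contains(name) || !typed.contains(name)) {
                    System.out.println("Missing metadata for: " + name);
                }
            }
        }
        System.out.println("Checked " + typed.size() + " metrics with TYPE metadata.");
    }
}
```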
Congratulations on your first Pull Request and welcome to the Apache CloudStack community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/cloudstack/blob/main/CONTRIBUTING.md). Here are some useful points:
- In case of a new feature add useful documentation (raise doc PR at https://github.com/apache/cloudstack-documentation)
- Be patient and persistent. It might take some time to get a review or get the final approval from the committers.
- Pay attention to the quality of your code, ensure tests are passing and your PR doesn't have conflicts.
- Please follow ASF Code of Conduct for all communication including (but not limited to) comments on Pull Requests, Issues, Mailing list and Slack.
- Be sure to read the CloudStack Coding Conventions.

Apache CloudStack is a community-driven project and together we are making it better 🚀.

In case of doubts contact the developers at:
Mailing List: [email protected] (https://cloudstack.apache.org/mailing-lists.html)
Slack: https://apachecloudstack.slack.com/
@blueorangutan package
@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Codecov Report
:x: Patch coverage is 0% with 43 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 17.56%. Comparing base (6dc259c) to head (8851dd9).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...che/cloudstack/metrics/PrometheusExporterImpl.java | 0.00% | 43 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #12112 +/- ##
=========================================
Coverage 17.55% 17.56%
- Complexity 15535 15541 +6
=========================================
Files 5911 5911
Lines 529359 529377 +18
Branches 64655 64656 +1
=========================================
+ Hits 92949 92993 +44
+ Misses 425952 425924 -28
- Partials 10458 10460 +2
| Flag | Coverage Δ | |
|---|---|---|
| uitests | 3.58% <ø> (ø) | |
| unittests | 18.63% <0.00%> (+<0.01%) | :arrow_up: |
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 15807
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
[SF] Trillian Build Failed (tid-14851)
[SF] Trillian Build Failed (tid-14866)
[SF] Trillian test result (tid-14870)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 49860 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr12112-t14870-kvm-ol8.zip
Smoke tests completed. 150 look OK, 0 have errors, 0 did not run
Only failed and skipped test results are shown below:
| Test | Result | Time (s) | Test File |
|---|---|---|---|
@kiranchavala @NuxRo , can you guys have a look at this?
@Sinscerly , given comment https://github.com/apache/cloudstack/pull/12112#discussion_r2618715937, will you work on this more or do you want this merged as is? cc @kiranchavala @NuxRo @nvazquez