aws-load-balancer-controller
Add metrics for managed load balancer resources
Issue
https://github.com/kubernetes-sigs/aws-load-balancer-controller
Description
I'm adding metrics so that users can view which Kubernetes Services and Ingresses are annotated for the aws-load-balancer-controller. When Services and Ingresses are annotated properly, the corresponding load balancer is created and a gauge counts how many of these exist.
The Prometheus gauge is tagged with:
- the name of the load balancer
- the type of load balancer (network or application)
- the namespace of the Kubernetes service/ingress
- the name of the Kubernetes service/ingress
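The label scheme above can be sketched as follows. This is a hedged, self-contained stand-in: a map keyed by the four labels rather than the actual `prometheus/client_golang` GaugeVec the PR would use, and every name (`labels`, `gauge`, the field names) is hypothetical, not taken from the controller's code.

```go
package main

import "fmt"

// labels identifies one managed load balancer. The fields mirror the four
// gauge labels described above; the struct itself is illustrative only.
type labels struct {
	LoadBalancerName string
	LoadBalancerType string // "network" or "application"
	Namespace        string // namespace of the Kubernetes Service/Ingress
	ResourceName     string // name of the Kubernetes Service/Ingress
}

// gauge is a stand-in for a Prometheus GaugeVec: one value per label set.
type gauge map[labels]float64

func (g gauge) set(l labels, v float64) { g[l] = v }
func (g gauge) delete(l labels)         { delete(g, l) }

func main() {
	g := gauge{}
	// A reconciled Ingress creates an ALB; the gauge records it.
	g.set(labels{"my-alb", "application", "web", "shop-ingress"}, 1)
	fmt.Println(len(g)) // prints 1
}
```

Note that because every distinct label combination becomes its own time series, cardinality grows with the number of managed load balancers — which is the concern raised in the review below.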

Checklist
- [ ] Added tests that cover your change (if possible)
- [ ] Added/modified documentation as required (such as the README.md, or the docs directory)
- [x] Manually tested
- [x] Made sure the title of the PR is a good description that can go into the release notes
BONUS POINTS checklist: complete for good vibes and maybe prizes?! :exploding_head:
- [ ] Backfilled missing tests for code in same general area :tada:
- [ ] Refactored something and made the world a better place :star2:
The committers listed above are authorized under a signed CLA.
- :white_check_mark: login: drewgonzales360 / name: Drew Gonzales (3331b78d883cc8699a40ec295d8fff47a6ad012a, 7f1badd92ffcc91165db030c593c1b3f573b02bd, 44bb0688c9955cdeddf8dfc84aa1192032cb65af, b2d0d455d93e2903af232b219d3b80de5de7f3da, 0ec1c16739bdf10ecc83f8bc4708857be250648a, 8e0fb21fae152163afadcc19f7f69e7c5f6fc4fa)
Welcome @drewgonzales360!
It looks like this is your first PR to kubernetes-sigs/aws-load-balancer-controller 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/aws-load-balancer-controller has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @drewgonzales360. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I don't quite see how this is worth the metrics cardinality it uses. It seems likely the report could be generated simply by querying the Kubernetes API for resources of the appropriate types and examining their statuses.
@johngmyers for one cluster, yeah I agree that asking the API server is fine. But when you run hundreds of clusters, it's easier to look at a dashboard.
I'm watching for news on this one. :)
Hey y'all! I was going through my backlog and found this sitting around. Gonna try and get a review 😄
/assign oliviassss
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: drewgonzales360. Once this PR has been reviewed and has the lgtm label, please ask for approval from oliviassss. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
I'm gonna close this for now since I don't need it anymore. If anyone wants to pick this up, feel free to create a new branch from this.