[occm] Fix floating IP allocation for shared internal load balancers
What this PR does / why we need it: Fixes an issue where the OpenStack Cloud Controller Manager incorrectly attempts to allocate floating IPs when sharing internal load balancers between services. This causes spurious warnings and potential failures when using the load balancer sharing feature with internal services.
What happened:
When sharing an internal load balancer (marked with `service.beta.kubernetes.io/openstack-internal-load-balancer: "true"`), the controller would still try to attach a floating IP to the shared load balancer, even though internal load balancers should not have floating IPs by design.
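For context, the failing setup pairs the internal-LB annotation with load balancer sharing. A minimal sketch of how such a Service might be annotated, assuming the `loadbalancer.openstack.org/load-balancer-id` sharing annotation documented by cloud-provider-openstack and a hypothetical LB ID:

```go
package main

import "fmt"

// Annotations of a Service that shares another Service's internal load
// balancer. The internal-LB annotation is quoted from this PR; the
// sharing annotation is the one cloud-provider-openstack documents for
// reusing an existing load balancer, and the ID value is hypothetical.
func main() {
	svcAnnotations := map[string]string{
		"service.beta.kubernetes.io/openstack-internal-load-balancer": "true",
		"loadbalancer.openstack.org/load-balancer-id":                 "3f0a6c4d-0000-0000-0000-000000000000",
	}

	// Before this fix, reconciling a Service annotated like this still
	// attempted a floating IP attachment on the shared internal LB.
	for k, v := range svcAnnotations {
		fmt.Printf("%s: %q\n", k, v)
	}
}
```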
What this PR does:
- Adds a check in `ensureFloatingIP` to skip floating IP operations for shared internal load balancers (see the sketch after this list)
- Returns the VIP address directly for internal load balancers when the service is not the load balancer owner
- Prevents unnecessary floating IP allocation attempts while preserving existing behavior for all other scenarios
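The change reduces to an early return guarding the floating IP path. A minimal sketch of that guard, assuming pared-down stand-in types; the real `ensureFloatingIP` in the OCCM load balancer code takes more parameters and runs the full floating IP workflow below the guard:

```go
package main

import "fmt"

// serviceConfig and loadBalancer are simplified stand-ins for the
// controller's real types; only the fields named in this PR are kept.
type serviceConfig struct {
	internal bool // set from the openstack-internal-load-balancer annotation
}

type loadBalancer struct {
	VipAddress string
}

// ensureFloatingIP sketches the added guard: a shared internal load
// balancer skips floating IP handling entirely and returns the VIP.
func ensureFloatingIP(svcConf *serviceConfig, lb *loadBalancer, isLBOwner bool) (string, error) {
	if svcConf.internal && !isLBOwner {
		// Internal LBs have no floating IP by design; a non-owner
		// sharing the LB simply reuses the existing VIP address.
		return lb.VipAddress, nil
	}
	// All other cases fall through to the pre-existing floating IP
	// allocation/attachment logic (elided in this sketch; the real
	// code would return the floating IP address here).
	return lb.VipAddress, nil
}

func main() {
	addr, _ := ensureFloatingIP(&serviceConfig{internal: true}, &loadBalancer{VipAddress: "10.0.0.5"}, false)
	fmt.Println(addr) // prints the VIP; no floating IP is allocated
}
```

External load balancers and load balancer owners never satisfy the guard condition, which is why every other scenario is left untouched.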
Which issue this PR fixes (if applicable): fixes #2891
Special notes for reviewers:
This change only affects the code path for services that are both internal (`svcConf.internal` is true) and not the load balancer owner (`isLBOwner` is false). All existing behavior for external load balancers and load balancer owners remains unchanged.
Release note:
NONE
Welcome @skartikey!
It looks like this is your first PR to kubernetes/cloud-provider-openstack 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes/cloud-provider-openstack has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @skartikey. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign anguslees for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
- Approvers can indicate their approval by writing `/approve` in a comment
- Approvers can cancel approval by writing `/approve cancel` in a comment
/ok-to-test
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Has there been any update on this?
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten