                        [occm] loadbalancer doesn't honor the `spec.loadBalancerIP` on service update
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Documentation says:
Sometimes it's useful to use an existing available floating IP rather than creating a new one, especially in the automation scenario. In the example below, 122.112.219.229 is an available floating IP created in the OpenStack Networking service.
NOTE: If 122.112.219.229 is not available, a new floating IP will be created automatically from the configured public network. If 122.112.219.229 is already associated with another port, the Service creation will fail.
However, once I create a Service with no spec.loadBalancerIP defined, OCCM assigns a random FIP to the LB. If I then try to change the FIP on the existing Service by setting spec.loadBalancerIP, the field persists, but the requested FIP is not assigned.
What you expected to happen:
I expect OCCM to assign the requested FIP once spec.loadBalancerIP is changed.
How to reproduce it:
- create a service without a FIP
- set spec.loadBalancerIP to the desired value
- observe that the FIP is not changed (see the sketch below)
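A minimal client-go sketch of these steps, assuming a kubeconfig in the default location; the default namespace, the demo-lb name and selector, and the reuse of 122.112.219.229 from the documentation are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	svcs := kubernetes.NewForConfigOrDie(cfg).CoreV1().Services("default")

	// Step 1: create a LoadBalancer Service without spec.loadBalancerIP,
	// so OCCM picks a floating IP from the configured public network.
	svc, err := svcs.Create(context.TODO(), &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lb"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Step 2: set spec.loadBalancerIP to an existing, available FIP.
	svc.Spec.LoadBalancerIP = "122.112.219.229"
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Step 3: the field update is accepted, but the LB keeps its original
	// FIP -- compare kubectl get svc demo-lb with the FIPs in OpenStack.
	fmt.Println("spec.loadBalancerIP updated; verify the LB's floating IP in Neutron")
}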
Anything else we need to know?:
https://github.com/kubernetes/cloud-provider-openstack/blob/ed517e1f07057cd439542c73bff12102d8849dde/pkg/openstack/loadbalancer.go#L1039-L1041
see also #2377, where this bug is seen more clearly.
Environment:
- openstack-cloud-controller-manager (or other related binary) version: master
- OpenStack version: ???
- Others: ???
@kayrus spec.LoadbalancerIP is deprecated, see https://github.com/kubernetes/cloud-provider-gcp/issues/371 https://github.com/kubernetes/kubernetes/pull/107235
So I think the annotation for openstack could be something like loadbalancer.openstack.org/floating-ip
All of these https://github.com/search?q=repo%3Akubernetes%2Fcloud-provider-openstack%20loadbalancerip&type=code should be removed and migrated to the new way of doing things.
Yes, I agree with @zetaab: if possible, we can adopt the new way of assigning the floating IP for the LB. Nevertheless, one thing I would like to clarify is that "Field Service.Spec.LoadBalancerIP is deprecated, and it will not be deleted".
I have verified the feature of updating the Octavia LB with a single floating IP, as described in the documentation, and the result is that the FIP of this LB has been changed.
I would like to propose fixing this issue to ensure consistency with the documentation. I've noticed that the logic currently only allows newly created LBs to use the LoadBalancerIP. Perhaps we could consider a hot-fix similar to the method described below.
if (floatIP == nil && loadBalancerIP != "") || (floatIP != nil && floatIP.FloatingIP != loadBalancerIP) {
    // check & attach the FloatingIP to LB
}
From my perspective it would be best to switch to the annotation first. It doesn't hurt to keep using the existing field, but note that very soon we'll be adding dual-stack LB support. There I'll advocate ignoring spec.loadBalancerIP completely and only using annotations.
I believe the annotation approach mirrors the existing method (the same logic). For this issue, I intend to address it ASAP. Additionally, I'm aware @kayrus is working on a significant refactor for the enhancement; maybe the support for dual-stack LBs via annotations can be encompassed within that PR or a later one.
It seems PR #2451 intends to fix this the older way, while we actually want to use loadbalancer.openstack.org/floating-ip. Are we good with a temporary fix first, then going with the annotation?
@jichenjc, I've incorporated the annotation approach and adjusted the priority for compatibility purposes.
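For reference, the compatibility precedence could look roughly like the sketch below. This is illustrative only, not the actual code from PR #2451; the desiredFloatingIP helper name is made up:

package openstack

import corev1 "k8s.io/api/core/v1"

// Annotation key proposed in this thread.
const annotationFloatingIP = "loadbalancer.openstack.org/floating-ip"

// desiredFloatingIP is a hypothetical helper: the new annotation takes
// precedence, while the deprecated spec.loadBalancerIP field is kept as a
// fallback for backward compatibility.
func desiredFloatingIP(svc *corev1.Service) string {
	if ip := svc.Annotations[annotationFloatingIP]; ip != "" {
		return ip
	}
	return svc.Spec.LoadBalancerIP
}

Resolving the annotation first would let new users migrate to the annotation while existing Services that rely on the deprecated field keep working.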
I notice PR #2451 introduces a new annotation, loadbalancer.openstack.org/floating-ip, and I like this change. But it seems similar in semantics to service.beta.kubernetes.io/openstack-internal-load-balancer. IMO, if an LB has a floating IP it is external, and vice versa. So should we deprecate service.beta.kubernetes.io/openstack-internal-load-balancer after introducing loadbalancer.openstack.org/floating-ip? I strongly suggest that we first clarify the definitions and relationship of loadbalancer.openstack.org/floating-ip, service.beta.kubernetes.io/openstack-internal-load-balancer, and service.spec.loadBalancerIP. It all seems a bit untidy now.
@jeffyjf Well, if you look at these annotations https://github.com/kubernetes/cloud-provider-openstack/blob/eeba48501bf743e3992093bd806a513ad103a347/pkg/openstack/loadbalancer.go#L69-L103, there are not many service.beta.kubernetes.io annotations, which means that new ones will be using loadbalancer.openstack.org.
@zetaab I agree with you; there are indeed not many service.beta.kubernetes.io annotations. I just think the service.beta.kubernetes.io/openstack-internal-load-balancer annotation will become redundant after we introduce loadbalancer.openstack.org/floating-ip: a Service with a floating IP would be external, and a Service without one would be internal.
It does not always mean that. When you create a new load balancer and want a new floating IP provisioned, you do not need to define the loadbalancer.openstack.org/floating-ip annotation at all. It is only useful when someone wants to reuse an existing IP.
Yep, what Jesse says. All LBs are external in CPO by default; we cannot change that. We could probably make setting loadbalancer.openstack.org/floating-ip=None mean an internal LB, but I don't see much value in the effort. Current code doing internal LBs is super complicated already due to shared LBs.
Thanks @zetaab @dulek. I got it.
Current code doing internal LBs is super complicated already due to shared LBs.
I really agree with that.
The annotation proposal from @kayrus has been revised as follows:
- loadbalancer.openstack.org/floatingip: designed for dual stack
- loadbalancer.openstack.org/floatingip-v4: same as spec.loadBalancerIP
From my perspective, if the intention is to utilize annotations for dual-stack support, maintaining a single, clear annotation should be considered, for simplicity and to avoid confusion.
And how would that work if I have a dual-stack setup and also specify loadbalancer.openstack.org/floatingip-v4? Which annotation should it use?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Still valid to me.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.