
[occm] loadbalancer doesn't honor the `spec.loadBalancerIP` on service update

Open kayrus opened this issue 2 years ago • 18 comments

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Documentation says:

Sometimes it's useful to use an existing available floating IP rather than creating a new one, especially in the automation scenario. In the example below, 122.112.219.229 is an available floating IP created in the OpenStack Networking service.

NOTE: If 122.112.219.229 is not available, a new floating IP will be created automatically from the configured public network. If 122.112.219.229 is already associated with another port, the Service creation will fail.

However, once I create a Service with no spec.loadBalancerIP defined, OCCM assigns a random FIP to the LB. If I then change the FIP on the existing Service by setting spec.loadBalancerIP, the setting persists in the spec, but the requested FIP is never actually assigned.

What you expected to happen:

I expect OCCM to assign the FIP specified in spec.loadBalancerIP once that field is changed.

How to reproduce it:

  • create a Service without a FIP
  • set spec.loadBalancerIP to the desired value
  • observe that the FIP is not changed

Anything else we need to know?:

https://github.com/kubernetes/cloud-provider-openstack/blob/ed517e1f07057cd439542c73bff12102d8849dde/pkg/openstack/loadbalancer.go#L1039-L1041
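
Roughly speaking, the requested loadBalancerIP only matters while the LB has no FIP attached yet, so it is ignored on update. A minimal, self-contained sketch of that shape (the type and function names below are illustrative stand-ins, not the actual CPO code):

package main

import "fmt"

// floatingIP is a stand-in for the Neutron floating IP object OCCM tracks.
type floatingIP struct {
    FloatingIP string
}

// ensureFIP mimics the create-only behavior: the requested loadBalancerIP is
// only honored when the LB has no floating IP attached yet.
func ensureFIP(current *floatingIP, loadBalancerIP string) *floatingIP {
    if current == nil && loadBalancerIP != "" {
        // only on creation is the requested FIP looked up and attached
        return &floatingIP{FloatingIP: loadBalancerIP}
    }
    // on update the existing FIP is kept, even if it differs from loadBalancerIP
    return current
}

func main() {
    // Service created without spec.loadBalancerIP: OCCM allocated a random FIP.
    fip := &floatingIP{FloatingIP: "203.0.113.10"}
    // The user later sets spec.loadBalancerIP on the existing Service.
    fip = ensureFIP(fip, "122.112.219.229")
    fmt.Println(fip.FloatingIP) // still 203.0.113.10 -> the reported behavior
}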

see also #2377, where this bug is seen more clearly.

Environment:

  • openstack-cloud-controller-manager(or other related binary) version: master
  • OpenStack version: ???
  • Others: ???

kayrus avatar Oct 19 '23 15:10 kayrus

@kayrus spec.loadBalancerIP is deprecated; see https://github.com/kubernetes/cloud-provider-gcp/issues/371 and https://github.com/kubernetes/kubernetes/pull/107235

So I think the annotation for openstack could be something like loadbalancer.openstack.org/floating-ip

All of these usages (https://github.com/search?q=repo%3Akubernetes%2Fcloud-provider-openstack%20loadbalancerip&type=code) should be removed and migrated to the new way of doing things.
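
As a rough illustration of that migration, the annotation could take precedence, with the deprecated field kept only as a backward-compatibility fallback. The helper below is a hypothetical sketch, not existing CPO code:

package openstack

import corev1 "k8s.io/api/core/v1"

// desiredFloatingIP illustrates one possible migration: prefer the proposed
// loadbalancer.openstack.org/floating-ip annotation and fall back to the
// deprecated spec.loadBalancerIP only for backward compatibility.
func desiredFloatingIP(svc *corev1.Service) string {
    if ip, ok := svc.Annotations["loadbalancer.openstack.org/floating-ip"]; ok && ip != "" {
        return ip
    }
    return svc.Spec.LoadBalancerIP
}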

zetaab avatar Oct 22 '23 15:10 zetaab

Yes, I agree with @zetaab: if possible, we can adopt the new way of assigning the floating IP for the LB. Nevertheless, one thing I would like to clarify is that "Field Service.Spec.LoadBalancerIP is deprecated, and it will not be deleted".

I have verified the feature of updating the Octavia LB with a single floating IP, as described in the documentation, and the result is that the FIP of this LB was changed.

I would like to propose fixing this issue to ensure consistency with the documentation. I've noticed that the logic currently only allows newly created LBs to use the LoadBalancerIP. Perhaps we could consider a hot-fix similar to the method described below.

if (floatIP == nil && loadBalancerIP != "") || (floatIP != nil && floatIP.FloatingIP != loadBalancerIP) {
    // the LB has no FIP yet, or its FIP differs from the requested one:
    // check & attach the requested FloatingIP to the LB
}
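
Spelling out what "check & attach" would roughly involve on the update path (a sketch of the intended steps only, not actual CPO code):

if (floatIP == nil && loadBalancerIP != "") || (floatIP != nil && floatIP.FloatingIP != loadBalancerIP) {
    // 1. look up the floating IP matching loadBalancerIP in Neutron
    // 2. fail if it is already associated with another port, mirroring the
    //    documented behavior for Service creation
    // 3. if the LB already has a different FIP attached, detach it first
    // 4. attach the requested FIP to the LB's VIP port
}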

yang-wang11 avatar Oct 23 '23 02:10 yang-wang11

From my perspective it would be best to switch to the annotation first. It doesn't hurt to keep supporting the existing field, but note that very soon we'll be adding dual-stack LB support, and there I'll advocate ignoring spec.loadBalancerIP completely and only using annotations.

dulek avatar Oct 24 '23 11:10 dulek

I believe the annotation approach mirrors the existing method (the same logic). For this issue, I intend to address it ASAP. Additionally, I'm aware @kayrus is working on a significant refactoring for the enhancement; maybe the support for dual-stack LBs via annotations can be encompassed within that PR or a later one.

yang-wang11 avatar Oct 26 '23 07:10 yang-wang11

It seems PR #2451 intends to fix this the older way, while we actually want to use loadbalancer.openstack.org/floating-ip. Are we good with a temporary fix first and then moving to the annotation?

jichenjc avatar Oct 30 '23 04:10 jichenjc

@jichenjc, I've incorporated the annotation approach and adjusted the priority for compatibility purposes.

yang-wang11 avatar Oct 30 '23 07:10 yang-wang11

I notice that PR #2451 introduces a new annotation, loadbalancer.openstack.org/floating-ip, and I like this change. But it seems semantically similar to service.beta.kubernetes.io/openstack-internal-load-balancer. IMO, if an LB has a floating IP it is external, and vice versa. So should we deprecate service.beta.kubernetes.io/openstack-internal-load-balancer after we introduce loadbalancer.openstack.org/floating-ip? I strongly suggest that we first clarify the definitions of, and relationship between, loadbalancer.openstack.org/floating-ip, service.beta.kubernetes.io/openstack-internal-load-balancer, and service.spec.loadBalancerIP. It seems a bit untidy now.

jeffyjf avatar Nov 01 '23 02:11 jeffyjf

@jeffyjf well, if you look at these annotations https://github.com/kubernetes/cloud-provider-openstack/blob/eeba48501bf743e3992093bd806a513ad103a347/pkg/openstack/loadbalancer.go#L69-L103 there are not many service.beta.kubernetes.io annotations, which means that new ones will be using loadbalancer.openstack.org.

zetaab avatar Nov 01 '23 06:11 zetaab

@zetaab I agree with you; I also think there are not many service.beta.kubernetes.io annotations. I just think the annotation service.beta.kubernetes.io/openstack-internal-load-balancer will become redundant after we introduce loadbalancer.openstack.org/floating-ip: a Service with a floating IP is external, and a Service without one is internal.

jeffyjf avatar Nov 01 '23 08:11 jeffyjf

It does not always mean that. When you create a new loadbalancer and want a new floating IP provisioned, you do not need to define the loadbalancer.openstack.org/floating-ip annotation at all. It is only useful when someone wants to reuse an existing IP.

zetaab avatar Nov 01 '23 08:11 zetaab

Yep, what Jesse says. All LBs are external in CPO by default. We cannot change that. We could probably make setting loadbalancer.openstack.org/floating-ip=None internal, but I don't exactly see that much value in the effort. Current code doing internal LBs is super complicated already due to shared LBs.

dulek avatar Nov 08 '23 17:11 dulek

Thanks @zetaab @dulek. I got it.

Current code doing internal LBs is super complicated already due to shared LBs.

I really agree with that.

jeffyjf avatar Nov 09 '23 11:11 jeffyjf

The annotation proposal from @kayrus has been revised as follows:

  • loadbalancer.openstack.org/floatingip: designed for dual stack
  • loadbalancer.openstack.org/floatingip-v4: same as spec.loadBalancerIP

From my perspective, if the intention is to utilize annotations for dual-stack support, maintaining a single, clear annotation should be considered for simplicity and to avoid confusion.
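
For illustration only, if both annotations were kept, the controller would have to pick one of them, e.g. with a precedence like the hypothetical sketch below (not decided behavior):

// pickFloatingIP is a purely hypothetical illustration of one possible
// precedence if both annotations were kept: the per-family key overrides the
// generic one for IPv4.
func pickFloatingIP(annotations map[string]string) string {
    if ip := annotations["loadbalancer.openstack.org/floatingip-v4"]; ip != "" {
        return ip
    }
    return annotations["loadbalancer.openstack.org/floatingip"]
}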

yang-wang11 avatar Nov 22 '23 08:11 yang-wang11

And how would that work if I have a dual-stack setup and also specify loadbalancer.openstack.org/floatingip-v4? Which annotation should it use?

zetaab avatar Nov 22 '23 12:11 zetaab

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 20 '24 13:02 k8s-triage-robot

/remove-lifecycle stale

Still valid to me.

dulek avatar Feb 28 '24 17:02 dulek

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 28 '24 18:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jun 27 '24 18:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jul 27 '24 18:07 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jul 27 '24 18:07 k8s-ci-robot