cloud-provider-openstack
[occm] Allow specifying the VIP of the LB during creation
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: The Octavia CLI (https://docs.openstack.org/python-octaviaclient/latest/cli/index.html#loadbalancer) accepts --vip-address to specify the VIP of the LB during creation. But in K8s with OpenStack (https://github.com/kubernetes/cloud-provider-openstack/blob/8a156e543ca44924a5f26aaf001fb86bcbd100f9/docs/openstack-cloud-controller-manager/using-openstack-cloud-controller-manager.md) we can only set the network-id and subnet-id of the LB; the VIP itself cannot be specified.
What you expected to happen: A new annotation to specify the VIP of the LB.
How to reproduce it: N/A
Anything else we need to know?: We have some services that do not run on the K8s cluster but live in the same private network and need to connect to the LB's VIP, which is why we want to be able to fix the VIP of the LB.
@syy6 The FIP cannot be set globally, unlike the network-id and subnet-id; it has to be set per service via the service spec: Spec.LoadBalancerIP. However, this is changing in PR #2451, which introduces a new loadbalancer.openstack.org/floating-ip annotation.
From my perspective, the term "VIP" (Virtual IP) in this context refers to a private IP address. PR #2451 was specifically tailored for public IP addresses, commonly known as floating IPs. Therefore, it falls outside the purview of PR #2451.
@kayrus I skimmed the code and found that the annotation loadbalancer.openstack.org/load-balancer-address is only written as a result and cannot be assigned (setting it has no effect). This annotation could probably be reused for the private LB case (like loadbalancer.openstack.org/network-id and loadbalancer.openstack.org/subnet-id), or maybe a new annotation could be introduced.
@yang-wang11 indeed, I mixed up FIP and VIP. For the VIP you have to use loadbalancer.openstack.org/port-id, which, unfortunately, is not in an IP format.
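For illustration, a sketch of that port-id approach, assuming a Neutron port was pre-created with the desired fixed IP (for example with `openstack port create --network <network-id> --fixed-ip subnet=<subnet-id>,ip-address=10.0.0.50 lb-vip-port`); the service name, port UUID, and selector are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fixed-vip-service       # placeholder name
  annotations:
    # UUID of the pre-created Neutron port that holds the desired fixed IP.
    loadbalancer.openstack.org/port-id: "<neutron-port-uuid>"  # placeholder
spec:
  type: LoadBalancer
  selector:
    app: my-app                 # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```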
You can also try playing with internal services: https://github.com/kubernetes/cloud-provider-openstack/blob/9e7794b9b2523bc2aef6393ba14fee35266f586f/pkg/openstack/loadbalancer.go#L260-L264
Thanks @dulek , I just tested and it works!
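For anyone landing here, a minimal sketch of the internal-service approach referenced above (names, selector, and IP are placeholders): the internal annotation keeps the LB off a floating IP, so loadBalancerIP requests a private VIP.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-vip-service    # placeholder name
  annotations:
    # Keep the load balancer internal, i.e. no floating IP is attached.
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.50     # requested private VIP (placeholder)
  selector:
    app: my-app                 # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```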
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I'm guessing we can close this.