Loadbalancer not created in its specified subnet
/kind bug
What steps did you take and what happened: I am trying to create a cluster where the load balancer is in a different subnet from the cluster nodes. The OpenStackCluster CR is defined as follows:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: capi-test
  namespace: default
spec:
  apiServerLoadBalancer:
    enabled: true
    network:
      id: <MY-NETWORK-ID>
    subnets:
      - id: <LB-SUBNET-ID>
  identityRef:
    cloudName: openstack
    name: capi-test-cloud-config
  network:
    id: <MY-NETWORK-ID>
  subnets:
    - id: <NODES-SUBNET-ID>
  disableExternalNetwork: true
  disableAPIServerFloatingIP: true
```
What did you expect to happen: I expected the load balancer to be created in <LB-SUBNET-ID>, but instead it was created in <NODES-SUBNET-ID>.
Anything else you would like to add: I think the reason is that the function responsible for creating the load balancer gets the subnet ID from the state, and in my case the state is only populated after the load balancer and its subresources (listeners, pools, etc.) are created.
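For illustration only, here is a minimal Go sketch of the ordering problem described above (hypothetical, simplified types and function names, not the actual CAPO code): the load balancer is built from a subnet ID taken out of the reconciled cluster state, which at that point still holds the node subnet rather than the subnet given under `apiServerLoadBalancer.subnets`.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the OpenStackCluster spec and status.
type SubnetParam struct{ ID string }

type ClusterSpec struct {
	APIServerLBSubnets []SubnetParam // spec.apiServerLoadBalancer.subnets
	NodeSubnets        []SubnetParam // spec.subnets
}

type ClusterStatus struct {
	NetworkSubnetID string // populated during network reconciliation
}

// createLoadBalancer mimics the suspected flow: it only consults the status,
// which at this point holds the node subnet, not the LB subnet from the spec.
func createLoadBalancer(status ClusterStatus) string {
	return status.NetworkSubnetID
}

func main() {
	spec := ClusterSpec{
		APIServerLBSubnets: []SubnetParam{{ID: "<LB-SUBNET-ID>"}},
		NodeSubnets:        []SubnetParam{{ID: "<NODES-SUBNET-ID>"}},
	}

	// Network reconciliation runs first and records the node subnet in status.
	status := ClusterStatus{NetworkSubnetID: spec.NodeSubnets[0].ID}

	// The LB is then created from status, ignoring spec.apiServerLoadBalancer.subnets.
	fmt.Println("LB created in subnet:", createLoadBalancer(status))
	// Output: LB created in subnet: <NODES-SUBNET-ID>
}
```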
Environment:
- Cluster API Provider OpenStack version (or `git rev-parse HEAD` if manually built): v0.10.4
- Cluster-API version: v1.7.4
- OpenStack version: 6.0.1
- Minikube/KIND version:
- Kubernetes version (use `kubectl version`): v1.27.3 for client and v1.29.5 for server
- OS (e.g. from `/etc/os-release`): redhat9
I think that would be a new feature to support that use case. Right now a cluster manages one network and one subnet for both the machines and the LB. I'll take a deeper look later this week and report back any findings.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I am currently experiencing the same error.
@EmilienM I don't think this would be a new feature; the docs say that the LB will be created in the LB network if it's given, and the code says that too. Only the implementation has some things that are not working right now. See the sketch below for the intended behaviour.
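To make the documented intent concrete, here is a minimal Go sketch of the lookup order the docs describe (a hypothetical helper, not the provider's actual implementation): prefer the subnet given under `apiServerLoadBalancer.subnets`, and fall back to the cluster (node) subnet only when none is specified.

```go
package main

import "fmt"

type SubnetParam struct{ ID string }

// resolveLBSubnet illustrates the documented intent: use the LB subnet from
// the spec when one is given, otherwise fall back to the cluster subnet.
func resolveLBSubnet(lbSubnets, clusterSubnets []SubnetParam) (string, error) {
	if len(lbSubnets) > 0 {
		return lbSubnets[0].ID, nil
	}
	if len(clusterSubnets) > 0 {
		return clusterSubnets[0].ID, nil
	}
	return "", fmt.Errorf("no subnet available for the API server load balancer")
}

func main() {
	lb := []SubnetParam{{ID: "<LB-SUBNET-ID>"}}
	nodes := []SubnetParam{{ID: "<NODES-SUBNET-ID>"}}

	subnet, err := resolveLBSubnet(lb, nodes)
	if err != nil {
		panic(err)
	}
	fmt.Println("LB should be created in subnet:", subnet)
	// Output: LB should be created in subnet: <LB-SUBNET-ID>
}
```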
/remove-lifecycle rotten