cluster-api-provider-openstack
How to set cluster name in Openstack Cloud Controller Manager?
I have been running into this issue: https://github.com/kubernetes/cloud-provider-openstack/issues/2241, and this comment on the issue mentions that the cluster name needs to be unique in order for LoadBalancer names to not collide.
I am currently installing the OCCM via ClusterResourceSet, like this:
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: openstack-ccm
  namespace: capi
spec:
  strategy: Reconcile
  clusterSelector:
    matchLabels:
      ccm: openstack
  resources:
    - name: openstack-ccm-manifests
      kind: ConfigMap
where the configmap contains all of the manifests here.
The default manifests have the "kubernetes" cluster name:
- name: CLUSTER_NAME
  value: kubernetes
How do I pass the actual cluster name to these manifests? There does not appear to be a mechanism like variable substitution or templating available in this context. Or, is there a better way to install the OCCM?
I think perhaps this issue belongs to cloud-provider-openstack rather than cluster-api-provider-openstack. Anyway, if you want to set a custom cluster name through CAPI, that can be done via the ClusterConfiguration of the KubeadmControlPlane: https://pkg.go.dev/sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1#ClusterConfiguration. For the OCCM I think you need to patch those manifests. I would do it by creating a kustomization that gathers the manifests you linked to and then provides a patch that changes the cluster name.
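To make that concrete, here is a minimal kustomization sketch. It assumes the upstream controller-manager manifests have been downloaded next to the kustomization.yaml and that the OCCM runs as the DaemonSet openstack-cloud-controller-manager in kube-system, as in the default manifests; the file, kind and container names are assumptions to check against the version you actually use.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # local copies of the upstream cloud-provider-openstack manifests;
  # adjust the file names to whatever you downloaded
  - cloud-controller-manager-roles.yaml
  - cloud-controller-manager-role-bindings.yaml
  - openstack-cloud-controller-manager-ds.yaml
patches:
  # strategic-merge patch overriding only the CLUSTER_NAME env var;
  # DaemonSet and container names assume the default upstream manifests
  - patch: |-
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: openstack-cloud-controller-manager
        namespace: kube-system
      spec:
        template:
          spec:
            containers:
              - name: openstack-cloud-controller-manager
                env:
                  - name: CLUSTER_NAME
                    value: my-workload-cluster  # placeholder: the actual CAPI cluster name
The output of kustomize build could then go into the ConfigMap that the ClusterResourceSet references.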
I agree, this is not a CAPO issue. We use the Helm charts that we template out and you can set the cluster name there too.
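For reference, the cluster name is exposed as a plain chart value in the openstack-cloud-controller-manager chart, so a minimal Helm values override would look roughly like this (the cluster.name key is the chart value; the name itself is a placeholder):
# values.yaml -- sketch of a values override for the openstack-cloud-controller-manager chart
cluster:
  name: my-workload-cluster  # placeholder: the actual CAPI cluster name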
Is it not possible then to do this with a single ClusterResourceSet that all clusters can use? I'd prefer not to have to duplicate it for every cluster I want to create, since I wouldn't be able to create and delete clusters just by creating or deleting a single Cluster resource.
I don't think it is possible, no. Perhaps it is possible to create a ClusterClass or similar template that automatically creates a ClusterResourceSet per Cluster with the correct name? That could be something to ask the CAPI community about. If it is not possible, they may be open to a feature request.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Better to use CAAPH (the Cluster API Addon Provider for Helm):
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: openstack-ccm
  namespace: <namespace>
spec:
  clusterSelector:
    matchLabels:
      clusterName: <cluster_name_in_capi>
  repoURL: https://kubernetes.github.io/cloud-provider-openstack
  chartName: openstack-cloud-controller-manager
  version: 2.30.0
  options:
    waitForJobs: true
    wait: true
    timeout: 5m
    install:
      createNamespace: false
  valuesTemplate: |
    cluster:
      name: <cluster_name>
    cloudConfig:
      global:
        auth-url: <redacted>
        username: <redacted>
        password: <redacted>
        tenant-name: <redacted>
        domain-name: <redacted>
        region: <redacted>
      networking:
      loadBalancer:
        floating-network-id: <someid>
    secret:
      enabled: true
      create: true
      name: cloud-config
    nodeSelector:
      node-role.kubernetes.io/control-plane: "true"
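One nice consequence, if I read CAAPH correctly, is that valuesTemplate is rendered as a Go template per matching Cluster, so a single HelmChartProxy selected by a generic label can pick up each cluster's name instead of hard-coding it per cluster. A sketch, where the ccm: openstack label and the template variable are assumptions to verify against the CAAPH documentation:
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: openstack-ccm
spec:
  clusterSelector:
    matchLabels:
      ccm: openstack                 # generic label on every Cluster that should get the OCCM
  repoURL: https://kubernetes.github.io/cloud-provider-openstack
  chartName: openstack-cloud-controller-manager
  valuesTemplate: |
    cluster:
      name: {{ .Cluster.metadata.name }}   # rendered per matching Cluster, keeping LoadBalancer names unique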
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@kralicky Based on the latest comments, would you need any further clarification on this? If not, please close the issue.
The helm chart approach seems like a good way to handle this. 👍