cluster-api-provider-openstack
Name prefix for OpenStackMachine related resources
/kind feature
Describe the solution you'd like
- As a security/network operator, I want CAPO resources to be easily identifiable by their name and hostname, so that I can keep track of and verify security policies between environments.
- As a cluster admin, I want control over how CAPO resources are named, so that they can fit in with external resources living in the same OpenStack project.
To address these user stories, I propose a new field for the OpenStackMachine (and consequently for the OpenStackMachineTemplate): `.spec.namePrefix`. It would be used as a prefix for all OpenStack resources created by CAPO for this OpenStackMachine.
The field would be optional. If not specified, the current behavior would remain, where resources get their names from the OpenStackMachine name. A sketch of what this could look like follows.
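For illustration, here is a minimal OpenStackMachineTemplate carrying the proposed field (the field name, its placement, and the exact prefix semantics are assumptions based on this proposal, not a merged API):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
metadata:
  name: example-md-template
spec:
  template:
    spec:
      # Proposed (hypothetical) field: if set, CAPO would prefix the names
      # of the OpenStack resources (server, ports, ...) it creates for each
      # machine, e.g. "prod-payments-<machine-name>" instead of
      # "<machine-name>". If omitted, the current naming based on the
      # OpenStackMachine name would remain.
      namePrefix: prod-payments-
      flavor: m1.medium
      image:
        filter:
          name: ubuntu-22.04
```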
Why not just name the OpenStackMachine differently?
- It is created and named by CAPI, so we do not have direct control over it.
- The convention of naming the resources after it is only implicit, so it cannot really be relied upon (unless we make it explicit).
- The OpenStackMachine gets its name from the MachineDeployment or (Kubeadm)ControlPlane. These must be replaced in order to change the naming scheme (and the naming is in any case not part of the contract). For the KCP, that means recreating the cluster.
Why not address this at the CAPI level?
- There is no contract that binds CAPO to name the resources in a certain way, so gaining control over the naming of the OpenStackMachine from CAPI would not help.
- There is an issue (https://github.com/kubernetes-sigs/cluster-api/issues/7030) that may address this. However, making naming part of the contract would affect all implementers and would probably be very hard to reach consensus on. Not to mention that each provider has its own limitations and rules, imposed by its infrastructure, that may affect what is possible.
Anything else you would like to add:
This is a feature we didn't notice we needed until the breaking change in CAPI v1.7.0. We had gotten used to the old (and admittedly confusing) behavior where infra machines were named after the infra templates. CAPO (and probably other providers as well) named the resources after the infra machine, so it was very easy to change the names by switching infra templates. Now we are stuck with fixed names based on the KCP and MDs, which cannot be changed. For new clusters, we can of course set the proper names where it matters, but that doesn't address the core issue here: we need control over the naming of the resources. With this feature request, I'm trying to make that control explicit.
> `.spec.namePrefix`. It would be used as a prefix for all OpenStack resources created by CAPO for this OpenStackMachine.
I know we sometimes use tags and sometimes use descriptions; I'm not sure which is better, but I think it's worth a discussion on how to distinguish resources belonging to different clusters.
Yes, tags can help to identify resources, but they do not help with compliance requirements for hostnames and resource names.
For the record, I'm actively advocating for an alternative solution here: https://github.com/kubernetes-sigs/cluster-api/issues/10463
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.