cluster-api-provider-openstack
Add OpenStack compute host labels to k8s nodes
/kind feature
Describe the solution you'd like
When creating k8s worker nodes, we would like to see a node label indicating (or representing) the underlying physical compute host, so that apps can be deployed with topology spread constraints and avoid being scheduled onto the same physical node.
In a topology where multiple k8s worker instances are hosted on the same physical compute node, this is an important requirement for some applications.
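For illustration, something like the following is what we would want to be able to write (a rough sketch; example.com/openstack-host-id is a made-up label key standing in for whatever label would carry the hypervisor identity):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      # Made-up label key; the only requirement is a unique value per physical compute host.
      topologyKey: example.com/openstack-host-id
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app
  containers:
    - name: my-app
      image: registry.example.com/my-app:latest
```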
Anything else you would like to add:
Wouldn't this work for your case? => https://cluster-api.sigs.k8s.io/developer/architecture/controllers/metadata-propagation.html#machine
Thank you. Possibly, although I am missing how the label gets onto the machine. I am aware that we are running quite an old version so maybe this has already been solved, but when I check the available labels on the machine, I only see:
```yaml
labels:
  cluster.x-k8s.io/cluster-name: clusterABC123
  machine-template-hash: "916525549"
  custom.label1/cluster: clusterABC123
  custom.label2/component: worker
  custom.label3/node-group: md-0
```
The OSM (OpenStackMachine) has the same labels as above. Do more recent versions of CAPO add labels for the instance's compute host?
The above is a CAPI feature introduced in CAPI 1.4.0 (IIRC). Basically it allows you to propagate labels/annotations. The linked page explains how it works and which labels/annotations are propagated from which resources to which.
For example, here we are adding labels to .spec.template.metadata.labels, which get propagated to the node.
In our case we use that to set the role of the nodes.
Code:
```yaml
spec:
  ...
  template:
    metadata:
      labels:
        node-role.kubernetes.io/worker: worker
```
and then the nodes in the cluster:
```
❯ kubectl get node
NAME                                 STATUS   ROLES           AGE   VERSION
capi-dev-control-plane-5f8hr-64qf7   Ready    control-plane   8d    v1.26.4
capi-dev-control-plane-5f8hr-p25bj   Ready    control-plane   8d    v1.26.4
capi-dev-control-plane-5f8hr-xsnmb   Ready    control-plane   8d    v1.26.4
capi-dev-md-0-infra-nnh88-wtcw2      Ready    worker          8d    v1.26.4
capi-dev-md-0-infra-nnh88-wz48g      Ready    worker          8d    v1.26.4
capi-dev-md-0-infra-nnh88-z29qt      Ready    worker          8d    v1.26.4
```
These labels/annotations are updated in-place, and you don't need to roll out new machines.
I think we are already doing this with some of the labels we apply in the CAPI Cluster object. It's good to know that we can also apply labels in a similar way to the MachineTemplate, but I was wondering whether this metadata could be gleaned by the capo-controller-manager at instance deploy time (as it creates the OSM) and added to the Machine labels. From what you are describing, today we would need an additional function to glean the instance metadata we want and add it to the Machine objects, and then CAPI would sync it to the workload cluster nodes?
This is a valid use case from my perspective; however, at the tenant level the OpenStack API only gives you the hostId property, in a form like d934b1bca83fc5ec7d4d6e7a525dbf75c43dfffcad22a5ee5163bb8c. Would a label with that work for you?
From my understanding of how topology spread constraints work in k8s, we just need a unique representation of the host so that workloads can be scheduled on different hosts. So yes, I think it would, but I'm going to have another read 👍
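For example, if the node ended up with something like this (a sketch; the label key and the region/zone values are made up, only the hostId value is taken from the comment above), that key could be used directly as the topologyKey:
```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-0  # example node name
  labels:
    # Well-known topology labels (example values).
    topology.kubernetes.io/region: region-1
    topology.kubernetes.io/zone: nova
    # Made-up label key; value taken from the Nova hostId mentioned above.
    example.com/openstack-host-id: d934b1bca83fc5ec7d4d6e7a525dbf75c43dfffcad22a5ee5163bb8c
```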
I wonder if this would fit better in cloud-provider-openstack, which IIRC is already adding the other topology labels?
I opened https://github.com/kubernetes/cloud-provider/issues/67 to discuss adding this capability to the cloud provider.
We could also do this in CAPO, but we'd have to add a custom label to the machine after it has been created.
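A rough sketch of what that could look like on the Machine (the label key here is made up; IIRC, for CAPI to sync a Machine label to the Node it has to be in one of the domains listed in the metadata-propagation docs, e.g. node.cluster.x-k8s.io/*, so the real key would need to follow those rules):
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: clusterABC123-md-0-xxxxx  # placeholder name
  labels:
    cluster.x-k8s.io/cluster-name: clusterABC123
    # Made-up key in a Machine-to-Node synced domain; value would come from the Nova hostId.
    node.cluster.x-k8s.io/openstack-host-id: d934b1bca83fc5ec7d4d6e7a525dbf75c43dfffcad22a5ee5163bb8c
```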
> I wonder if this would fit better in cloud-provider-openstack, which IIRC is already adding the other topology labels?
CPO CSI has this, but OCCM (the OpenStack cloud controller manager) doesn't.
> I opened https://github.com/kubernetes/cloud-provider/issues/67 to discuss adding this capability to the cloud provider.
Saw this issue, but you mentioned it was rejected before; do we have any link for it so we can check the history?
> I opened kubernetes/cloud-provider#67 to discuss adding this capability to the cloud provider.
> Saw this issue, but you mentioned it was rejected before; do we have any link for it so we can check the history?
I can't find where I read the discussion, but from memory what was rejected was defining well-known topology labels beyond 'region' and 'zone'. IIRC the concern was that there is such a wide variety of potential topologies it would quickly pollute the set of well-known labels.
I don't believe the concept of a hypervisor topology label was rejected, and certainly there were a lot of users asking for it. There was just no desire to define a well-known label for it, hence the label would be specific to CPO.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I opened kubernetes/cloud-provider#67 to discuss adding this capability to the cloud provider.
saw this issue, but you mentioned it's
rejectedbefore, do we have any link for it so we can check the history?I can't find where I read the discussion, but from memory what was rejected was defining well-known topology labels beyond 'region' and 'zone'. IIRC the concern was that there is such a wide variety of potential topologies it would quickly pollute the set of well-known labels.
I don't believe the concept of a hypervisor topology label was rejected, and certainly there were a lot of users asking for it. There was just no desire to define a well-known label for it, hence the label would be specific to CPO.
I think you meant the discussion in https://github.com/kubernetes/kubernetes/issues/75274
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can someone from here please check the related OCCM issue/PR https://github.com/kubernetes/cloud-provider-openstack/issues/2579?