cloud-provider
cloud-controller-manager should be able to ignore nodes
Continuing the discussion from https://github.com/kubernetes/kubernetes/pull/73171, the CCM should have a mechanism to "ignore" a node in a cluster, either because it doesn't belong to a cloud provider or is not a node in the traditional sense (e.g. virtual kubelet). See the PR for more discussion.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
@andrewsykim There are some scenarios where the CCM should ignore nodes, e.g. virtual-kubelet, edge nodes, or datacenter nodes in a hybrid cluster.
We should come up with a more general way to ignore those nodes.
The Alibaba cloud provider uses the service.beta.kubernetes.io/exclude-node node label to exclude nodes from the CCM.
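For illustration, a minimal sketch of such a presence-based check (the label key is the one mentioned above; the package and helper names are made up, this is not the actual Alibaba implementation):

```go
package ccmfilter

import v1 "k8s.io/api/core/v1"

// excludeNodeLabel is the label mentioned above; its exact semantics are provider-specific.
const excludeNodeLabel = "service.beta.kubernetes.io/exclude-node"

// excludedFromCCM reports whether the node has opted out of CCM management.
func excludedFromCCM(node *v1.Node) bool {
	_, ok := node.Labels[excludeNodeLabel]
	return ok
}
```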
Any thoughts?
@timoreimann I recall having a conversation about supporting multiple CCMs in a cluster, this is somewhat related. Are you interested in doing this work?
@andrewsykim yes, I'm very interested as it'd help us at DigitalOcean to ease testing. Though my intent is to go beyond just nodes and include load balancers as well. kubernetes/kubernetes#88820 is the ticket I filed for the wider purpose, and https://github.com/kubernetes/kubernetes/issues/88820#issuecomment-607474995 has the summary of our discussion in one of the SIG meetings.
Feel free to assign me to either / all tickets.
Hi everybody. I'm also looking for a way to ignore some nodes on AWS. May I ask, does anyone know of a solution for that?
AFAIK there's still none!
Bumping this up as it hasn't seen any love in a while. This is super useful to my company, as we would like to be able to operate hybrid clusters (openstack and bare-metal in our case) while still being able to use cloud-controller-manager.
I'd be happy to contribute to this effort, just don't know where to start. A KEP, perhaps?
It comes down to how to identify when a node is owned by which CCM. AWS has some notion that nodes should be prefixed either with ip- or i-, but that is a poor heuristic.
It may be that a KEP is needed to introduce a flag to kubelet that will add something to the created node object hinting at what CCM should own it, and that all CCMs then implement support for ignoring the hint if set to another value than itself.
Not too unrelated to this is the ability to run multiple AWS CCMs for having nodes in multiple regions or accounts.
That's a good point, it does mesh really nicely with allowing multiple CCMs (AWS or otherwise) to manage a single cluster.
I was more approaching the idea of having an annotation on a node that indicates which CCM it should belong to, but we'd need a reproducible(?) way to identify CCMs... could be done as a simple argument to the CCM, or...?
Sounds like something similar to LoadBalancerClass and IngressClass.
Yeah, feels very similar. I like that parallel a lot.
Hi all, any update on this issue?
How are people doing multi-cloud kubernetes clusters without this solved?
It comes down to how to identify when a node is owned by which CCM. AWS has some notion that nodes should be prefixed either with ip- or i-, but that is a poor heuristic.
Why attempt to do it based on node name? Instead do it based on label or annotation:
- Nominate a new label to be used on nodes, e.g. node.kubernetes.io/cloud-provider: aws
- This label should be added by users via extra arguments to kubelet
- A CCM should never initialize or delete a node with a label that doesn't match its own --cloud-provider argument (a rough sketch of this check follows below)
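Roughly, the ownership check could look like this (the label key, package, and helper names are hypothetical; this is a sketch of the proposal, not an implementation):

```go
package ccmfilter

import v1 "k8s.io/api/core/v1"

// cloudProviderLabel is a hypothetical label key for the proposal above.
const cloudProviderLabel = "node.kubernetes.io/cloud-provider"

// ownedByThisCCM reports whether a CCM started with the given --cloud-provider
// value should initialize or delete the node. Nodes without the label keep
// today's behaviour and are treated as owned.
func ownedByThisCCM(node *v1.Node, cloudProvider string) bool {
	class, ok := node.Labels[cloudProviderLabel]
	if !ok {
		return true
	}
	return class == cloudProvider
}
```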
This label should be added by users via extra arguments to kubelet
I don't think the underlying machine should be trusted to set this correctly for the same reason other k8s-namespaced labels are not allowed. It should be done by the provisioning/installer mechanism that handles things like the role labels.
A CCM should never initialize or delete a node with a label that doesn't match its own --cloud-provider argument
If one wanted multi-region AWS, one would need multiple AWS CCMs, so this doesn't quite work as-is. But a similar flag certainly could.
How I solved this issue:
- Use Talos as the Kubernetes solution
- Talos CCM only initializes the nodes and sets the ProviderID string.
- The native CCM (from the cloud provider) is launched only with --controllers=cloud-node-lifecycle
I did not try to use routing/load balancing through Kubernetes resources, and I think it would be very complicated.
Interesting idea @sergelogvinov; I'm already using Talos, so I'm trying to figure out how that would work.
The native CCM (from the cloud provider) is launched only with --controllers=cloud-node-lifecycle
Looking at the AWS v2 code, at least, InstanceExists returns false for any non-AWS nodes: https://github.com/kubernetes/cloud-provider-aws/blob/10ec1f461d50e7413fa8c97baefd8db24c1f9d8a/pkg/providers/v2/instances.go#L77
But in the cloud-node-lifecycle controller, doesn't it proceed to delete the node as soon as InstanceExists returns false?
- https://github.com/kubernetes/cloud-provider/blob/97fdc45fcc88e1391b130e712e3c0295bbf9b870/controllers/nodelifecycle/node_lifecycle_controller.go#L222
- called by https://github.com/kubernetes/cloud-provider/blob/97fdc45fcc88e1391b130e712e3c0295bbf9b870/controllers/nodelifecycle/node_lifecycle_controller.go#L155
- which would proceed to delete the node at https://github.com/kubernetes/cloud-provider/blob/97fdc45fcc88e1391b130e712e3c0295bbf9b870/controllers/nodelifecycle/node_lifecycle_controller.go#L176
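A heavily simplified sketch of that path, assuming the real cloudprovider.Instances interface but nothing else from the actual controller code:

```go
package lifecyclesketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

// reconcileNode mirrors the rough shape of the lifecycle loop: a node whose
// instance "does not exist" from the provider's point of view gets deleted,
// which in a hybrid cluster includes nodes the provider simply doesn't know.
func reconcileNode(ctx context.Context, instances cloudprovider.Instances, node *v1.Node, deleteNode func(*v1.Node) error) error {
	exists, err := instances.InstanceExistsByProviderID(ctx, node.Spec.ProviderID)
	if err != nil {
		return err
	}
	if !exists {
		// e.g. a bare-metal or OpenStack node seen by the AWS CCM ends up here.
		return deleteNode(node)
	}
	return nil
}
```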
Do you actually have this working?
I did not try AWS; it's on my near-term plan. Unfortunately, sometimes you need to add a few if/else lines to the native CCM. To save time, you can check out https://github.com/sergelogvinov/terraform-talos (this is my research).
Nevermind the v2 code in AWS CCM. That one is on ice and probably should be removed. But I doubt v1 is any better. But I am happy to support changes in this direction.
However, the more generic support (the mentioned flag and logic for whether the CCM interface is being interacted with) should be added to this repo. If we are lucky, it might be that all CCMs using this lib then don't need any changes.
Nevermind the v2 code in AWS CCM. That one is on ice and probably should be removed.
oh? I didn't realise it wasn't ready for use. Could you share some info on that? Are you speaking of v2 code in general? or AWS in particular?
But I doubt v1 is any better.
Indeed. If the instance is not found for the current cloud provider, then ensureNodeExistsByProviderID likewise returns false and the same node deletion should happen (per my understanding/reading).
https://github.com/kubernetes/cloud-provider/blob/97fdc45fcc88e1391b130e712e3c0295bbf9b870/controllers/nodelifecycle/node_lifecycle_controller.go#L235-L236
However, the more generic support (the mentioned flag and logic for whether the CCM interface is being interacted with) should be added to this repo. If we are lucky, it might be that all CCMs using this lib then don't need any changes.
The logic of ensureNodeExistsByProviderID returning false leading to node deletion seems to be an issue that must be fixed in the cloud node lifecycle controller.
Nevermind the v2 code in AWS CCM. That one is on ice and probably should be removed.
oh? I didn't realise it wasn't ready for use. Could you share some info on that? Are you speaking of v2 code in general? or AWS in particular?
v2 was an idea to make CCM more modern using CRDs for configuration and such. But as you can see from the git history, pretty much nothing has happened to it, while v1 is more actively maintained.
AWS CCM should absolutely be used in favour of the in-tree provider. kOps has been using it by default since 1.24.
But I doubt v1 is any better.
Indeed. If the instance is not found for the current cloud provider, then ensureNodeExistsByProviderID likewise returns false and the same node deletion should happen (per my understanding/reading). https://github.com/kubernetes/cloud-provider/blob/97fdc45fcc88e1391b130e712e3c0295bbf9b870/controllers/nodelifecycle/node_lifecycle_controller.go#L235-L236
However, the more generic support (the mentioned flag and logic for whether the CCM interface is being interacted with) should be added to this repo. If we are lucky, it might be that all CCMs using this lib then don't need any changes.
The logic of ensureNodeExistsByProviderID returning false leading to node deletion seems to be an issue that must be fixed in the cloud node lifecycle controller.
I am thinking this should not be called if the node has a different label/class than what's passed in the flag.
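Something like the following guard, layered onto the earlier lifecycle sketch, is what I have in mind (the label key and function names are hypothetical):

```go
package lifecyclesketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

// cloudProviderClassLabel is a hypothetical "class" label naming the owning CCM.
const cloudProviderClassLabel = "node.kubernetes.io/cloud-provider"

// reconcileNodeWithClass is the earlier sketch plus the proposed guard: when
// the node is labelled for a different provider, the existence check (and
// therefore the deletion) is never reached.
func reconcileNodeWithClass(ctx context.Context, instances cloudprovider.Instances, node *v1.Node, cloudProvider string, deleteNode func(*v1.Node) error) error {
	if class, ok := node.Labels[cloudProviderClassLabel]; ok && class != cloudProvider {
		// Owned by another CCM: leave the node alone entirely.
		return nil
	}
	exists, err := instances.InstanceExistsByProviderID(ctx, node.Spec.ProviderID)
	if err != nil {
		return err
	}
	if !exists {
		return deleteNode(node)
	}
	return nil
}
```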