cloud-provider-openstack
[occm] The nodeSelector key is not empty in the helm chart in release v1.29.0
**This is a BUG REPORT**:
/kind bug
What happened:
Since release v1.29.0, the nodeSelector key in the helm chart's values.yaml is no longer empty, so deploying the DaemonSet on worker nodes now requires adding the label node-role.kubernetes.io/control-plane: "" to the workers where it must run.
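For reference, the relevant default in the v1.29.0 chart looks roughly like this (excerpt only; surrounding keys omitted):

```yaml
# values.yaml default in the v1.29.0 chart (excerpt):
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
```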
What you expected to happen: The nodeSelector key in the helm chart's values.yaml should be empty:
```yaml
nodeSelector: {}
```
How to reproduce it: Install occm v1.29.0 with the helm chart.
Environment:
- openstack-cloud-controller-manager (or other related binary) version: v1.29.0
Let me know if you need a PR. Regards.
This got introduced in #2346. @wwentland, would you care to comment here?
As a workaround, would just setting `nodeSelector: {}` in your values.yaml help here? Why do you think that having no nodeSelector is preferable?
@dulek The workaround doesn't help if I need to add my own nodeSelector.
Because a default value already exists, helm will merge the two maps, so in this case I would also be forced to put the default label on the nodes besides my own.
For example, if I want to use my.corp.selector/front: "true", the nodeSelector generated by helm would be:
```yaml
<...>
spec:
  nodeSelector:
    my.corp.selector/front: "true"
    node-role.kubernetes.io/control-plane: ""
<...>
```
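For reference, the corresponding override in values.yaml would contain only the following (a minimal sketch; all other chart values unchanged):

```yaml
nodeSelector:
  my.corp.selector/front: "true"
```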
@babykart:
I was able to do what you need with this values.yaml:
```yaml
nodeSelector:
  node-role.kubernetes.io/control-plane: null
  my.corp.selector/front: "true"
```
This gets my `helm install --dry-run` to render this:
```yaml
spec:
  nodeSelector:
    my.corp.selector/front: "true"
```
Based on Helm Docs.
Can I close this issue now?
@dulek thx. I didn't know about this feature of helm. But should we then consider this the new behavior as of version 1.29?
I think so. I believe @wwentland's motivation for this change was to follow what the AWS provider does, and it makes sense to me: https://github.com/kubernetes/cloud-provider-aws/blob/master/charts/aws-cloud-controller-manager/values.yaml#L14-L16. Your use case is still valid, but since we've figured out how to override the default, I think we should keep the current 1.29 behavior.
I admit I don't fully understand what the specific AWS implementation has to do with it. I only deploy Kubernetes clusters in on-premise environments. If it comes to dealing with this specific implementation in the helm chart, wouldn't it make more sense to add a specific block of documentation in the README.md?
> I admit I don't fully understand what the specific AWS implementation has to do with it. I only deploy Kubernetes clusters in on-premise environments.
But in the end AWS and OpenStack K8s clusters shouldn't be too different. The idea is that cloud-provider-openstack, being part of the control plane, lands on the control plane nodes. This should basically be true for any platform and any cloud provider.
> If it comes to dealing with this specific implementation in the helm chart, wouldn't it make more sense to add a specific block of documentation in the README.md?
Sure thing, docs are always welcome! Can you prepare the PR? I'll be happy to review and approve it.
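A minimal sketch of what such a README section could document, reusing the example label from this thread (a draft, not final wording):

```yaml
# The chart selects control plane nodes by default. To schedule the
# DaemonSet elsewhere, unset the default key with null and add your own:
nodeSelector:
  node-role.kubernetes.io/control-plane: null
  my.corp.selector/front: "true"   # example label from this thread
```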
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.