
[occm] The nodeSelector key is not empty in the helm chart in release v1.29.0

babykart opened this issue 1 year ago · 8 comments

**This is a BUG REPORT**:

/kind bug

What happened: Since release v1.29.0, the nodeSelector key in the helm chart's values.yaml is no longer empty, which requires setting the label node-role.kubernetes.io/control-plane: "" on the worker nodes where the DaemonSet must be deployed.
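
For reference, the chart default introduced in v1.29.0 is roughly the following (a sketch inferred from the rendered output shown later in this thread; the exact values.yaml may differ slightly):

    # approximate openstack-cloud-controller-manager chart default since v1.29.0
    nodeSelector:
      node-role.kubernetes.io/control-plane: ""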

What you expected to happen: The nodeSelector key in the helm chart's values.yaml should be empty:

nodeSelector: {}

How to reproduce it: Install occm v1.29.0 with the helm chart.

Environment:

  • openstack-cloud-controller-manager (or other related binary) version: v1.29.0

Let me know if you need a PR. Regards.

babykart avatar Feb 13 '24 10:02 babykart

This was introduced in #2346. @wwentland, would you care to comment here?

As a workaround, would just setting nodeSelector: {} in your values.yaml help here? Why do you think that having no nodeSelector is preferable?

dulek avatar Feb 29 '24 12:02 dulek

@dulek The workaround doesn't help if I need to add my own nodeSelector. Because there is already a default value, helm will merge the two values, so in this case I would be forced to apply the default control-plane label to my nodes in addition to my own selector.

For example, if I want to use my.corp.selector/front: "true", the nodeSelector generated by helm would be:

<...>
    spec:
      nodeSelector:
        my.corp.selector/front: "true"
        node-role.kubernetes.io/control-plane: ""
<...>

babykart avatar Mar 01 '24 15:03 babykart

@babykart:

I was able to do what you need with this values.yaml:

nodeSelector:
  node-role.kubernetes.io/control-plane: null
  my.corp.selector/front: "true"

This gets my helm install --dry-run to render this:

    spec:
      nodeSelector:
        my.corp.selector/front: "true"

This works because setting a key to null in your values removes the chart's default for that key (see the Helm docs on deleting a default key).

Can I close this issue now?

dulek avatar Mar 18 '24 18:03 dulek

@dulek thanks. I didn't know about this helm feature. But should we therefore consider this the new behavior from version 1.29 onwards?

babykart avatar Mar 18 '24 19:03 babykart

I think so. I believe @wwentland's motivation for this change was to follow what the AWS provider does, and it makes sense to me: https://github.com/kubernetes/cloud-provider-aws/blob/master/charts/aws-cloud-controller-manager/values.yaml#L14-L16. Your use case is still valid, but since we've figured out how to override the default, I think we should keep the current 1.29 behavior.
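
For comparison, the linked AWS chart default is along these lines (quoted approximately, not verbatim; check the link above for the authoritative content):

    # approximate aws-cloud-controller-manager chart default at the linked lines
    nodeSelector:
      node-role.kubernetes.io/control-plane: ""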

dulek avatar Mar 19 '24 08:03 dulek

I admit I don't fully understand what the AWS-specific implementation has to do with it. I only deploy Kubernetes clusters in on-premises environments. If the helm chart is going to keep this specific default, wouldn't it make more sense to add a dedicated block of documentation to the README.md?

babykart avatar Mar 19 '24 19:03 babykart

I admit I don't fully understand what the AWS-specific implementation has to do with it. I only deploy Kubernetes clusters in on-premises environments.

But in the end, AWS and OpenStack K8s clusters shouldn't be too different. The idea is that cloud-provider-openstack, being part of the control plane, lands on the control plane nodes. This should basically be true for any platform and any cloud provider.

If the helm chart is going to keep this specific default, wouldn't it make more sense to add a dedicated block of documentation to the README.md?

Sure thing, docs are always welcome! Can you prepare the PR? I'll be happy to review and approve it.
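
For whoever picks up that docs PR, a minimal sketch of the values snippet such a README section could show (the wording and the my.corp.selector/front label are illustrative assumptions taken from this thread, not the actual docs):

    # README example (sketch): run the DaemonSet on non-control-plane nodes by
    # removing the chart's default nodeSelector entry and supplying your own
    nodeSelector:
      node-role.kubernetes.io/control-plane: null   # drops the chart default
      my.corp.selector/front: "true"                # replace with your own label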

dulek avatar Mar 26 '24 16:03 dulek

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 24 '24 16:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 24 '24 17:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Aug 23 '24 18:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 23 '24 18:08 k8s-ci-robot