ingress-nginx
Add nodeSelector for admission jobs to Helm chart flags
The Helm chart provides the flag controller.nodeSelector for the controller pod, which allows assigning it to nodes with a specific label.
It would be nice to have a nodeSelector flag for the admission jobs (create/patch) too, e.g. controller.admissionWebhooks.patch.nodeSelector.
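For illustration, a minimal sketch of how such a flag could be set in values.yaml, assuming it mirrors the shape of the existing controller.nodeSelector (the label key node-role.example.com/infra is a hypothetical example, not something defined by the chart):

```yaml
# Hypothetical values.yaml snippet; the requested flag does not exist yet.
controller:
  # Existing flag: pins the controller pod to labelled nodes.
  nodeSelector:
    kubernetes.io/os: linux
  admissionWebhooks:
    patch:
      # Requested flag: pin the admission create/patch jobs the same way.
      # The label key below is an invented example.
      nodeSelector:
        node-role.example.com/infra: "true"
```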
/good-first-issue
/help-wanted
/triage accepted
/priority backlog
@strongjz: This request has been marked as suitable for new contributors.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
In response to this:
/good-first-issue
/help-wanted
/triage accepted
/priority backlog
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @strongjz, I am interested in working on this; however, I would like some clarity on what I am supposed to do.
There is a controller.admissionWebhooks.patch.nodeSelector field in the values file; should a similar controller.admissionWebhooks.create.nodeSelector field be created?
https://github.com/kubernetes/ingress-nginx/blob/ad47d49216b9460c299b267a69b710659044b863/charts/ingress-nginx/values.yaml#L642
If yes, should they be used in the template files similarly to controller.nodeSelector?
https://github.com/kubernetes/ingress-nginx/blob/ad47d49216b9460c299b267a69b710659044b863/charts/ingress-nginx/templates/controller-daemonset.yaml#L193
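If they are, a minimal sketch of what the template change might look like, assuming it mirrors how controller.nodeSelector is rendered in the controller templates (the file path, value paths, and indentation are assumptions, not verified against the chart):

```yaml
# Hypothetical excerpt of an admission job template, e.g. under
# templates/admission-webhooks/; mirrors the controller.nodeSelector pattern.
spec:
  template:
    spec:
      {{- if .Values.controller.admissionWebhooks.create.nodeSelector }}
      nodeSelector: {{ toYaml .Values.controller.admissionWebhooks.create.nodeSelector | nindent 8 }}
      {{- end }}
```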
The documentation shows only the field controller.admissionWebhooks.patch.nodeSelector."kubernetes.io/os", which is set to linux by default. I'd like the field controller.admissionWebhooks.patch.nodeSelector to accept a custom node label. And yes, controller.admissionWebhooks.create.nodeSelector should also exist.
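As a concrete target, and assuming the documented default stays in place, the values could end up looking something like this (the custom label key is a made-up example):

```yaml
controller:
  admissionWebhooks:
    patch:
      nodeSelector:
        kubernetes.io/os: linux            # current documented default
        node.example.com/webhooks: "true"  # hypothetical custom label
    create:
      nodeSelector:
        kubernetes.io/os: linux
        node.example.com/webhooks: "true"
```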
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign
@bm-skutzke
- Why do you want to change the current value of the field controller.admissionWebhooks.patch.nodeSelector."kubernetes.io/os"?
- Why does the above-mentioned field's current value need to be changed from linux to something else?
- What is the problem created by the current value "linux"?
- Why do you need the spec fields you suggested in the admission-webhook create/patch jobs?
- What problem is created by those fields being missing in the create/patch jobs?
/remove-triage accepted
@bm-skutzke: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.