move roles/kubernetes/control-plane/vars/main.yaml to roles/kubernetes/control-plane/defaults/main/main.yml
What would you like to be added:
Remove this:
12:34 $ cat roles/kubernetes/control-plane/vars/main.yaml
---
# list of admission plugins that needs to be configured
kube_apiserver_admission_plugins_needs_configuration: [EventRateLimit, PodSecurity]
and add it as a variable in inventory/ops_k1/group_vars/k8s_cluster/k8s-cluster.yml
Why is this needed:
Those are the defaults, but if I want to add another plugin that needs configuration through inventory/ops_k1/group_vars/k8s_cluster/k8s-cluster.yml, it won't work, because the variables defined in vars/main.yaml take precedence over what is written in the inventory. So from now on the only way I have is to run Kubespray like this:
ansible-playbook -vv -b -i inventory/ops_k1/inventory.ini upgrade-cluster.yml --tags download,master -e "kube_apiserver_admission_plugins_needs_configuration=EventRateLimit,PodSecurity,PodNodeSelector"
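As a hedged aside (not from the original thread): -e key=value passes the value as a plain string, whereas ansible-playbook also accepts extra vars from a YAML/JSON file via -e @file, which keeps the variable a real list. A minimal sketch, with a hypothetical file name:

# extra-vars.yml (hypothetical name), used as: ansible-playbook ... -e @extra-vars.yml
kube_apiserver_admission_plugins_needs_configuration:
  - EventRateLimit
  - PodSecurity
  - PodNodeSelector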
Can we delete those values and add them to the defaults in roles/kubernetes/control-plane/defaults/main/main.yml?
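For illustration, a minimal sketch of what the requested change could look like, assuming the list simply moves unchanged into the role defaults (the extra PodNodeSelector entry in the inventory override is only an example):

# roles/kubernetes/control-plane/defaults/main/main.yml
# list of admission plugins that need to be configured
kube_apiserver_admission_plugins_needs_configuration: [EventRateLimit, PodSecurity]

# inventory/ops_k1/group_vars/k8s_cluster/k8s-cluster.yml
# this override now takes effect, because role defaults have the lowest precedence
kube_apiserver_admission_plugins_needs_configuration: [EventRateLimit, PodSecurity, PodNodeSelector]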
I don't immediately see anything wrong with that, not at first glance anyway. PR welcome :+1:
The problem I see is that, due to the order in which the variables are evaluated, it is not enough to set this variable in inventory/ops_k1/group_vars/k8s_cluster/k8s-cluster.yml, because the value from the role takes precedence. So the only way of modifying it is to pass -e "kube_apiserver_admission_plugins_needs_configuration=EventRateLimit,PodSecurity,PodNodeSelector" when running Ansible.
I mean that I don't see anything wrong with moving these to defaults instead of vars (with a cursory look), and that you're welcome to send a PR doing so.
done
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.