Use environment variables in the Cluster Autoscaler
Which component are you using?: cluster-autoscaler
What version of the component are you using?: Cluster Autoscaler: 1.21.1, Helm Chart Version: 9.13.1
Component version: v1.21.14-eks-6d3986b
kubectl version
Output
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.12", GitCommit:"696a9fdd2a58340e61e0d815c5769d266fca0802", GitTreeState:"clean", BuildDate:"2022-04-13T19:07:00Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.14-eks-6d3986b", GitCommit:"8877a3e28d597e1184c15e4b5d543d5dc36b083b", GitTreeState:"clean", BuildDate:"2022-07-20T22:05:32Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
What environment is this in?: EKS - AWS
What did you expect to happen?: Environment variables to be usable in the flag values.
What happened instead?: Using environment variables doesn't work; the variable is not expanded.
How to reproduce it (as minimally and precisely as possible): Try to use an environment variable in a flag: --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$EKS_CLUSTER_NAME
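A likely explanation, offered here as a note rather than a confirmed diagnosis of this manifest: Kubernetes does not run container commands through a shell, so a plain $EKS_CLUSTER_NAME in a flag is passed to the binary literally. Kubernetes only substitutes references written as $(VAR), and only for variables declared under env or envFrom on the same container. A minimal sketch of the difference (the env value is a placeholder, not the reporter's actual cluster name):

containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.1
  env:
  - name: EKS_CLUSTER_NAME
    value: my-eks-cluster          # placeholder; substitute the real cluster name
  command:
  - ./cluster-autoscaler
  # Not expanded: there is no shell, so the literal string "$EKS_CLUSTER_NAME" reaches the flag.
  # - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$EKS_CLUSTER_NAME
  # Expanded by Kubernetes when the container starts, because EKS_CLUSTER_NAME is declared in env above:
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$(EKS_CLUSTER_NAME)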
From my perspective this can't be a bug in the autoscaler; for me it works! We inject the ConfigMap into the environment like this:
containers:
- image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.23.1
  name: cluster-autoscaler
  command:
  - ./cluster-autoscaler
  - --v=4
  - --stderrthreshold=$(IAC_CAS_LOG_LEVEL)
  - --cloud-provider=$(IAC_CAS_CLOUD_PROVIDER)
  - --skip-nodes-with-local-storage=$(IAC_CAS_SKIP_NODES_WITH_LOCAL_STORAGE)
  - --expander=$(IAC_CAS_EXPANDER)
  - --balance-similar-node-groups=$(IAC_CAS_BALANCE_SIMILAR_NODE_GROUPS)
  - --scale-down-enabled=$(IAC_CAS_SCALE_DOWN_ENABLED)
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/$(IAC_CLUSTER_NAME)
  - --scan-interval=$(IAC_CAS_SCAN_INTERVAL)
  - --skip-nodes-with-system-pods=$(IAC_CAS_SKIP_NODES_WITH_SYSTEM_PODS)
  envFrom:
  - configMapRef:
      name: iac-cluster-autoscaler
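For completeness, a sketch of what the referenced iac-cluster-autoscaler ConfigMap might look like. The key names are taken from the $(IAC_*) references in the command above; every value below is an illustrative placeholder, not the commenter's actual configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: iac-cluster-autoscaler
  namespace: kube-system                           # assumed namespace
data:
  IAC_CAS_LOG_LEVEL: "info"                        # placeholder values throughout
  IAC_CAS_CLOUD_PROVIDER: "aws"
  IAC_CAS_SKIP_NODES_WITH_LOCAL_STORAGE: "false"
  IAC_CAS_EXPANDER: "least-waste"
  IAC_CAS_BALANCE_SIMILAR_NODE_GROUPS: "true"
  IAC_CAS_SCALE_DOWN_ENABLED: "true"
  IAC_CLUSTER_NAME: "my-eks-cluster"               # placeholder cluster name
  IAC_CAS_SCAN_INTERVAL: "10s"
  IAC_CAS_SKIP_NODES_WITH_SYSTEM_PODS: "true"

With envFrom, every key in the ConfigMap becomes an environment variable of the container, which is what makes the $(VAR) references in the command resolvable.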
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.