kube-aws-autoscaler
autoscaler always tries to manage the Master ASG
I deployed the autoscaler without `--include-master-nodes`, but it still tries to manage the master ASG:
2017-06-05 03:24:17,137 WARNING: Desired capacity for ASG ASGMaster is 2, but exceeds max 1
Deployment used:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    application: kube-aws-autoscaler
    version: v0.9
  name: kube-aws-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      application: kube-aws-autoscaler
  template:
    metadata:
      labels:
        application: kube-aws-autoscaler
        version: v0.9
      annotations:
        # FIXME: using hardcoded IAM role
        iam.amazonaws.com/role: auto-scaler-role
    spec:
      containers:
      - name: autoscaler
        image: hjacobs/kube-aws-autoscaler:0.9
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
Hi @magnusboman, the autoscaler currently relies on the `master` node label having the value `true`. See https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/userdata-master.yaml#L97 for how we configure this for Zalando (I think kube-aws does the same, IIRC).
We should document this :smirk:
Anyway, even without the `master` node label, the autoscaler won't do anything for your master ASG, as it respects the ASG's min/max settings.
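To illustrate why the warning above is harmless, here is a minimal Python sketch of the min/max capping behaviour described in this comment. The function and names are illustrative, not the autoscaler's actual code:

```python
def clamp_desired_capacity(asg_name, desired, asg_min, asg_max):
    """Cap a computed desired capacity to the ASG's [min, max] range.

    If the calculation asks for more nodes than the ASG allows, a warning
    is logged and the value is capped at max. A master ASG with
    min=max=1 therefore never gets resized, regardless of the computed
    desired capacity.
    """
    if desired > asg_max:
        print("WARNING: Desired capacity for ASG {} is {}, but exceeds max {}"
              .format(asg_name, desired, asg_max))
        desired = asg_max
    return max(asg_min, desired)
```

With `min=max=1` on the master ASG, a computed desired capacity of 2 is capped back to 1, which is exactly the log line shown above.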
You could additionally check the EC2 tag `k8s.io/role/master` (set to `1` for masters) for clusters created by kops.
@kaazoo yes, I also realized that the way of marking/labeling nodes is not consistent right now across kube-aws, kops and kubeadm. Supporting the "standard" role label makes sense.
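For illustration, supporting several of the marking conventions mentioned in this thread could look like the following Python sketch. The function and constant names are hypothetical, and the set of markers is only what was discussed here, not an exhaustive list:

```python
# Master-node markers used by different provisioning tools (as discussed
# above); none of these names are guaranteed by the autoscaler itself.
MASTER_NODE_LABELS = {
    "master": "true",                # kube-aws / Zalando userdata
    "kubernetes.io/role": "master",  # kops-style role label
}
MASTER_EC2_TAGS = {
    "k8s.io/role/master": "1",       # EC2 tag set by kops
}


def is_master_node(node_labels, ec2_tags):
    """Return True if any known master marker is present on the node."""
    if any(node_labels.get(k) == v for k, v in MASTER_NODE_LABELS.items()):
        return True
    return any(ec2_tags.get(k) == v for k, v in MASTER_EC2_TAGS.items())
```

A check like this would let the autoscaler skip master nodes consistently across kube-aws, kops and kubeadm clusters instead of relying on a single label convention.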