
EKS 1.16 / v1.6.x: "couldn't get current server API group list; will keep using cached value"

Open max-rocket-internet opened this issue 4 years ago • 66 comments

We see the aws-node pods crash on startup sometimes with this logged:

Starting IPAM daemon in the background ... ok.
ERROR: logging before flag.Parse: E0708 16:29:03.884330       6 memcache.go:138] couldn't get current server API group list; will keep using cached value. (Get https://172.20.0.1:443/api?timeout=32s: dial tcp 172.20.0.1:443: i/o timeout)
Checking for IPAM connectivity ...  failed.
Timed out waiting for IPAM daemon to start:

After starting and crashing the pod is then restarted and runs fine. About half of the aws-node pods do this.

max-rocket-internet avatar Jul 09 '20 10:07 max-rocket-internet

There are quite a few similar issues, but they aren't exactly the same error. Not sure if this is a duplicate issue or not...

https://github.com/aws/amazon-vpc-cni-k8s/issues/1049#issuecomment-649983328 (logs don't match)
https://github.com/aws/amazon-vpc-cni-k8s/issues/1055 (perpetual problem)
https://github.com/aws/amazon-vpc-cni-k8s/issues/1054 (different Reason for last state of pod)

max-rocket-internet avatar Jul 10 '20 13:07 max-rocket-internet

Same for v1.6.3 😐

max-rocket-internet avatar Jul 15 '20 14:07 max-rocket-internet

@max-rocket-internet Thanks for testing with v1.6.3 as well. I suspect the issue here is that kube-proxy has not yet set up the iptables rules to resolve 172.20.0.1, and the old client code we still use (see #522 for details) doesn't handle this well.

I guess a work-around to avoid the restarts would be to retry a few times before returning an error, instead of letting kubelet do the retries. That would at least hide these errors from the logs.
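Conceptually, the entrypoint could do something like this before giving up (just a sketch; the real entrypoint script differs, and curl being available in the image is an assumption):

# Sketch only: wait for the API server cluster IP to become reachable before
# failing hard; KUBERNETES_SERVICE_HOST/PORT are the standard in-pod env vars.
for i in $(seq 1 10); do
  if curl -sk --max-time 5 "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/healthz" >/dev/null; then
    echo "API server reachable, continuing startup"
    break
  fi
  echo "API server not reachable yet (attempt $i), retrying ..."
  sleep 3
done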

mogren avatar Jul 15 '20 18:07 mogren

Makes sense but why wasn't this a problem with v1.5.x releases? Or is it related to moving to EKS/AMI 1.16?

max-rocket-internet avatar Jul 17 '20 09:07 max-rocket-internet

We had the same issue after upgrading from EKS 1.15 to 1.16. We were just bumping the image version inside the DaemonSet to 1.6.x. What solved our issue was applying the full YAML provided by the AWS docs: https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.6/config/v1.6/aws-k8s-cni.yaml
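i.e. something along the lines of:

kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.6/config/v1.6/aws-k8s-cni.yaml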

It made changes to both the DaemonSet and the ClusterRole.

Good luck!

hristov-hs avatar Jul 24 '20 09:07 hristov-hs

But there are no changes to that file between the v1.6.2 and v1.6.3 releases? https://github.com/aws/amazon-vpc-cni-k8s/commits/master/config/v1.6/aws-k8s-cni.yaml

max-rocket-internet avatar Jul 29 '20 15:07 max-rocket-internet

Any update @mogren?

max-rocket-internet avatar Aug 04 '20 09:08 max-rocket-internet

@max-rocket-internet Hey, sorry for the lack of updates on this. Been out for a bit without much network access, so haven't been able to track this one down. I agree that there is no config change between v1.6.2 and v1.6.3, but since v1.5.x, we have updated the readiness and liveness probe configs.

Between Kubernetes 1.15 and 1.16 kube-proxy has changed, so that could be related. We have not been able to reproduce this yet when doing master upgrades.

mogren avatar Aug 04 '20 21:08 mogren

We had the same problem when updating the master and node groups from 1.15 to 1.16. We had to pin the version of kube-proxy (kube-proxy:v1.16.13 -> kube-proxy:v1.16.12) and recreate the nodes.

schmitz-chris avatar Aug 07 '20 13:08 schmitz-chris

^^

kubectl set image daemonset.apps/kube-proxy \
    -n kube-system \
    kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.16.12
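To confirm the downgrade rolled out everywhere, something like:

kubectl rollout status daemonset/kube-proxy -n kube-system
kubectl get daemonset kube-proxy -n kube-system \
    -o jsonpath='{.spec.template.spec.containers[0].image}'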

marcelbirkner avatar Aug 07 '20 13:08 marcelbirkner

Just an FYI, we were encountering this issue on k8s 1.17, kube-proxy 1.16.13, and AWS CNI 1.6.3 and 1.7.1. It turns out the issue was a bad PSP for kube-proxy that had readOnlyRootFilesystem: true. The kube-proxy logs will show that it's unable to configure some iptables rules due to the read-only root filesystem; however, this doesn't seem to crash kube-proxy. Setting readOnlyRootFilesystem: false fixes things.
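To check whether this applies to your cluster, you can look up which PSP kube-proxy was admitted under and whether it forces a read-only root filesystem (the PSP name below is just an example):

kubectl -n kube-system get pod -l k8s-app=kube-proxy \
    -o jsonpath='{.items[0].metadata.annotations.kubernetes\.io/psp}'
kubectl get psp eks.privileged -o jsonpath='{.spec.readOnlyRootFilesystem}'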

Niksko avatar Sep 01 '20 06:09 Niksko

So two suggestions now:

  1. kube-proxy should have readOnlyRootFilesystem: false
  2. Downgrade kube-proxy to kube-proxy:v1.16.12

Any confirmation from AWS about the issue and resolution?

max-rocket-internet avatar Sep 02 '20 10:09 max-rocket-internet

@max-rocket-internet @Niksko Could you please provide the kube-proxy configs that you have where this issue shows up? Is this on EKS clusters? What Kubernetes version? Do you have some custom PSP for these clusters? Are the worker nodes using a custom AMI or have some script locking iptables on startup?

Is there something you know of that changed in 1.16.13 that makes kube-proxy take slightly longer to start, triggering this issue?

mogren avatar Sep 02 '20 18:09 mogren

@mogren this is on EKS, version 1.17. We discovered this as part of adding custom PSPs to all components. No scripts locking iptables on startup, using the standard EKS AMIs.

The behaviour we were seeing was that the aws-node pod never became ready, and was crash-looping. Apologies if that caused any confusion. I think it's not unreasonable to conclude that:

  • kube-proxy sets up iptables rules that are required by aws-node
  • setting the filesystem to readonly on kube-proxy causes these rules to never be set up, so aws-node crash loops
  • a race condition between kube-proxy and aws-node could cause aws-node to come up before the iptables rules have been configured, causing an initial crash before working as normal (when kube-proxy creates the rules).

Niksko avatar Sep 03 '20 00:09 Niksko

I am facing a similar issue: the aws-node pod restarts once on every node startup and works fine after that. Error:

kubectl logs aws-node-f8tw6   --previous -n kube-system

Copying portmap binary ... Starting IPAM daemon in the background ... ok.
ERROR: logging before flag.Parse: E0904 13:53:37.150548       8 memcache.go:138] couldn't get current server API group list; will keep using cached value. (Get https://10.100.0.1:443/api?timeout=32s: dial tcp 10.100.0.1:443: i/o timeout)
Checking for IPAM connectivity ...  failed.
Timed out waiting for IPAM daemon to start:

EKS version: 1.17, platform version: eks.2, kube-proxy: v1.17.9-eksbuild.1, aws-node: v1.6.3-eksbuild.1

I tried adding a sleep in aws-node to rule out that this is happening because kube-proxy is taking time to start, and verified that kube-proxy started before aws-node.
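For reference, this is roughly how the start order can be compared (the node name is a placeholder):

kubectl get pods -n kube-system \
    --field-selector spec.nodeName=<node-name> \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'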

tibin-mfl avatar Sep 04 '20 14:09 tibin-mfl

Hi @tibin-mfl

Can you please share the CNI logs (https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html#troubleshoot-cni) and also the kube-proxy pod logs? It will help us verify why kube-proxy has a delayed start.
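On the affected node, the collector script from that doc can be run directly, and the kube-proxy logs can be pulled with kubectl (the pod name is a placeholder; the script path is the one documented for these AMIs and may differ on newer ones):

# CNI log collector
sudo bash /opt/cni/bin/aws-cni-support.sh

# kube-proxy logs for the pod on the affected node
kubectl get pods -n kube-system -o wide | grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxxx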

Thanks.

jayanthvn avatar Sep 04 '20 18:09 jayanthvn

Hi, just wanted to chime in that we're seeing the same thing. Like others have mentioned, the pod seems to restart once when the node first starts up, and it's fine after that. We're not using any custom PSPs.

EKS version: 1.17, AMI version: v1.17.9-eks-4c6976, kube-proxy version: 1.17.7, CNI version: 1.6.3

I can see these errors in kube-proxy logs on one of the nodes where aws-node restarted:

udpIdleTimeout: 250ms: v1alpha1.KubeProxyConfiguration.Conntrack: v1alpha1.KubeProxyConntrackConfiguration.ReadObject: found unknown field: max, error found in #10 byte of ...|ck":{"max":0,"maxPer|..., bigger context ...|":"","configSyncPeriod":"15m0s","conntrack":{"max":0,"maxPerCore":32768,"min":131072,"tcpCloseWaitTi|...
I0905 23:12:39.826265       7 feature_gate.go:243] feature gates: &{map[]}
E0905 23:12:40.388938       7 node.go:124] Failed to retrieve node info: nodes "ip-10-0-212-179" not found
E0905 23:12:41.516857       7 node.go:124] Failed to retrieve node info: nodes "ip-10-0-212-179" not found
E0905 23:12:43.567271       7 node.go:124] Failed to retrieve node info: nodes "ip-10-0-212-179" not found
E0905 23:12:48.167166       7 node.go:124] Failed to retrieve node info: nodes "ip-10-0-212-179" not found
E0905 23:12:56.325941       7 node.go:124] Failed to retrieve node info: nodes "ip-10-0-212-179" not found
E0905 23:13:14.684106       7 node.go:124] Failed to retrieve node info: nodes "ip-10-0-212-179" not found
I0905 23:13:14.684134       7 server_others.go:140] can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag
I0905 23:13:14.684150       7 server_others.go:145] Using iptables Proxier.
W0905 23:13:14.684259       7 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0905 23:13:14.684410       7 server.go:571] Version: v1.17.7
I0905 23:13:14.684773       7 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0905 23:13:14.684803       7 conntrack.go:52] Setting nf_conntrack_max to 131072
I0905 23:13:14.684850       7 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0905 23:13:14.684894       7 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0905 23:13:14.685092       7 config.go:313] Starting service config controller
I0905 23:13:14.685101       7 shared_informer.go:197] Waiting for caches to sync for service config
I0905 23:13:14.685139       7 config.go:131] Starting endpoints config controller
I0905 23:13:14.685149       7 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0905 23:13:14.785879       7 shared_informer.go:204] Caches are synced for service config
I0905 23:13:14.785932       7 shared_informer.go:204] Caches are synced for endpoints config

And this is in the aws-node logs:

{"log":"Copying portmap binary ... Starting IPAM daemon in the background ... ok.\n","stream":"stdout","time":"2020-09-03T11:06:26.418457689Z"}
{"log":"Checking for IPAM connectivity ... ok.\n","stream":"stdout","time":"2020-09-03T11:06:46.458122639Z"}
{"log":"Copying additional CNI plugin binaries and config files ... ok.\n","stream":"stdout","time":"2020-09-03T11:06:46.474182395Z"}
{"log":"Foregrounding IPAM daemon ... \n","stream":"stdout","time":"2020-09-03T11:06:46.474202946Z"}
{"log":"ERROR: logging before flag.Parse: W0903 14:22:54.564615       9 reflector.go:341] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99: watch of *v1.Pod ended with: too old resource version: 109452564 (109453466)\n","stream":"stderr","time":"2020-09-03T14:22:54.564769814Z"}
{"log":"ERROR: logging before flag.Parse: W0903 18:30:26.713005       9 reflector.go:341] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99: watch of *v1.Pod ended with: too old resource version: 109555468 (109679596)\n","stream":"stderr","time":"2020-09-03T18:30:26.713161405Z"}
{"log":"ERROR: logging before flag.Parse: W0903 18:45:56.655601       9 reflector.go:341] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99: watch of *v1.Pod ended with: too old resource version: 109679596 (109687399)\n","stream":"stderr","time":"2020-09-03T18:45:56.655715674Z"}

It seems like this started happening for us as part of the 1.17 upgrade. We haven't restarted all our nodes since the upgrade, and I can see that on the nodes still running the old AMI (v1.16.12-eks-904af05) the aws-node pod didn't restart:

aws-node-26tfq                               1/1     Running   1          10h
aws-node-2pnwq                               1/1     Running   1          3h33m
aws-node-4f52v                               1/1     Running   1          4d22h
aws-node-5qsll                               1/1     Running   1          5d22h
aws-node-6z6wq                               1/1     Running   0          40d
aws-node-92hvs                               1/1     Running   0          40d
aws-node-c8srx                               1/1     Running   1          5d22h
aws-node-chkhb                               1/1     Running   1          5d4h
aws-node-djlkb                               1/1     Running   0          40d
aws-node-g7drp                               1/1     Running   1          5d5h
aws-node-g9rgn                               1/1     Running   0          40d
aws-node-gbdq5                               1/1     Running   1          2d22h
aws-node-gc5zl                               1/1     Running   1          2d22h
aws-node-hc48d                               1/1     Running   1          5d22h
aws-node-hx9bl                               1/1     Running   1          24d
aws-node-j9dcn                               1/1     Running   1          39d
aws-node-jj4qs                               1/1     Running   1          2d22h
aws-node-kwbjl                               1/1     Running   1          153m
aws-node-ljcv8                               1/1     Running   1          39d
aws-node-lv74f                               1/1     Running   1          12d
aws-node-q2w2w                               1/1     Running   1          2d22h
aws-node-s7qw4                               1/1     Running   1          2d22h
aws-node-tck8w                               1/1     Running   1          5d4h
aws-node-tjhtf                               1/1     Running   1          2d22h
aws-node-tzpb2                               1/1     Running   0          40d
aws-node-vm4nh                               1/1     Running   1          2d22h
aws-node-xnnj2                               1/1     Running   2          153m
aws-node-zchs9                               1/1     Running   1          2d22h

I'm happy to share the full logs if they're helpful, just give me an email address to send them!

mogopz avatar Sep 06 '20 09:09 mogopz

Hi @mogggggg

Thanks for letting us know. Please kindly share the full logs from the log collector script (https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html#troubleshoot-cni) and also the kube-proxy pod logs. You can email them to [email protected].

Thanks.

jayanthvn avatar Sep 06 '20 19:09 jayanthvn

Hi @mogggggg

Thanks for sharing the logs. Will review and get back on that asap.

jayanthvn avatar Sep 07 '20 05:09 jayanthvn

Could you please provide the kube-proxy configs that you have where this issue shows up?

It's default EKS

Is this on EKS clusters?

Yes

What Kubernetes version?

v1.17.9-eks-4c6976

Do you have some custom PSP for these clusters?

Nope

Are the worker nodes using a custom AMI or have some script locking iptables on startup?

The AMI is v20200723, and there are no custom scripts except adding some users in user-data.

max-rocket-internet avatar Sep 07 '20 08:09 max-rocket-internet

I am facing a similar issue: the aws-node pod restarts once on every node startup and works after that. EKS version: 1.17, platform version: eks.2, kube-proxy: v1.17.9-eksbuild.1, aws-node: v1.6.3-eksbuild.1

@jayanthvn Just wanted to know: does the aws-node pod depend on kube-proxy, or vice versa?

tibin-mfl avatar Sep 07 '20 08:09 tibin-mfl

@tibin-mfl Yes, the CNI pod (aws-node) needs kube-proxy to set up the cluster IPs before it can start up.
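If you want to verify that on a node, kube-proxy programs the cluster-IP rules into its KUBE-SERVICES chain in the nat table, so something like this shows whether they are in place yet (replace the IP with your cluster's kubernetes service IP):

kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'      # find the service IP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.100.0.1      # run on the worker node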

mogren avatar Sep 08 '20 00:09 mogren

Hi @mogggggg

Sorry for the delayed response. As you mentioned, it looks like kube-proxy is waiting to retrieve the node info, and during that time frame aws-node starts and is unable to communicate with the API server because iptables isn't updated yet, hence the restart. I will try to repro this and we will see how to mitigate the issue.

Thanks for your patience.

jayanthvn avatar Sep 10 '20 01:09 jayanthvn

Currently, the workaround is adding a busybox init container to wait for kube-proxy to start:

  initContainers:
  - name: init-kubernetes-api
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup kubernetes.default.svc.cluster.local ${KUBE_DNS_PORT_53_TCP_ADDR}; do echo waiting for kubernetes Service endpoint; sleep 2; done"]
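One way to apply that without editing the manifest by hand is a JSON patch on the aws-node DaemonSet (sketch only; it assumes aws-node currently has no initContainers and drops the ${KUBE_DNS_PORT_53_TCP_ADDR} argument):

kubectl -n kube-system patch daemonset aws-node --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/initContainers", "value": [
    {"name": "init-kubernetes-api",
     "image": "busybox:1.28",
     "command": ["sh", "-c", "until nslookup kubernetes.default.svc.cluster.local; do echo waiting for kubernetes Service endpoint; sleep 2; done"]}
  ]}
]'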

focaaby avatar Sep 21 '20 01:09 focaaby

I added the init container to my aws-node as a temporary fix; the same was suggested by AWS support as well. Now the problem is that sometimes aws-node takes more than 5 minutes to come up.

sum by(daemonset, namespace) (kube_daemonset_status_number_unavailable{job="kube-state-metrics",namespace=~"kube-system"}) >0
[Screenshot: 2020-09-25 at 7:33 AM]

tibin-mfl avatar Sep 25 '20 02:09 tibin-mfl

@tibin-mfl thanks for reporting this, that is definitely concerning. Do you have kube-proxy logs from any of these nodes? It would be very interesting to see why kube-proxy was taking that long to start up!

mogren avatar Sep 25 '20 05:09 mogren

Hi @jayanthvn, just wondering if you've had any luck narrowing down what could be causing this - I can gather more data on my end if that's helpful?

We use prometheus-operator which has a built-in alert that triggers when a pod restarts X number of times - unfortunately this issue causes the alert to trigger when we scale up a number of nodes.
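For context, the built-in alert is driven by the container restart counter from kube-state-metrics; the expression is roughly along these lines (a sketch, not the exact bundled rule):

rate(kube_pod_container_status_restarts_total{namespace="kube-system", pod=~"aws-node.*"}[15m]) > 0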

mogopz avatar Sep 28 '20 03:09 mogopz

@tibin-mfl thanks for reporting this, that is definitely concerning. Do you have kube-proxy logs from any of these nodes? It would be very interesting to see why kube-proxy was taking that long to start up!

@mogren I attached the kube-proxy error log:

{"log":"udpIdleTimeout: 250ms: v1alpha1.KubeProxyConfiguration.Conntrack: v1alpha1.KubeProxyConntrackConfiguration.ReadObject: found unknown field: max, error found in #10 byte of ...|ck\":{\"max\":0,\"maxPer|..., bigger context ...|\":\"\",\"configSyncPeriod\":\"15m0s\",\"conntrack\":{\"max\":0,\"maxPerCore\":32768,\"min\":131072,\"tcpCloseWaitTi|...\n","stream":"stderr","time":"2020-09-04T13:53:07.16323908Z"}
{"log":"I0904 13:53:07.163201       1 feature_gate.go:243] feature gates: \u0026{map[]}\n","stream":"stderr","time":"2020-09-04T13:53:07.164205369Z"}
{"log":"E0904 13:53:07.657821       1 node.go:124] Failed to retrieve node info: nodes \"ip-172-31-215-37.xx.com\" not found\n","stream":"stderr","time":"2020-09-04T13:53:07.657910856Z"}
{"log":"E0904 13:53:08.817715       1 node.go:124] Failed to retrieve node info: nodes \"ip-172-31-215-37.xx.com\" not found\n","stream":"stderr","time":"2020-09-04T13:53:08.817826081Z"}
{"log":"E0904 13:53:11.028140       1 node.go:124] Failed to retrieve node info: nodes \"ip-172-31-215-37.xx.com\" not found\n","stream":"stderr","time":"2020-09-04T13:53:11.028236757Z"}
{"log":"E0904 13:53:15.789086       1 node.go:124] Failed to retrieve node info: nodes \"ip-172-31-215-37.xx.com\" not found\n","stream":"stderr","time":"2020-09-04T13:53:15.789186713Z"}
{"log":"E0904 13:53:24.954484       1 node.go:124] Failed to retrieve node info: nodes \"ip-172-31-215-37.xx.com\" not found\n","stream":"stderr","time":"2020-09-04T13:53:24.954581764Z"}
{"log":"E0904 13:53:43.712021       1 node.go:124] Failed to retrieve node info: nodes \"ip-172-31-215-37.xx.com\" not found\n","stream":"stderr","time":"2020-09-04T13:53:43.712148589Z"}
{"log":"I0904 13:53:43.712044       1 server_others.go:140] can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag\n","stream":"stderr","time":"2020-09-04T13:53:43.712191645Z"}
{"log":"I0904 13:53:43.712063       1 server_others.go:145] Using iptables Proxier.\n","stream":"stderr","time":"2020-09-04T13:53:43.712196787Z"}
{"log":"W0904 13:53:43.712165       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic\n","stream":"stderr","time":"2020-09-04T13:53:43.712204824Z"}
{"log":"I0904 13:53:43.712828       1 server.go:571] Version: v1.17.9\n","stream":"stderr","time":"2020-09-04T13:53:43.712928066Z"}
{"log":"I0904 13:53:43.713270       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072\n","stream":"stderr","time":"2020-09-04T13:53:43.713356276Z"}
{"log":"I0904 13:53:43.713305       1 conntrack.go:52] Setting nf_conntrack_max to 131072\n","stream":"stderr","time":"2020-09-04T13:53:43.713403074Z"}
{"log":"I0904 13:53:43.713538       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\n","stream":"stderr","time":"2020-09-04T13:53:43.714641593Z"}
{"log":"I0904 13:53:43.713592       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\n","stream":"stderr","time":"2020-09-04T13:53:43.714670464Z"} a

The complete kube-proxy log is attached here: https://pastebin.pl/view/3d4cc276

tibin-mfl avatar Sep 28 '20 07:09 tibin-mfl

Hi @mogggggg

Sorry, I didn't get a chance to repro the issue at a higher node scale. As you mentioned, we also suspect kube-proxy is not able to get the node info when the node scale is high. I will try to repro it this week. Can you please let me know at what node scale you see the alert while scaling up, and how many nodes you scale at a time? Also, are these spot instances? Next time you repro, can you please increase the debug level of kube-proxy to --v=5?
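For reference, the verbosity can be raised on the kube-proxy DaemonSet itself, e.g. (edit the existing --v flag rather than adding a second one):

kubectl -n kube-system edit daemonset kube-proxy       # change the --v flag (e.g. --v=2) to --v=5
kubectl -n kube-system rollout status daemonset kube-proxy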

Thanks for your patience and help on this issue.

jayanthvn avatar Sep 28 '20 16:09 jayanthvn

Hi @jayanthvn, I don't think the number of nodes in the cluster plays a particular role - our smallest cluster is 10 nodes and the restarting of the aws-node pod happens just the same as it does on one of our larger clusters (which is 40 nodes). We have four clusters total, two of which are 100% spot instances and two that are 100% on-demand; this issue happens on all of them.

I've just emailed you a copy of the cni-troubleshoot script output with kube-proxy logging turned up - let me know if I can do anything else!

mogopz avatar Sep 29 '20 01:09 mogopz