amazon-vpc-cni-k8s
Non-fatal but persistent warning: "Failed to create pod sandbox ... failed to assign an IP address to container."
Every pod that we start up gets this Warning-level event exactly once:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "containerString": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: failed to assign an IP address to container
Here are all the events for that pod.
6m2s Normal Scheduled Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Successfully assigned content-api/engine-content-cars-etl-cron-short-28809865-2dhw7 to ip-172-21-171-235.ec2.internal
6m2s Normal SecurityGroupRequested Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Pod will get the following Security Groups [sg-040fe161363814f9b sg-04df5482042d9290e]
6m2s Warning FailedCreatePodSandBox Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8fd6567d166d83aa85a4ee6dc055876106e2abb8b4678180607ea5800dec5001": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: failed to assign an IP address to container
6m2s Normal SuccessfulCreate Job/engine-content-cars-etl-cron-short-28809865 Created pod: engine-content-cars-etl-cron-short-28809865-2dhw7
6m1s Normal ResourceAllocated Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Allocated [{"eniId":"eni-0f225e0d5531c7cce","ifAddress":"0e:f1:ce:61:9c:d7","privateIp":"172.21.189.168","ipv6Addr":"","vlanId":2,"subnetCidr":"172.21.160.0/19","subnetV6Cidr":""}] to the pod
6m1s Normal Pulled Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Container image "docker.io/istio/proxyv2:1.22.3" already present on machine
6m1s Normal Created Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Created container istio-validation
6m1s Normal Started Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Started container istio-validation
6m Normal Created Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Created container istio-proxy
6m Normal Started Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Started container istio-proxy
6m Normal Pulled Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Container image "docker.io/istio/proxyv2:1.22.3" already present on machine
5m58s Normal Pulled Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Container image "534287151633.dkr.ecr.us-east-1.amazonaws.com/engine-content-etl:0.2.4" already present on machine
5m58s Normal Created Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Created container engine-content-cars-etl-cron-short
5m58s Normal Started Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Started container engine-content-cars-etl-cron-short
5m41s Normal Killing Pod/engine-content-cars-etl-cron-short-28809865-2dhw7 Stopping container istio-proxy
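For context, this is roughly how I pull just these warnings across the whole cluster to confirm the one-event-per-pod pattern. It's a minimal sketch using the official kubernetes Python client and assumes a working kubeconfig pointed at the affected cluster:

from kubernetes import client, config

# Minimal sketch: list Warning events with the reason shown above,
# across all namespaces. Assumes the official `kubernetes` Python client
# and a kubeconfig for the affected cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

events = v1.list_event_for_all_namespaces(
    field_selector="reason=FailedCreatePodSandBox,type=Warning"
)
for e in events.items:
    print(e.last_timestamp, e.metadata.namespace, e.involved_object.name)
    print(f"  {e.message}")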
To date, this has seemed like just noise, but it happens whether the pod lands on a new host or an existing one.
Config params for aws-cni are:
{
  init = {
    env = {
      DISABLE_TCP_EARLY_DEMUX = "true"
    }
  }
  env = {
    # Reference docs https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
    ENABLE_PREFIX_DELEGATION          = "true"
    WARM_PREFIX_TARGET                = "1"
    ENABLE_POD_ENI                    = "true"
    POD_SECURITY_GROUP_ENFORCING_MODE = "standard"
    AWS_VPC_K8S_CNI_EXTERNALSNAT      = "true"
  }
}
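To rule out drift between what we pass in and what is actually running, I check the live aws-node DaemonSet with something like the sketch below (kubernetes Python client again; aws-node in kube-system is where the VPC CNI normally runs on EKS, and DISABLE_TCP_EARLY_DEMUX is expected on the init container):

from kubernetes import client, config

# Sketch: print the images and environment actually set on the running
# aws-node DaemonSet, init containers included. Assumes a standard EKS
# install where the VPC CNI is the aws-node DaemonSet in kube-system.
config.load_kube_config()
apps = client.AppsV1Api()

ds = apps.read_namespaced_daemon_set("aws-node", "kube-system")
spec = ds.spec.template.spec
for c in (spec.init_containers or []) + spec.containers:
    print(f"{c.name}: {c.image}")
    for env_var in c.env or []:
        print(f"  {env_var.name}={env_var.value}")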
This is on an EKS 1.30 cluster.
The CNI image is: 602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.18.3-eksbuild.3
Again, this seems non-fatal, but I'd love to eliminate these warnings from my clusters' events if possible.
Thanks