aws-cli
aws eks update-kubeconfig fails with error TypeError: 'NoneType' object is not iterable
I just received a new MacBook with macOS Catalina and used the latest AWS CLI bundled installer to install the AWS CLI on it.
```
gregory.bonk@greg-bonk-mbp:~/workspace$ unzip awscli-bundle.zip
Archive:  awscli-bundle.zip
  inflating: awscli-bundle/install
  inflating: awscli-bundle/packages/PyYAML-5.2.tar.gz
  inflating: awscli-bundle/packages/pyasn1-0.4.8.tar.gz
  inflating: awscli-bundle/packages/docutils-0.15.2.tar.gz
  inflating: awscli-bundle/packages/botocore-1.14.3.tar.gz
  inflating: awscli-bundle/packages/s3transfer-0.3.0.tar.gz
  inflating: awscli-bundle/packages/urllib3-1.25.7.tar.gz
  inflating: awscli-bundle/packages/python-dateutil-2.8.0.tar.gz
  inflating: awscli-bundle/packages/virtualenv-16.7.8.tar.gz
  inflating: awscli-bundle/packages/colorama-0.4.1.tar.gz
  inflating: awscli-bundle/packages/jmespath-0.9.4.tar.gz
  inflating: awscli-bundle/packages/futures-3.3.0.tar.gz
  inflating: awscli-bundle/packages/awscli-1.17.3.tar.gz
  inflating: awscli-bundle/packages/rsa-3.4.2.tar.gz
  inflating: awscli-bundle/packages/six-1.14.0.tar.gz
  inflating: awscli-bundle/packages/setup/setuptools_scm-3.3.3.tar.gz
  inflating: awscli-bundle/packages/setup/wheel-0.33.6.tar.gz
```

```
gregory.bonk@greg-bonk-mbp:~/workspace$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Running cmd: /usr/local/opt/python@2/bin/python2.7 virtualenv.py --no-download --python /usr/local/opt/python@2/bin/python2.7 /usr/local/aws
Running cmd: /usr/local/aws/bin/pip install --no-binary :all: --no-cache-dir --no-index --find-links file://. setuptools_scm-3.3.3.tar.gz
Running cmd: /usr/local/aws/bin/pip install --no-binary :all: --no-cache-dir --no-index --find-links file://. wheel-0.33.6.tar.gz
Running cmd: /usr/local/aws/bin/pip install --no-binary :all: --no-build-isolation --no-cache-dir --no-index --find-links file:///Users/gregory.bonk/workspace/devops-application/infrastructure/awscli-bundle/packages awscli-1.17.3.tar.gz
```
Checking the install, I see this:

```
gregory.bonk@greg-bonk-mbp:~/workspace$ which aws
/usr/local/bin/aws
gregory.bonk@greg-bonk-mbp:~/workspace$ /usr/local/bin/aws --version
aws-cli/1.17.3 Python/2.7.17 Darwin/19.2.0 botocore/1.14.3
```
Then when running a command to get an EKS cluster configuration...
```
aws eks update-kubeconfig --name my-eks --region us-west-2
```
This is the error...
```
'NoneType' object is not iterable
```
In the debug output I can see that a response is coming back, so it appears to be a problem parsing it. Full debug log:
https://gist.github.com/TechnicalMercenary/48f15680f1fcd506b256e84157f08e30
After deleting my existing ~/.kube/config file, I ran update-kubeconfig again and now it works perfectly fine.
Perhaps it was an issue with my config's format? Here's what it used to look like:
```
$ cat ~/.kube/config
apiVersion: v1
clusters: null
contexts: null
current-context: arn:aws:eks:us-west-2:3456345634:cluster/my-eks
kind: Config
preferences: {}
users: null
```
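Those `null` entries are the likely culprit: YAML `null` loads as Python `None`, and appending a new entry to `None` fails. A speculative sketch of the failure mode and a defensive fix (plain dicts stand in for the parsed kubeconfig; this is not the CLI's actual code):

```python
# `kubectl config delete-context` can leave `contexts: null` behind, and YAML
# `null` parses to Python None rather than an empty list.
kubeconfig = {
    "apiVersion": "v1",
    "clusters": [],
    "contexts": None,  # what `contexts: null` parses to
    "users": [],
}

new_context = {"name": "arn:aws:eks:us-west-2:000000000000:cluster/my-cluster"}

# Naive insertion fails much like the CLI does:
try:
    kubeconfig["contexts"].append(new_context)
except AttributeError as err:
    print(err)

# A defensive insert normalizes None (or a missing key) to an empty list first:
entries = kubeconfig.get("contexts") or []
entries.append(new_context)
kubeconfig["contexts"] = entries
```

Normalizing `None`/missing keys to `[]` before inserting is the kind of robustness the comments below are asking the CLI for.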
I've stumbled upon this again, and I'll speculate that the AWS CLI needs to handle config files a little more robustly.
Here are a couple of flows I've come up with that cause this failure.
Flow 1
```
# Start with a fresh file
rm ~/.kube/config

# Update with the context you want
aws eks update-kubeconfig --name my-cluster --region us-west-2

# Use kubectl to delete the context
kubectl config delete-context arn:aws:eks:us-west-2:000000000000:cluster/my-cluster

# Re-apply the config
aws eks update-kubeconfig --name my-cluster --region us-west-2
```
The re-apply fails with:

```
Tried to insert into contexts, which is a <type 'NoneType'> not a <type 'list'>
```
Here is my kubeconfig before the re-apply:
```
clusters:
- cluster:
    certificate-authority-data: {...snip...}.sk1.us-west-2.eks.amazonaws.com
  name: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
contexts: null
current-context: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
      env: null
```
Flow 2
```
# Start with a fresh file
rm ~/.kube/config

# Update with the context you want
aws eks update-kubeconfig --name my-cluster --region us-west-2

# Use kubectl to delete the cluster
kubectl config delete-cluster arn:aws:eks:us-west-2:000000000000:cluster/my-cluster

# Re-apply the config
aws eks update-kubeconfig --name my-cluster --region us-west-2
```
The re-apply fails with:

```
'NoneType' object is not iterable
```
Config before the re-apply:
```
apiVersion: v1
clusters: null
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
    user: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
  name: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
current-context: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-west-2:000000000000:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
      env: null
```
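The Flow 2 error can be reproduced in a couple of lines of plain Python, since `clusters: null` parses to `None` and iterating over `None` raises exactly the message shown above (a minimal sketch, not the CLI's actual code path):

```python
# `clusters: null` in the YAML above loads as Python None.
clusters = None

# Iterating over None raises the error users are reporting.
try:
    for _cluster in clusters:
        pass
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable
```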
Apparently, the AWS CLI will need to take the changed behavior into account; see how the upstream Kubernetes issue was closed: https://github.com/kubernetes/kubernetes/issues/88524
Thanks all, I had the same issue and solved it with `rm ~/.kube/config` as well.
Thanks for the report and the detailed descriptions on how to get it to fail. I'll confer with the EKS team on how to handle this, as it is a CLI customization.
Any update on this?
The problem is that if you have only one definition of a cluster, context, or user in the `~/.kube/config` file and you delete it, then `kubectl` will set the deleted section to `null` instead of `[]`.

Then if you try to run `update-kubeconfig` with the AWS CLI, you will see an error:

- if `contexts: null`: `Tried to insert into contexts,which is a <class 'NoneType'> not a <class 'list'>`
- if `users: null`: `Tried to insert into users,which is a <class 'NoneType'> not a <class 'list'>`
- if `clusters: null`: `'NoneType' object is not iterable`
A simple workaround for the above cases is to replace every `null` with `[]` and then run the `update-kubeconfig` command, like this:

```
sed -i 's/: null/: []/g' ~/.kube/config && \
aws eks --region ${AWS_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
```
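For anyone who prefers not to shell out, the same normalization can be sketched in Python. The function name and sample text here are illustrative, not part of the AWS CLI or kubectl:

```python
def normalize_kubeconfig_text(text: str) -> str:
    """Rewrite top-level `key: null` lines to `key: []` so parsers see lists.

    Only touches the three sections kubectl is known to null out, which is
    slightly safer than a blanket `sed` over every `: null` in the file.
    """
    fixed = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped in ("clusters: null", "contexts: null", "users: null"):
            key = stripped.split(":")[0]
            fixed.append(f"{key}: []")
        else:
            fixed.append(line)
    return "\n".join(fixed)

# Example input resembling the broken configs shown above:
sample = "apiVersion: v1\nclusters: null\ncontexts: null\nkind: Config\nusers: null"
print(normalize_kubeconfig_text(sample))
```

To use it on a real file, read `~/.kube/config`, pass the text through, and write it back before running `aws eks update-kubeconfig`.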
Same for me; solved it with `rm ~/.kube/config` and ran `aws eks update-kubeconfig` again.
Thanks. This solved it for me too: `rm ~/.kube/config`. The error is too generic.
I ran into this issue on a Linux machine running Pop!_OS. I had previously set up `kubectl` when installing `minikube` and encountered the same cryptic error message when running `aws eks update-kubeconfig`. Deleting `~/.kube/config` fixed it for me locally.
My colleague stumbled on this issue.
It worked with `rm ~/.kube/config`!
Thank you!
Just ran into the same issue; deleting the `~/.kube/config` file also worked for me, thanks.
A better check/output would be appreciated
It seems to me that the AWS CLI should take an older `~/.kube/config` into account; a warning would help beginners identify the problem more easily.
EDIT: never mind, I had a rogue `KUBECONFIG` environment variable pointing somewhere else.

```
aws-cli/2.6.0 Python/3.9.11 Linux/5.10.102.1-microsoft-standard-WSL2 exe/x86_64.arch prompt/off
```

I don't have a `~/.kube/config`, as this is the first run of this tool, yet I still get this error:
```
2022-04-29 10:44:45,525 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
  File "awscli/clidriver.py", line 459, in main
  File "awscli/clidriver.py", line 594, in __call__
  File "awscli/customizations/commands.py", line 191, in __call__
  File "awscli/customizations/eks/update_kubeconfig.py", line 134, in _run_main
  File "awscli/customizations/eks/kubeconfig.py", line 270, in insert_cluster_user_pair
  File "awscli/customizations/eks/kubeconfig.py", line 217, in insert_entry
awscli.customizations.eks.kubeconfig.KubeconfigError: Tried to insert into users,which is a <class 'NoneType'> not a <class 'list'>
Tried to insert into users,which is a <class 'NoneType'> not a <class 'list'>
```
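The rogue `KUBECONFIG` variable mentioned in the edit above is easy to check for. A small diagnostic sketch (the function name is mine, and for simplicity it ignores that `KUBECONFIG` may hold a colon-separated list of paths):

```python
import os

def effective_kubeconfig() -> str:
    # An explicit KUBECONFIG wins; otherwise kubectl and the AWS CLI fall
    # back to the default ~/.kube/config path.
    return os.environ.get("KUBECONFIG") or os.path.expanduser("~/.kube/config")

print("kubeconfig in effect:", effective_kubeconfig())
```

If this prints an unexpected path, the CLI was never reading the file you thought it was.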
The problem is still happening. Are there any plans to fix this issue?
This happened when setting up an EKS cluster after using minikube. Removing the kubeconfig works, but it seems like the AWS CLI should be more robust.
I have just come across this issue. As @TechnicalMercenary noted, a temporary fix is removing the `~/.kube/config` file and running the command again.
However, AWS should consider fixing this issue.