Kubeconfig file is not written during EKS project build; exits with error
**Describe the bug**
In certain configuration environments, the EKS build project fails to write to the kubeconfig file. The project aborts with this error:
```
Using AWS profile [f5] from Pulumi configuration
adding eks-sample-pulumi361-eksCluster-8dd5b98 cluster to local kubeconfig
'NoneType' object is not iterable
```
**To Reproduce**
Steps to reproduce the behavior:
1. Start in an environment where you have custom kubeconfig contexts, including GCP clusters.
2. Select any context.
3. Run the build.
I am doing additional work to drill down into what is happening, but so far it seems to be tied to the way the gcloud utility updates the configuration.
**Expected behavior**
The project should run to completion without errors (i.e., the kubeconfig should be created).
**Your environment**
- Linux amd64
- Master Branch
- Pulumi / AWS
**Additional context**
None
The issue is definitely tied to GCP and the way the gcloud utility updates the config. The configuration that makes the process fail is:
```yaml
apiVersion: v1
clusters: null
contexts: null
current-context: name-of-the-gcp-k8-cluster
kind: Config
preferences: {}
users: null
```
It's obviously missing something Pulumi is expecting: the `clusters`, `contexts`, and `users` keys are `null` rather than lists.
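A minimal sketch of the suspected failure mode, assuming the build code iterates over the kubeconfig's top-level lists while adding the new cluster (the loop and key names here are illustrative, not the actual project code). YAML parses `clusters: null` as Python `None`, not an empty list:

```python
# Kubeconfig as gcloud can leave it: top-level lists set to null,
# which a YAML loader turns into None rather than [].
config = {
    "apiVersion": "v1",
    "clusters": None,
    "contexts": None,
    "current-context": "name-of-the-gcp-k8-cluster",
    "kind": "Config",
    "preferences": {},
    "users": None,
}

try:
    # Any `for cluster in config["clusters"]` style loop hits this.
    [c["name"] for c in config["clusters"]]
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable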
@qdzlug Is this issue still relevant?
@dekobon - yes, there is still an issue here but it's more tied to custom-contexts in a broad sense.
If you just use the default `~/.kube/config`, things are fine. If you have different contexts defined in separate YAML files in `~/.kube/custom-contexts`, it works most of the time. I'm still not 100% sure what breaks it or when it breaks; I hit it sporadically.
My plan here is to work this into a cleanup process. If you build and destroy MARA, you wind up with a number of orphaned contexts in your config files, so I want to clean those up on a destroy (probably with a prompt, or requiring an argument, to confirm the cleanup). Then I want to overhaul the part of the code that builds out the kubeconfig so we validate it and fix it if we find issues.
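The validation step described above could look something like this sketch: a hypothetical helper (not the MARA implementation) that normalizes `null` top-level keys before any merge logic iterates over them.

```python
def normalize_kubeconfig(config: dict) -> dict:
    """Replace None top-level kubeconfig lists with empty lists.

    Hypothetical repair helper: gcloud can leave `clusters`, `contexts`,
    and `users` as null, which breaks code that iterates over them.
    """
    for key in ("clusters", "contexts", "users"):
        if config.get(key) is None:
            config[key] = []
    return config


# The broken config from the report now merges safely.
broken = {"apiVersion": "v1", "clusters": None, "contexts": None, "users": None}
fixed = normalize_kubeconfig(broken)
print(fixed["clusters"])  # []
```

Running this on every load (not just on destroy) would also repair configs that other tools have already mangled.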
Related to #7
Will be closed / remediated by #167