Lens can't open pod or node terminal on a single cluster
Describe the bug Hello, I was upgrading EKS to 1.29 when Lens suddenly stopped opening terminals for both nodes and pods. It might be a coincidence, because during the node group upgrade I was still able to drain nodes using the node terminal in Lens up to a certain point.
After it stopped working, the node terminal showed this message: failed to open a node shell: Unable to start terminal process: CreateProcess failed
Pod terminals showed errors similar to this:
+ kubectl exec -i -t -n <ns> <pod> -c de ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.
At line:1 char:1
+ kubectl exec -i -t -n <ns> <pod> -c de ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
Interestingly, the other clusters, which run 1.28, were fine. I was also able to run all of the commands from my local terminal for both the 1.28 and 1.29 clusters.
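For reference, the kind of commands that kept working from a local PowerShell window (same kubeconfig and context) while failing through Lens; the namespace, pod and container names below are placeholders:
# exec into a pod directly with kubectl, bypassing Lens entirely
kubectl exec -i -t -n <ns> <pod> -c <container> -- sh
# quick non-interactive variant of the same check
kubectl exec -n <ns> <pod> -c <container> -- ls /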
I thought kubectl might be too old, so I upgraded it from 1.27 to 1.29. That also didn't change anything.
As a last step, I reinstalled the newest Lens IDE version, which did not help either.
The other clusters I manage are on 1.28, and I had no issues with them even after this started happening on the 1.29 cluster.
I also got a Lens IDE informational message like this: If terminal shell is not ready please check your shell init files, if applicable.
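Since that message points at shell init files, one check worth noting here (a generic PowerShell sketch, not something specific to Lens) is whether a profile script exists that the integrated terminal would load on startup:
# does a per-user profile script exist? (assumption: Lens spawns the default PowerShell, which loads $PROFILE)
Test-Path $PROFILE
# if it exists, inspect it for anything that could break a non-interactive shell
notepad $PROFILE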
After trying all of these things, I also got this error: Error occurred: Pod creation timed outfailed to open a node shell: failed to create node pod
To Reproduce Steps to reproduce the behavior:
- Go to nodes section
- Click on Node shell for target node
Wait at the Connecting... message until the error appears
- See error
Expected behavior This should open a node terminal or allow running the prepared commands like drain/cordon/uncordon.
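For comparison, the prepared node actions can also be run with plain kubectl against the affected node; this is only a sketch with a placeholder node name, not necessarily the exact commands Lens issues:
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl uncordon <node-name>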
Environment (please complete the following information):
- Lens Version: 2024.11.131815-latest (not sure which one I used before)
- OS: Windows 10 Enterprise, Version 22H2, OS Build 19045.5131
- Installation method (e.g. snap or AppImage in Linux): .exe for Windows
Kubeconfig: Quite often the problems are caused by a malformed kubeconfig that the application tries to load. Please share your kubeconfig; remember to remove any secrets and sensitive information.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1UQXlOakV5TVRJeE0xb1hEVE15TVRBeU16RXlNVEl4TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjRhCmFsSUxjNFdWU2M3ZkY1MFA1eFNSeUwrdkl2eW5obUV1KzVXYVhIZmNpMTcyZVlqVHdiWFN5SmozbFVneWdqVkIKVWlsNk5FVFFFYllBajVEb3RreFQ3ODBvK25heTJ3dWZLL2wwUjQyZXJrTmlCMnVCWEE0OUdRTjlieHl5Yi9PbwpGTkVQaXQ5KzVUYnlJYmtUYmo4SjdOSkdUdmxOK3BLQXNCOW9qd2RKNHlNMnZITjZwMG1YRDY1S0JqMC85dm56CncxVnRXRzNaQ2VNSk1jZjhVRk5lcEk2K3BEbjU0OUNQekgwVVFUTTZjd1k1Y0ZKQko5UjEvNitUekxjbGRJWk8KMzIybFI2djRaa1hWOTJibFE1cG1nUU4xdzhCWmpVbkVoVWN1VEQxb1FweVorajAvRzU0UHNJa0dRSXdlM2tPVQpQMnlKd3ZyUDJDTWVuT25rdStrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMRXY0SHlzM2NHbFlLMDVBdEJtcUtaWWo4Ym9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBREQwQkphTGNpNDJXUlgxK244VApVVTRGZU94aTJOSHU2aG40OS9nU0VWRGF6WEMrdlZ4NXhRY0JJejNBWTMwQitnbUVGdUNDVEIydzdMeEdGL1A0CjBBeFVUTklHZ0hrUWJ6bGdKV25Da2QybzUra1hzZUtwRWxsRzArQUVXd0x1K2daNWhYVHRsMk5TVkhtaElaVnoKdlo1SStleGYxK2NBOWNyc2w1THFLQ1J6OW00TXVFMGlFQkJ2L3ZxV25peGNsaG9HR3dETUxuL0RnNmdhSjQzNgp3M1BsZ2lpa1F2T3FZSWdpMEI5NEl3MGRmZVFJaXF5c2M0cUlIRFJWSXpiM2F1MG92T0xFcUpSWmpjTEh6ZWIrCktxcXNYQWRpc1l2dUZLRzNrSVVsVFBPeXlVdTJqaTJ4dlhabFJCS29PRnpheGZYbnhjR3dvRHlnS3hBcWtLalcKNDBvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: working-cluster
  name: working-cluster
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXlOVEE0TURreU5sb1hEVE15TURVeU1qQTRNRGt5Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG81ClhSZGNZcXU5WUdubStwNDBibEhnbjlCQnIrUnZsSWlZb0R2RHUzZ2hhSG5lVm9vYnV3ZExYUGdhUHlTSU5XVzMKNW52eVk0dWdYTURlT0w0ZnJMT0xTMFlTNFpFK1Qrc3p6UGJuaGp4RHk1d0VxbDRSMG53UnRrZVk0R3NHVE51YwptTkY5eEVCRGRmdWdDNG5EV0N5Z0hJSVVnRVhMLzB6dENRTlNUV3NMNFJndkRGakdIQmF3RllQbUVjWU9GNXdTCi91UjIrNDUxc0tSS3RZMjVNUWFjSjNOYmp4UmlSb3dyYy82eUREcnV2QWtxdDdSRktZMnR3YXFvWVh5ZWQvS0gKamVEVnVXNmRqMTJqTHRRUFpwbDRMdk45U242Y3pGTVRkYUJvZ285QlZQZWRFMDJ5bnZQZU56Y0xlMnFGNkRJMwpicEZvT05ZbGQ2anQwVGJXUUxFQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZId1ZsT3JRWkxoSnNaMlFSQms2cmtDU29xVUtNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmpydEpnUklrck1NUGswbWVmSgorN21xbzI5cFhlSmdJYzVBRXY2bEpFZzRjcWc0Tm9vL0xtc3FLOGh6S21pNGcyamRKR3VZRGwvVlpYQXJnVHpvCkRKK0dzSnVVYnJUMTVEZzZZZUZmUEZ2bmRRSCs2TVR3akJLSUZxU1RWclBzZXRoOEVsZTRyTG9MVEdjWnpoL1gKWFNJTjYrV01oNVU2Y2JtdS9JYWlCTERGckwyblNGcmFycUtJdmNNajB3cU9teVdFMGU3QXhlZU5JTUk1aWJxbwpEaFl2MHI0enh5MVVLZXdmY2kvVnZLUjV0c1BwMG1ORjl6dE50aVIySG52UjlvVEg3akplSitnaWMwZDQ2V3Z3CnZzditGM0RnVUhCbm9QT0FlZVhqNzVuRFZ1N0oyaGk3RDFXVU9xL1J5RUJWWmtaNnZFQXo5a25OTUttbkhCQm4KTHdBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: cluster-in-question
  name: cluster-in-question
contexts:
- context:
    cluster: working-cluster
    user: working-cluster
  name: working-cluster
- context:
    cluster: cluster-in-question
    user: cluster-in-question
  name: cluster-in-question
current-context: cluster-in-question
kind: Config
preferences: {}
users:
- name: working-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - region
      - eks
      - get-token
      - --cluster-name
      - working-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: profile name
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: cluster-in-question
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - region
      - eks
      - get-token
      - --cluster-name
      - cluster-in-question
      command: aws
      env:
      - name: AWS_PROFILE
        value: profile name
      interactiveMode: IfAvailable
      provideClusterInfo: false
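To rule out the kubeconfig itself, the exec plugin and the context can be exercised outside Lens; the region and profile values below are placeholders for the redacted ones above:
# run the exec plugin the same way the kubeconfig does (AWS_PROFILE is set via env there)
$env:AWS_PROFILE = "<profile-name>"
aws eks get-token --cluster-name cluster-in-question --region <region>
# use the same context that Lens loads
kubectl --context cluster-in-question get nodes
# node shell requires Lens to create a helper pod; kube-system is an assumption about where it is created
kubectl --context cluster-in-question auth can-i create pods -n kube-system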
Additional details I checked another issue where you asked whether haproxy or a similar tool is being used. In my case, no such tools are in use.
+1
Hello KarooolisZi,
Thank you for reaching out to Lens support!
Thank you for reporting a bug. We are working on your issue. Stand by for further updates.
Regards, Oleksandr from Lens
Hello @okoshevka, Thank you for the swift response!
Bug repeated: can't pod exec (issues/8113)
@sm1lexops details are different
Hi All,
The same for the node shell:
Error occurred: Pod creation timed outfailed to open a node shell: failed to create node pod
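For anyone hitting the Pod creation timed out variant: the error text indicates Lens tries to create a helper pod on the target node and gives up when it never becomes ready, so it can help to watch whether that pod shows up at all. The node name is a placeholder and the node-shell name prefix is an assumption that may differ between Lens versions:
# watch everything scheduled onto the affected node while Lens shows "Connecting..."
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> --watch
# look for recent events explaining why the pod could not be created or scheduled
kubectl get events --all-namespaces --sort-by=.lastTimestamp | Select-String node-shell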
Can confirm, this is still an ongoing issue on version 2025.1.161916-latest
is there any workaround for this?
Any news?
Any news on this?