Exec provider errors encountered while loading configuration in interactive mode generate a secondary exception
What happened (please include outputs or screenshots):
Due to already-reported issues with ExecProvider in 32.0.0, loading my EKS-based kubeconfig failed. However, I was testing in an interactive shell, so what I saw was this:
>>> kubernetes.config.load_config(context='dev-admin')
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: the following arguments are required: command
ERROR:root:'NoneType' object has no attribute 'strip'
>>>
What you expected to happen:
I did not expect the secondary 'NoneType' object has no attribute 'strip' exception; it hides the actual exec failure.
How to reproduce it (as minimally and precisely as possible):
The following reproduction only works with python-kubernetes version 31 or older, because the bug introduced in ExecProvider in 32.0.0 causes it to lose the exec arguments. From visual inspection of the code, however, the bug described here still exists in version 32.
$ kubectl config set-credentials test-fail --exec-api-version='client.authentication.k8s.io/v1beta1' --exec-command=sh --exec-arg=-c --exec-arg='exit 1'
$ kubectl config set-cluster test-fail
$ kubectl config set-context test-fail --cluster=test-fail --user=test-fail
$ python -c 'import kubernetes; kubernetes.config.load_config(context="test-fail")'
ERROR:root:'NoneType' object has no attribute 'strip'
$ python -c 'import kubernetes; kubernetes.config.load_config(context="test-fail")' | cat
ERROR:root:exec: process returned 1
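The difference between the two invocations above appears to come down to whether stdout is attached to a tty. A quick way to confirm this on your own system (a minimal illustration, not the library's code) is:

import sys

# When run directly in a terminal this prints True; when piped through
# `| cat` it prints False, which is why the two invocations above
# produce different error messages.
print(sys.stdout.isatty())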
Anything else we need to know?:
The error occurs when run with stdout pointing to a tty: in ExecProvider.run(), if is_interactive is set to True, the subprocess is opened with stderr pointing to sys.stderr rather than subprocess.PIPE. The later error-handling code never checks whether is_interactive is True or whether stderr is not None, so it blindly calls stderr.strip() and raises the secondary exception.
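A minimal sketch of the failure mode and one possible guard, based only on the behavior described above (the function name, the is_interactive flag, and the error-message shape are assumptions for illustration, not the library's exact code):

import subprocess
import sys

def run_exec_plugin(args, is_interactive):
    # In interactive mode stderr is inherited from the parent process
    # (sys.stderr) instead of being captured via a pipe.
    proc = subprocess.Popen(
        args,
        stdout=subprocess.PIPE,
        stderr=sys.stderr if is_interactive else subprocess.PIPE,
    )
    stdout, stderr = proc.communicate()  # stderr is None when not piped
    if proc.returncode != 0:
        # Guard against stderr being None before calling .strip(); without
        # this check the interactive path raises the secondary
        # AttributeError instead of reporting the real exec failure.
        message = stderr.strip() if stderr is not None else b''
        raise RuntimeError(
            'exec: process returned %d. %s'
            % (proc.returncode, message.decode())
        )
    return stdout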
Environment:
- Kubernetes version (kubectl version): Client Version: v1.25.4, Kustomize Version: v4.5.7
- OS (e.g., MacOS 10.13.6): MacOS 15.3
- Python version (python --version): Python 3.11.7
- Python client version (pip list | grep kubernetes): kubernetes 31.0.0
Oh, this is fixed by https://github.com/kubernetes-client/python/pull/2338
https://github.com/kubernetes-client/python/pull/2338 is merged.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.