`ExecProvider` bug when accessing `stderr` before `None` type check
What happened (please include outputs or screenshots):
When the authentication method is exec and the provider is aws, the command can sometimes return a non-zero status without stderr. Below is an example of the output of config.load_kube_config() in this case:
The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.
ERROR:root:'NoneType' object has no attribute 'strip'
In this case the command returns a 255 exit code with stderr: None. This prevents the ConfigException from being raised.
Proposing a change to kubernetes/base/config/exec_provider.py
as such:
--- a/kubernetes/base/config/exec_provider.py
+++ b/kubernetes/base/config/exec_provider.py
@@ -80,8 +80,8 @@ class ExecProvider(object):
         exit_code = process.wait()
         if exit_code != 0:
             msg = 'exec: process returned %d' % exit_code
-            stderr = stderr.strip()
             if stderr:
+                stderr = stderr.strip()
                 msg += '. %s' % stderr
             raise ConfigException(msg)
         try:
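To illustrate, the patched control flow can be sketched as a standalone function (names are hypothetical): with the strip moved inside the truthiness check, a None stderr no longer masks the exception.

```python
class ConfigException(Exception):
    pass

def raise_exec_error(exit_code, stderr):
    # Mirrors the proposed patch: strip stderr only after confirming
    # it is truthy, so stderr=None cannot raise AttributeError.
    msg = 'exec: process returned %d' % exit_code
    if stderr:
        stderr = stderr.strip()
        msg += '. %s' % stderr
    raise ConfigException(msg)

for stderr in (None, ' access denied\n'):
    try:
        raise_exec_error(255, stderr)
    except ConfigException as e:
        print(e)
# exec: process returned 255
# exec: process returned 255. access denied
```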
What you expected to happen:
ConfigException should be raised if the exec command fails.
How to reproduce it (as minimally and precisely as possible): Use an exec auth configuration with invalid SSO creds.
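For context, the exec auth path is configured in kubeconfig roughly as below (cluster, user, and profile names are placeholders); when the SSO session behind the profile has expired, the aws command exits non-zero and the client hits the code path above.

```yaml
# Hypothetical kubeconfig excerpt; all names are placeholders.
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      - --profile
      - my-sso-profile
```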
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version): v1.28.3
- OS (e.g., MacOS 10.13.6): macOS 13.6
- Python version (python --version): 3.11.6
- Python client version (pip list | grep kubernetes): 28.1.0
@jonoden Would you like to send a PR? Thanks
/assign @jonoden
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@roycaihw, I'm happy to send a PR.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.