Regression in ExecProvider for AWS EKS Token Retrieval after adding `shell=True` in exec_provider (introduced in commit 2dfa782)
Summary
After commit 2dfa782, our AWS EKS token retrieval command fails when running in a Docker container using Python 3.11. Previously, the same command executed without errors. The regression appears linked to using `shell=True` for Windows compatibility.
Details
What happened:
- We run the following command arguments for the exec provider: `['aws', '--region', 'us-east-1', 'eks', 'get-token', '--cluster-name', 'cluster', '--output', 'json']`
- Under commit 2dfa782, the command fails. Debugging shows that when the process is recreated without `shell=True`, it works as expected. When allowed to run as is (with `shell=True`), the stderr output indicates a failure.
- Screenshot from the debugger (showing stderr, the recreation of the process, and the JSON output) is included below.
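For context, here is a minimal sketch, using `subprocess.run` directly rather than the client's own exec_provider code, of why passing an argument list together with `shell=True` misbehaves on POSIX: the shell receives only the first list element as the command, and the remaining elements become positional parameters of the shell itself, so `aws` ends up running with no subcommand.

```python
# Minimal sketch (not the client's exec_provider implementation) showing the
# difference between list-form args with and without shell=True on a POSIX system.
# Assumes the AWS CLI is installed and on PATH.
import subprocess

args = ['aws', '--region', 'us-east-1', 'eks', 'get-token',
        '--cluster-name', 'cluster', '--output', 'json']

# Without shell=True: every list element is passed to the aws binary directly.
ok = subprocess.run(args, capture_output=True, text=True)

# With shell=True on POSIX: equivalent to `/bin/sh -c 'aws' --region us-east-1 ...`,
# so only `aws` itself is executed and it exits with a usage error on stderr.
broken = subprocess.run(args, shell=True, capture_output=True, text=True)

print('without shell=True:', ok.returncode)
print('with shell=True:   ', broken.returncode, broken.stderr[:200])
```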
What you expected to happen:
- The token retrieval command should run successfully, as in previous versions of the Python client.
How to reproduce it:
- Use a Docker container with Python 3.11: `FROM python:3.11` followed by `RUN pip install kubernetes`
- Configure AWS EKS, or manually set `self.args` to `['aws', '--region', 'us-east-1', 'eks', 'get-token', '--cluster-name', 'cluster', '--output', 'json']`
- Run `from kubernetes import client, config as kubeconfig` and then `kubeconfig.load_config()` (a runnable sketch follows this list)
- Observe that the command fails when using the latest commit that includes `shell=True`.
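A runnable version of the steps above might look like the following sketch; it assumes the AWS CLI is installed in the container and the active kubeconfig's user is configured with the `aws eks get-token` exec plugin.

```python
# repro.py -- run inside a python:3.11 container after `pip install kubernetes==32.0.0`.
# Loading the config triggers the exec provider, which shells out to `aws eks get-token`;
# under the shell=True code path this fails on Linux.
from kubernetes import client, config as kubeconfig

kubeconfig.load_config()

# Optional: make an API call to confirm the retrieved token actually works end to end.
print(client.CoreV1Api().list_namespace(limit=1))
```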
Environment:
- Docker base image: `python:3.11`
- Python version: 3.11
- Kubernetes Python client version: 32.0.0
- Relevant commit: 2dfa782
Additional Information
- The issue appears tied to the introduction of `shell=True` in the code, presumably for Windows support. Removing or bypassing `shell=True` resolves the problem in a Linux-based environment (a platform-conditional sketch follows below).
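For illustration only (this is not the actual change made in the client or in PR #2340), one common way to keep Windows compatibility without breaking POSIX is to enable `shell` only on Windows:

```python
# Hypothetical sketch of platform-conditional invocation: use shell=True only on
# Windows, where it was presumably needed, and keep the plain list invocation on POSIX.
import platform
import subprocess

def run_exec_plugin(args):
    use_shell = platform.system() == 'Windows'
    return subprocess.run(args, shell=use_shell, capture_output=True, text=True)

result = run_exec_plugin(['aws', '--region', 'us-east-1', 'eks', 'get-token',
                          '--cluster-name', 'cluster', '--output', 'json'])
print(result.returncode, result.stdout[:200])
```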
I think this has been fixed in 32.0.1. Please upgrade and check again.
@roycaihw can you point to the commit where it was fixed? I couldn't find it in the git history.
I think it's https://github.com/kubernetes-client/python/pull/2340
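If you want to confirm which client you are running before retrying (assuming the fix landed in 32.0.1, as suggested above), a quick check:

```python
# Print the installed kubernetes client version; expect >= 32.0.1 after upgrading,
# e.g. with `pip install --upgrade kubernetes`.
import kubernetes
print(kubernetes.__version__)
```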
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.