Basic authentication was removed, but `kubectl config set-credentials` can still set basic auth info
What happened?
Basic authentication was removed in v1.19 (kubernetes/kubernetes#89069), but the `kubectl config set-credentials` command remains and can still be used to set basic auth info. Some users may misuse this command to save a password into their kubeconfig file, which leads to a password-leak issue.
What did you expect to happen?
Remove the basic auth flag `--password`, and update all documents that reference it.
How can we reproduce it (as minimally and precisely as possible)?
Example from https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-set-credentials-em-:

```console
$ kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
```
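Running that command persists the credentials verbatim into the kubeconfig user entry, roughly like the fragment below (field names follow the kubeconfig `v1` format; the `clusters` and `contexts` sections are omitted for brevity):

```yaml
apiVersion: v1
kind: Config
users:
- name: cluster-admin
  user:
    username: admin
    password: uXFGweU9l35qcif   # stored in plaintext on disk
```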
Anything else we need to know?
No response
### Kubernetes version

<details>

```console
$ kubectl version
v1.25+
```

</details>

### Cloud provider

<details>
</details>

### OS version

<details>

```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```

</details>

### Install tools

### Container runtime (CRI) and version (if applicable)

### Related plugins (CNI, CSI, ...) and versions (if applicable)
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig auth /sig cli /sig docs
Does this just need removal of the `--password` flag? If so, I think I know how to do it, and I will take this issue.
/transfer kubectl
I think kubectl should warn you if you try to set basic authn.

I ran `kubectl config set-credentials cluster-admin --username=admin --password=whatwillhappenhere`, which sets basic authn, and kubectl didn't warn me. Then I ran `kubectl config view`, and it shows that the username and password were written in.

So I think we could either remove the `--password` flag here or find another way to handle it.
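To illustrate what the comments above observe, here is a trimmed sketch of the merge behavior, not the actual kubectl source (the real user type is `clientcmdapi.AuthInfo`; the `User` struct and `setCredentials` function below are assumptions for illustration): the flag values are copied into the user entry unchanged, so the password lands on disk in plaintext.

```go
package main

import "fmt"

// User mirrors only the basic-auth fields of a kubeconfig user entry.
type User struct {
	Username string
	Password string
}

// setCredentials mimics, in outline only, what `kubectl config
// set-credentials --username/--password` does: non-empty flag values
// overwrite the user entry verbatim, with no hashing or redaction.
func setCredentials(users map[string]*User, name, username, password string) {
	u, ok := users[name]
	if !ok {
		u = &User{}
		users[name] = u
	}
	if username != "" {
		u.Username = username
	}
	if password != "" {
		u.Password = password
	}
}

func main() {
	users := map[string]*User{}
	setCredentials(users, "cluster-admin", "admin", "whatwillhappenhere")
	// The password is persisted exactly as typed:
	fmt.Println(users["cluster-admin"].Password)
}
```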
/triage accepted
We will begin a deprecation process for this and add a warning.
@mpuckett159 How complicated is the deprecation process? Do you think I can take this issue? Thanks.
I'm not 100% on the process but I believe we need to add a deprecation warning for 1 (2?) release cycles before we can actually remove the flag.
/assign
Deprecation policy for CLI is 12 months (or 2 releases, whichever is longer): https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli
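The warning discussed above could be sketched roughly as follows. This is only an illustration of the deprecation-window behavior, not the actual kubectl change; the `basicAuthDeprecationWarning` function and the message text are assumptions, and a real change would hook into the command's flag handling.

```go
package main

import "fmt"

// basicAuthDeprecationWarning returns the warning that set-credentials
// could print while the basic-auth flags remain during the deprecation
// window, and an empty string when neither flag was used.
func basicAuthDeprecationWarning(username, password string) string {
	if username == "" && password == "" {
		return ""
	}
	return "Flag --username/--password is deprecated: basic authentication " +
		"stores credentials in plaintext in the kubeconfig, and server-side " +
		"support for it was removed in v1.19."
}

func main() {
	fmt.Println(basicAuthDeprecationWarning("admin", "uXFGweU9l35qcif"))
}
```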
Should `username` be deprecated as well as the `password` flag mentioned in the issue? Since auth will need to be done with a token, `username` will either cause an error or be set to an empty string in the current implementation.

> Should `username` be deprecated as well as the `password` flag mentioned in the issue?

Yeah. Double-check, but I'm pretty sure `username` is only used for basic auth, so it should be deprecated too.
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten