kubectl config view shows auth-provider secrets
What happened:
kubectl config view (without the --raw flag) prints the users[*].user.auth-provider.config.{client-secret, id-token, refresh-token} values. The command also prints the users[*].user.password value.
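For illustration, here is roughly what the unredacted output looks like. Only the field paths come from the report above; the cluster name, user name, server URL, and secret values are made up:

```
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://example-cluster:6443    # hypothetical cluster
  name: example-cluster
kind: Config
users:
- name: example-oidc-user
  user:
    auth-provider:
      config:
        client-secret: not-a-real-secret          # printed in plain text
        id-token: eyJhbGciOiJSUzI1NiIs...         # printed in plain text
        refresh-token: 1//not-a-real-token        # printed in plain text
      name: oidc
```

Note how certificate-authority-data is already masked while the auth-provider values appear verbatim.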
What you expected to happen:
I think this command should print DATA+OMITTED (or REDACTED) for these fields, as it already does for clusters[*].cluster.certificate-authority-data.
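A sketch of the expected output for the same hypothetical user, assuming the masking convention that certificate-authority-data already uses (the exact placeholder word is an implementation choice):

```
users:
- name: example-oidc-user
  user:
    auth-provider:
      config:
        client-secret: REDACTED
        id-token: REDACTED
        refresh-token: REDACTED
      name: oidc
    password: REDACTED
```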
Environment:
- Kubernetes client version (use kubectl version): 1.23.4 (e6c093d87ea4cbb530a7b2ae91e54c0842d8308a)
- OS: macOS Monterey (12.1)
/triage accepted
/assign @mpuckett159
@eddiezane: GitHub didn't allow me to assign the following users: mpuckett159.
Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/triage accepted
/assign @mpuckett159
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale