kubectl
Case sensitivity of --restart flag values
What happened:
kubectl commands accept a --restart flag. Running the command below with a lowercase value:
kubectl run busybox --image=busybox --restart=never
fails with the error -> error: invalid restart policy: never
Running the same command with the capitalized value:
kubectl run busybox --image=busybox --restart=Never
works without any errors.
What you expected to happen:
Isn't it better to accept both never and Never?
How to reproduce it (as minimally and precisely as possible):
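The behavior can be reproduced with the two commands from the report (this assumes kubectl v1.27 and access to a running cluster):

```shell
# Lowercase value is rejected before anything reaches the cluster:
kubectl run busybox --image=busybox --restart=never
# error: invalid restart policy: never

# Capitalized value matches the RestartPolicy API constant and succeeds:
kubectl run busybox --image=busybox --restart=Never
```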
Anything else we need to know?:
Environment:
- Kubernetes client and server versions (use kubectl version):
  - Client: Major:"1", Minor:"27", GitVersion:"v1.27.2"
  - Server: Major:"1", Minor:"27", GitVersion:"v1.27.4"
- Cloud provider or hardware configuration:
- OS (e.g. cat /etc/os-release):
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think the case sensitivity is legitimate and the current validation is reasonable: there are three restart policies: Never, OnFailure, Always.
Rather than removing case sensitivity, we can improve the error message.
The error message is from here.
It can be changed to:
"invalid restart policy: %s, value must be one of: Never, Always, OnFailure"
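The suggested improvement can be sketched in Go. This is a hypothetical stand-alone function (validateRestartPolicy is not kubectl's actual code): it keeps the match case-sensitive but lists the valid values in the error so users immediately see the expected casing.

```go
package main

import "fmt"

// validRestartPolicies are the accepted values for the --restart flag,
// matching the Kubernetes RestartPolicy API constants.
var validRestartPolicies = []string{"Always", "OnFailure", "Never"}

// validateRestartPolicy rejects values that do not exactly match one of
// the constants, and names all valid values in the error message.
func validateRestartPolicy(restart string) error {
	for _, p := range validRestartPolicies {
		if restart == p {
			return nil
		}
	}
	return fmt.Errorf("invalid restart policy: %s, value must be one of: Never, Always, OnFailure", restart)
}

func main() {
	// Wrong case is still rejected, but the error now shows valid options.
	fmt.Println(validateRestartPolicy("never"))
	// Exact match is accepted.
	fmt.Println(validateRestartPolicy("Never"))
}
```

Keeping the comparison case-sensitive preserves backward compatibility with the API constants; only the message text changes.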
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
PR for the issue: #121570