"kubectl rollout history --revision=n" produces wrong/inconsistent output with "-o yaml"?
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubectl rollout
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: GCP n1-highcpu-2 (2 vCPUs, 1.8 GB memory)
- OS (e.g. from /etc/os-release): Ubuntu 16.04.6 LTS (Xenial Xerus)
- Kernel (e.g. uname -a): Linux node0 4.15.0-1027-gcp #28~16.04.1-Ubuntu SMP Fri Jan 18 10:10:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm
  kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
- Others:
What happened: I was learning the basics of rolling updates on a DaemonSet. The template is very simple ...
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-one
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        system: DaemonSetOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
After I used kubectl set image ds ds-one nginx=nginx:1.12.1-alpine to flip the image between nginx:1.9.1 and nginx:1.12.1-alpine back and forth a few times (and deleted the pods to get them updated), I ran kubectl rollout history daemonset ds-one to check the rollout history ...
daemonset.extensions/ds-one
REVISION CHANGE-CAUSE
3 <none>
4 <none>
Then I used kubectl rollout history daemonset ds-one --revision=3 and ... --revision=4 to check the details of each revision.
daemonset.extensions/ds-one with revision #3
Pod Template:
  Labels:       app=nginx
                system=DaemonSetOne
  Containers:
   nginx:
    Image:        nginx:1.9.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

daemonset.extensions/ds-one with revision #4
Pod Template:
  Labels:       app=nginx
                system=DaemonSetOne
  Containers:
   nginx:
    Image:        nginx:1.12.1-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
However, when I repeated the same two commands with -o yaml added, I got the exact same output, showing image: nginx:1.12.1-alpine (the latest revision) regardless of which revision I specified in the command.
What you expected to happen:
The help says -o only applies to the output format, while --revision shows the details of a given revision. So using -o together with --revision should not change which revision's details are produced, I reckon.
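For reference, here is roughly how to reproduce it end to end, a sketch of the steps above (assuming the template is saved as ds-one.yaml; the filename is just for illustration, the image tags and revision numbers are the ones from my report):

# Create the DaemonSet, then flip the image back and forth a few times
kubectl apply -f ds-one.yaml
kubectl set image ds ds-one nginx=nginx:1.12.1-alpine
kubectl set image ds ds-one nginx=nginx:1.9.1
kubectl set image ds ds-one nginx=nginx:1.12.1-alpine

# Plain output shows the image belonging to each revision (correct)
kubectl rollout history daemonset ds-one --revision=3
kubectl rollout history daemonset ds-one --revision=4

# With -o yaml, both commands print the template of the latest revision (the bug)
kubectl rollout history daemonset ds-one --revision=3 -o yaml
kubectl rollout history daemonset ds-one --revision=4 -o yaml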
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Encountered this issue right now. It makes it very hard to roll back precisely to old revisions if you cannot see which image a revision refers to.
Thanks @stefaneg for your follow-up. I was almost convinced my bug report was talking rubbish, as nobody gave a damn!
Interestingly, it gave me the right image info if the -o yaml option is omitted, which is probably the reason why people do not care more about this.
kubectl rollout history deployment.v1.apps/myown-deployment --revision=182 | grep Image
New deployment revisions may be caused by things other than image updates though, so this is still inconvenient.
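In the meantime, a possible workaround for Deployments (an untested sketch; it assumes a single-container pod template, and the dots in the annotation key have to be escaped for custom-columns) is to read the image straight from the ReplicaSets, since each Deployment revision is kept as a ReplicaSet annotated with deployment.kubernetes.io/revision:

# Lists every ReplicaSet in the namespace with its revision number and first container image
kubectl get replicasets \
  -o custom-columns='NAME:.metadata.name,REVISION:.metadata.annotations.deployment\.kubernetes\.io/revision,IMAGE:.spec.template.spec.containers[0].image'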
Hi @stefaneg , maybe my bug report was poorly written then. I did mean the command gave me the right image version info without -o yaml:
- With -o yaml, I got the same version number regardless of which revision I was checking.
- Without -o yaml, I got the correct version number corresponding to the revision I was checking.
/remove-lifecycle rotten
/sig cli
/area kubectl
/kind bug
/priority P2
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is this problem solved?
I am noticing this behavior with daemonsets and -o yaml as well on 1.20.9. Regular output (no -o specified) produces the correct information.
Same here with 1.20.7:
- With -o yaml, I got the same version number (the most recent) regardless of which revision I was checking.
- Without -o yaml, I got the correct version number corresponding to the revision I was checking.
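Until this is fixed, a rough workaround for DaemonSets (and StatefulSets) is to read the stored revisions directly, since the controller keeps each old pod template in a ControllerRevision object. A sketch assuming the DaemonSet from the original report (name ds-one, label app=nginx, one container); the jsonpath into .data is my assumption about how the template is stored there:

# List the revisions kept for the DaemonSet
kubectl get controllerrevisions -l app=nginx

# Dump a single revision and read the pod template (and image) from .data
kubectl get controllerrevision <revision-name> -o yaml

# Or print revision number and image for all of them in one go
kubectl get controllerrevisions -l app=nginx \
  -o jsonpath='{range .items[*]}{.revision}{"\t"}{.data.spec.template.spec.containers[0].image}{"\n"}{end}'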
@weixiao619: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@waynesi: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issue reopened, as it seems to be (finally) gathering some interest from other people.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I believe I have been seeing this issue as well. When using -o yaml I seem to no longer get the output for the requested revision but rather for the current one. Anyone able to get past this? Had some spikes in a dev environment and wanted to see what commit started it...
Man, I don't know what to do with this ticket tbh. Every time it got some interest, it was immediately after the bot auto-closed it. The bot says "The Kubernetes project currently lacks enough active contributors", so I reckon there is no chance of it getting solved unless someone is willing to contribute a fix. I'm not a Golang dev and I don't understand Kubernetes internals, unfortunately.
Same. I tried to find it in the code a few weeks ago, and I couldn't. I just don't know Go.
/reopen
@maxweiss-74656: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Okay, good to know I am at least not crazy lol. I kept walking through each revision with the -o yaml flag, looking at the creationTimestamp and image, and thinking: that is weird... why did we do the same thing so many times? Then I noticed the revision was staying exactly the same hah.
If I can get some free time I can try to poke around and see if I can find anything. Any good places to start or helpful links to jump start my search?
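One cheap way to confirm the behaviour before digging into the kubectl source is to compare the -o yaml output of two different revisions. A sketch using the DaemonSet from the original report; diff with process substitution assumes a bash-like shell:

# With the bug present, the two outputs are identical, so diff prints nothing
diff \
  <(kubectl rollout history daemonset ds-one --revision=3 -o yaml) \
  <(kubectl rollout history daemonset ds-one --revision=4 -o yaml)

# The plain output for the same two revisions does show different images
kubectl rollout history daemonset ds-one --revision=3
kubectl rollout history daemonset ds-one --revision=4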
/reopen
@waynesi: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
If it is still an issue we can keep it open. Make sure you 👍 the issue as that helps bring attention to it.
It should at least get triaged. I'll see if this can be added to the list for the next bug review
/reopen
Guys, I reopened this ticket again. Hope some magic happens this time.