git-like blame for kubectl
When you request yaml or json output you'll notice the addition of .managedFields, which contains information from server-side apply about modifications made to a particular resource. We could use that data to build git-like blame behavior for kubectl.
For example:
kubectl blame pod/foo
would print something like:
A1: apiVersion: foo/v1
A1: metadata:
controller-A: annotations:
controller-A: bla: abc
A1: spec:
A1: ...
A2: field: value
controller-B: status:
controller-B: ...
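As a rough illustration only (not a proposed implementation), the sketch below walks .managedFields using the metav1 types from k8s.io/apimachinery and lists which manager owns which top-level fields; the helper name printOwners and the sample entry are made up for the example.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// printOwners lists, for each managedFields entry, the top-level field keys
// (e.g. "f:spec", "f:metadata") owned by that manager.
func printOwners(entries []metav1.ManagedFieldsEntry) error {
	for _, e := range entries {
		if e.FieldsV1 == nil {
			continue
		}
		// FieldsV1 is a JSON trie; its top-level keys name the owned fields.
		var fields map[string]json.RawMessage
		if err := json.Unmarshal(e.FieldsV1.Raw, &fields); err != nil {
			return err
		}
		for k := range fields {
			fmt.Printf("%-30s %-6s %s\n", e.Manager, e.Operation, k)
		}
	}
	return nil
}

func main() {
	// Illustrative entry, roughly what a controller-driven update could look like.
	raw := []byte(`{"f:spec":{"f:replicas":{}}}`)
	entries := []metav1.ManagedFieldsEntry{{
		Manager:   "kube-controller-manager",
		Operation: metav1.ManagedFieldsOperationUpdate,
		FieldsV1:  &metav1.FieldsV1{Raw: raw},
	}}
	if err := printOwners(entries); err != nil {
		fmt.Println(err)
	}
}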
I would like to work on this interesting feature, but it is not yet obvious to me how to implement it, so it might take some time.
/assign
@knight42 go for it. I'd start with this as a kubectl plugin, present it at one of the SIG-CLI calls, and we can continue the discussion from there.
/triage accepted
/priority backlog

@soltysh Hi! I have almost implemented this interesting feature; it is not as hard as I expected.
@knight42 will you be able to attend next week's SIG-CLI call, or record a demo that I can replay during the call?
cc @apelisse @lavalamp
@soltysh I think I am unable to attend the meeting (due to the timezone 😢), but I have created a repo for this kubectl plugin: https://github.com/knight42/kubectl-blame and a demo video is available in the README.
Please feel free to give it a try and file bug reports if there are any 😉
@knight42 This is awesome. I actually shouted for joy when @apelisse shared https://asciinema.org/a/375008 with me.
One small suggestion: If you do another demo or video, add an HPA to the deployment and have the HPA modify replicas. This was one of the key motivating use cases behind server-side apply and managed fields.
Thanks again for implementing this!
That's really awesome!
I'm curious how/if you handle fields that are owned by multiple managers?
I'm also a little concerned about the output width, so I'm curious if you thought about other formats/options, including hiding date or printing relative date (X days ago). There are only two kinds of managedFields ("Apply" and "Update"), you could try to experiment only printing a specific symbol for Apply managedFields or maybe "A" and "U", though that will be less intuitive.
I'm also curious to know if this is re-usable in another context than CLI. If I wanted to build this in a different UI, or script it somehow, how easy would it be?
Finally, I think it'd be useful to be able to extract from the objects the fields that are owned by someone specifically. For example, to retrieve the applied objects.
Thanks for working on that!
@knight42 sure, no worries, I'll demo that in SIG-CLI and will let you know the results. I'll be personally advocating to include this in 1.21 :)
@erictune I am so glad to know you like this plugin!
If you do another demo or video, add an HPA to the deployment and have the HPA modify replicas.
I have tried creating an HPA for the deployment locally, but perhaps I am missing something: the metadata.managedFields doesn't get updated even after the HPA modified the replicas. Could you reproduce that?
The versions of kubectl and the k8s cluster:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:09:16Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
The commands to reproduce:
kubectl create deployment nginx --image nginx:alpine
kubectl autoscale deployment nginx --max 5 --min 2
# verify the managedFields
kubectl get deploy nginx -oyaml
@apelisse Thanks for your advice! Let's make this plugin better.
I'm curious how/if you handle fields that are owned by multiple managers?
I didn't expect that a field could be managed by multiple managers, and I am not sure how to handle such fields. Any suggestions?
only printing a specific symbol for Apply managedFields or maybe "A" and "U"
Both Apply and Update are quite short IMO; I think we'd better keep the meaning of the operation obvious here.
if you thought about other formats/options, including hiding date or printing relative date (X days ago).
Sure, how about adding an option --time=full|relative|none (defaults to relative) to control the time format?
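A minimal sketch of what the relative variant could look like; the helper name formatTime and the bucket boundaries are only illustrative, not the plugin's actual code:

package main

import (
	"fmt"
	"time"
)

// formatTime renders a managedFields timestamp according to the proposed
// --time=full|relative|none option.
func formatTime(t time.Time, mode string) string {
	switch mode {
	case "none":
		return ""
	case "full":
		return t.Format(time.RFC3339)
	default: // "relative"
		d := time.Since(t)
		switch {
		case d < time.Minute:
			return "just now"
		case d < time.Hour:
			return fmt.Sprintf("%dm ago", int(d.Minutes()))
		case d < 24*time.Hour:
			return fmt.Sprintf("%dh ago", int(d.Hours()))
		default:
			return fmt.Sprintf("%dd ago", int(d.Hours()/24))
		}
	}
}

func main() {
	ts := time.Now().Add(-36 * time.Hour)
	fmt.Println(formatTime(ts, "full"))     // RFC3339 timestamp
	fmt.Println(formatTime(ts, "relative")) // 1d ago
	fmt.Println(formatTime(ts, "none"))     // (empty)
}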
if this is re-usable in another context than CLI
I am still uncertain about the exact use cases. If one wants to display the result on, let's say, a web page, I think they might be able to take https://github.com/knight42/kubectl-blame/blob/master/cmd/marshal.go#L163 as a reference and do something similar. I guess it might take some time to understand my algorithm and implement it in a different language.
If one wants to use https://github.com/knight42/kubectl-blame as a library, I think I could do that and provide more options to let users control the behavior, but I need to know their needs first.
extract from the objects the fields that are owned by someone specifically.
This looks doable to me, but I need some time to investigate.
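For what it's worth, here is a rough sketch of one possible approach, assuming a client-go recent enough to ship the apply-configuration extract helpers; the namespace, deployment name, and manager name below are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	appsv1ac "k8s.io/client-go/applyconfigurations/apps/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/yaml"
)

func main() {
	// Load the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deploy, err := client.AppsV1().Deployments("default").Get(context.TODO(), "nginx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Keep only the fields owned by this manager; the manager name is illustrative.
	owned, err := appsv1ac.ExtractDeployment(deploy, "kubectl-client-side-apply")
	if err != nil {
		panic(err)
	}

	out, err := yaml.Marshal(owned)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}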
@soltysh Thanks a lot! Being able to design and implement a kubectl command is an honor to me 😄
@apelisse Hi, kubectl-blame is now able to customize the time format: https://github.com/knight42/kubectl-blame#1-customize-time-format
Looks great!
Could you start iterating on a pull-request for kubectl? I'm sure we'll want that merged upstream, thanks!
Is there a specific reason this should be built into kubectl and not kept as a plugin?
We can talk about it during the sig-meeting, but I suspect we'll want this to become a core feature of kubectl.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle frozen
Because I like to work in a blameless / mistakes-are-OK culture, I'd prefer to have a different subcommand name than blame.
It's OK for blame to be a shorthand for the canonical name. The Subversion VCS allowed you to use either the blame or the praise synonym.
Do we currently have a mechanism for command aliases in kubectl/cobra? If so, I'm more than happy to see an alias introduced for blame. @soltysh
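For reference, cobra itself supports per-command aliases; a minimal sketch (the canonical name and wiring here are purely illustrative, not kubectl's actual code):

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use:     "praise",          // hypothetical canonical name
		Aliases: []string{"blame"}, // "blame" kept as a shorthand
		Short:   "Annotate each field with the manager that last changed it",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("would print the blame/praise view here")
		},
	}
	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}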
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted