diff does not report when keys have been removed from the spec
When removing a key from the spec, the diff doesn't report that it's changed.
Let's assume I have this configuration applied to a cluster in a file called resourcequota.json:
{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "test"
  },
  "spec": {
    "hard": {
      "limits.cpu": "8"
    }
  }
}
Adding a new key with the following spec:
{
  ...
  "spec": {
    "hard": {
      "limits.cpu": "8",
      "requests.cpu": "100m"
    }
  }
}
$ kubecfg diff --diff-strategy subset resourcequota.json
---
- live resourcequotas test
+ config resourcequotas test
{
  ...
  "spec": {
    "hard": {
      "limits.cpu": "8"
+     "requests.cpu": "100m"
    }
  }
}
$ kubecfg update resourcequota.json
Now after removing the last key, the diff thinks there is no difference:
$ kubecfg diff --diff-strategy subset resourcequota.json
---
- live resourcequotas test
+ config resourcequotas test
resourcequotas test unchanged
The expected behaviour would be to show that the key has been removed.
I believe this is because of diff-strategy=subset which ignores extra fields in the original YAML. While this is useful for not showing spurious diffs such as server generated metadata, it does mean that valid removals will not be shown.
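To illustrate why the subset strategy hides removals, here is a minimal sketch (not kubecfg's actual code) of a subset comparison: it only checks that every key in the local config matches the live object, so keys present only on the live side, i.e. keys the user deleted locally, never show up.

```python
# Hypothetical sketch of a "subset" diff check. It asks only whether the
# local config is a subset of the live object; keys that exist solely in
# the live object (removals from the config) are invisible to it.

def is_subset(config, live):
    """Return True if every key/value in `config` also appears in `live`."""
    if isinstance(config, dict) and isinstance(live, dict):
        return all(k in live and is_subset(v, live[k])
                   for k, v in config.items())
    return config == live

live = {"spec": {"hard": {"limits.cpu": "8", "requests.cpu": "100m"}}}
config = {"spec": {"hard": {"limits.cpu": "8"}}}  # "requests.cpu" removed locally

print(is_subset(config, live))  # True -> diff reports "unchanged"
```

The check succeeds even though `requests.cpu` was deleted from the config, which matches the "unchanged" output shown above.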
FWIW, update now does strategic merge patch (using an annotation, like kubectl apply does).
I guess we should make kubecfg diff do the same trick.
I think there should still be two diff modes:
- the existing mode, which ignores the last-applied-configuration annotation so that it's really comparing with the running config.
- the new mode, behaving like update, which uses the annotation.
I am not sure how this would work in the UI -- --diff-strategy=update?
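As a rough sketch of the annotation-based mode (hypothetical code, not kubecfg's implementation): kubectl apply records the previously applied config in the kubectl.kubernetes.io/last-applied-configuration annotation, so comparing the new config against that annotation, rather than only against the live state, reveals keys the user deleted.

```python
import json

# The annotation name used by kubectl apply to store the last applied config.
LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def removed_keys(live_obj, new_config):
    """Return dotted paths present in the last-applied config but absent
    from the new config -- i.e. keys the user removed locally."""
    last = json.loads(live_obj["metadata"]["annotations"][LAST_APPLIED])
    removed = []

    def walk(old, new, path):
        for k, v in old.items():
            if k not in new:
                removed.append(path + k)
            elif isinstance(v, dict) and isinstance(new[k], dict):
                walk(v, new[k], path + k + ".")

    walk(last, new_config, "")
    return removed

# Live object whose annotation records that requests.cpu was applied earlier.
live = {
    "metadata": {"annotations": {LAST_APPLIED: json.dumps(
        {"spec": {"hard": {"limits.cpu": "8", "requests.cpu": "100m"}}})}},
}
new = {"spec": {"hard": {"limits.cpu": "8"}}}  # requests.cpu removed

print(removed_keys(live, new))  # ['spec.hard.requests.cpu']
```

With this mode the diff can report `requests.cpu` as removed, which is exactly the case the subset strategy misses.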
@mkmik I'd like to attempt a PR on this if you're not currently working on it or plan to do so soon. I also think it would be great if this issue is fixed before https://github.com/bitnami/kubecfg/pull/282#issuecomment-586101118 is implemented.
@shric yes, that would be great, thanks!
PR for the discussed approach: https://github.com/bitnami/kubecfg/pull/303 Thanks.
Sorry @shric, we wanted it sooner.
No problem, thanks @gaurav517! Apologies for not providing this as stated above; I changed employers in July 2020 and sadly I no longer get to use kubecfg :(