
Include Namespace in `kubectl delete` Dry-Run Output

totegamma opened this issue 1 year ago • 7 comments

What would you like to be added:

Prefix the namespace to the kubectl delete dry-run output.

current:

$ kubectl delete -f ... --dry-run=server
deployment.apps "myapp" deleted (server dry run)
$

proposed:

$ kubectl delete -f ... --dry-run=server
myapp-namespace deployment.apps "myapp" deleted (server dry run)
$

Why is this needed:

The current output is ambiguous: the resource name alone does not uniquely identify a resource across namespaces.

When working with multiple namespaces like "myapp-prod" and "myapp-dev", and intending to tear down some resources in "myapp-dev", the command might look like this:

$ kustomize build . | kubectl delete -f - --dry-run=server
deployment.apps "kube-prometheus-stack-kube-state-metrics" deleted (server dry run)
$

From this output, it's unclear whether the manifest targets "myapp-dev" or "myapp-prod". This ambiguity requires additional checks to ensure the correct namespace is being targeted.
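
(For illustration, one such additional check today is to pipe the same manifest through kubectl get with a custom-columns output that includes the namespace; the output below is hypothetical:)

$ kustomize build . | kubectl get -f - -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name'
NAMESPACE   NAME
myapp-dev   kube-prometheus-stack-kube-state-metrics
$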

Printing the namespace in the dry-run output would enhance clarity and confidence in identifying the targeted resources.

Other considerations: This change could also be applied to other operations such as apply and replace. However, non-delete operations can be validated with the "diff" command, so I think it is acceptable to add this feature only for the delete operation.
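
(For example, before applying, the same manifest can be validated like this; the diff output shows each object's full metadata, including metadata.namespace, so the target namespace is visible there. Illustrative usage:)

$ kustomize build . | kubectl diff -f -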

A sample implementation would look like this: https://github.com/totegamma/kubernetes/commit/65c18816d3bc8b47810d1230bbf88e8aef219a5e
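
For illustration, here is a minimal, self-contained sketch of the proposed formatting change in Go. The function name and signature below are hypothetical stand-ins for kubectl's delete printer, not the actual kubectl code:

```go
package main

import (
	"fmt"
	"strings"
)

// formatDeletion is a hypothetical stand-in for kubectl's delete printer.
// The proposed change is the namespace prefix added for namespaced resources.
func formatDeletion(namespace, kind, group, name, operation string) string {
	kindString := strings.ToLower(kind)
	if group != "" {
		kindString += "." + group // e.g. "deployment.apps"
	}
	if namespace != "" {
		return fmt.Sprintf("%s %s %q %s", namespace, kindString, name, operation)
	}
	// Cluster-scoped resources keep the current format.
	return fmt.Sprintf("%s %q %s", kindString, name, operation)
}

func main() {
	// myapp-namespace deployment.apps "myapp" deleted (server dry run)
	fmt.Println(formatDeletion("myapp-namespace", "Deployment", "apps",
		"myapp", "deleted (server dry run)"))

	// clusterrole.rbac.authorization.k8s.io "view" deleted (server dry run)
	fmt.Println(formatDeletion("", "ClusterRole", "rbac.authorization.k8s.io",
		"view", "deleted (server dry run)"))
}
```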

If this issue is accepted, I'd like to be assigned and open a PR.

totegamma avatar Jul 08 '24 09:07 totegamma

Hi @totegamma, the dry run does in fact check the namespace; it just isn't shown in the output. If you try to delete a pod that exists in another namespace, an error will come out. Adding the namespace, like `pod <name> (ns) deleted (server dry run)`, may add more clarity; however, developers usually know that little detail already.

Ritikaa96 avatar Jul 11 '24 06:07 Ritikaa96

Hello @Ritikaa96,

Thank you for your reply.

Yes, I know there is no problem with the internal mechanism. I just want the namespace to be shown in the output for clarity. Hiding it is a little unkind to the user.

When the applied manifest is large, it is hard to keep track of all the resources. This often happens when we use generators such as Helm charts or Kustomize. We can check the manifest with other commands such as grep, but since kubectl already has a dry-run mode, it would be nice for it to print the namespace for clarity.

totegamma avatar Jul 11 '24 15:07 totegamma

/triage accepted
/good-first-issue

mpuckett159 avatar Jul 17 '24 20:07 mpuckett159

@mpuckett159: This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.

In response to this:

/triage accepted
/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jul 17 '24 20:07 k8s-ci-robot

/assign

totegamma avatar Jul 18 '24 01:07 totegamma

@totegamma Have you started working on it, or do you still plan to? If not, I'd love to take a crack at it, as it looks like a great first issue.

Thanks

rezkam avatar Aug 05 '24 10:08 rezkam

@rezkam Thanks for checking in! I’ve just finished up a busy task and was about to start working on this issue. If anything changes, I’ll let you know.

totegamma avatar Aug 05 '24 10:08 totegamma

Hi @totegamma, can I take this task? It looks like a good first issue.

anadisky17 avatar Mar 20 '25 17:03 anadisky17

Hi @anadisky17 , thank you for your interest in this issue. There’s already a PR https://github.com/kubernetes/kubernetes/pull/126619 that addresses it, but it didn’t receive any triage from the maintainers and ended up being automatically closed. It would be great if you could take a look at that PR first. Thanks again, and feel free to let me know if you have any questions.

totegamma avatar Mar 21 '25 00:03 totegamma

Hi @mpuckett159 , thanks for your previous triage on this issue. There’s renewed interest, but my PR https://github.com/kubernetes/kubernetes/pull/126619 was automatically closed when it didn’t get additional follow-up. I still think it’s a valuable improvement and would like to see it move forward. Should we reopen that PR? I’m prepared to maintain it if it’s still relevant. Let me know what you think!

totegamma avatar Mar 21 '25 00:03 totegamma