API server warnings via Helm
Hi! From what I can tell, Helm doesn't surface API server warnings (https://kubernetes.io/blog/2020/09/03/warnings/) during its install/upgrade operations, nor when these are used with --dry-run=server.
Surfacing them would be useful for identifying potential issues that Kubernetes flags as warnings at plan time, or even at apply time.
The use case I've come across is Pod Security Admission (https://kubernetes.io/docs/concepts/security/pod-security-admission/): applying its configuration in enforce mode to a namespace whose pods violate the policy can be disruptive. Our self-service namespaces (managed as templates) have opt-in features that would ideally enforce a stricter policy, and we'd like the CI plan stage to complete only if all pods within the namespace are compliant.
kubectl lets these warnings flow through to the user, and with the --warnings-as-errors flag it can be used in automation to fail a job:
$ kubectl --dry-run=server label --overwrite ns/foo pod-security.kubernetes.io/enforce=restricted
Warning: existing pods in namespace "foo" violate the new PodSecurity enforce level "restricted:latest"
Warning: foo-pod (and 1 other pod): seccompProfile
Warning: bar-pod (and 1 other pod): allowPrivilegeEscalation != false, unrestricted capabilities, seccompProfile
namespace/foo labeled (server dry run)
error: 3 warnings received
$ echo $?
1
Thanks!
Nice spot, and thanks for the suggestion. I presume kubectl receives these warnings as e.g. a response header or response metadata. Helm also uses client-go, so implementing similar functionality to print/log these warnings seems reasonable (note: we would need to consider how any changes might affect the stdout format of the Helm CLI for compatibility). A PR would be welcome, please.
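For anyone picking this up: the warnings arrive as HTTP `Warning` response headers, and client-go already exposes a hook for them via `rest.Config.WarningHandler`. Below is a minimal standalone sketch (illustrative, not Helm's actual wiring; the kubeconfig loading and the namespace name "foo" are just for the example) of capturing and counting warnings the way kubectl's --warnings-as-errors does:

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standalone kubeconfig loading, just for illustration; Helm would
	// build its rest.Config through its own action configuration instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// rest.NewWarningWriter returns a WarningHandler that prints every
	// Warning header it receives and keeps a count; kubectl builds
	// --warnings-as-errors on top of this same helper.
	warningWriter := rest.NewWarningWriter(os.Stderr, rest.WarningWriterOptions{Deduplicate: true})
	config.WarningHandler = warningWriter

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Any request made through this client now routes API server warnings
	// to warningWriter ("foo" is a hypothetical namespace).
	if _, err := clientset.CoreV1().Namespaces().Get(context.TODO(), "foo", metav1.GetOptions{}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}

	// A --warnings-as-errors style gate for CI could then be:
	if count := warningWriter.WarningCount(); count > 0 {
		fmt.Fprintf(os.Stderr, "error: %d warnings received\n", count)
		os.Exit(1)
	}
}
```

In Helm the open question would mostly be where to attach the handler (per-request config vs. a process-wide default via `rest.SetDefaultWarningHandler`) and whether warnings go to stderr to keep stdout stable for scripts.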