MultiValidatingHandler returns as soon as one handler fails, making it impossible to follow the convention of reporting all validation errors
The prevailing convention in Kubernetes resource validation is to report all validation errors, not only the first discovered error. The built-in API types follow this convention (example). The OpenAPI and CEL rule validation follows this convention, too. Most webhooks I have seen also follow it.
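For reference, this is roughly what the convention looks like in built-in validation: validators accumulate a `field.ErrorList` instead of returning on the first problem. The `WidgetSpec` type and `ValidateWidgetSpec` function below are made up purely for illustration.

```go
package validation

import (
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// WidgetSpec is a made-up spec type used only to illustrate the convention.
type WidgetSpec struct {
	Replicas int32
	Image    string
}

// ValidateWidgetSpec collects every problem into a field.ErrorList instead of
// stopping at the first one, mirroring how built-in API validation reports
// all errors at once.
func ValidateWidgetSpec(spec WidgetSpec, fldPath *field.Path) field.ErrorList {
	var allErrs field.ErrorList
	if spec.Replicas < 0 {
		allErrs = append(allErrs, field.Invalid(fldPath.Child("replicas"), spec.Replicas, "must be greater than or equal to 0"))
	}
	if spec.Image == "" {
		allErrs = append(allErrs, field.Required(fldPath.Child("image"), "image must be specified"))
	}
	return allErrs
}
```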
We provide a utility function for webhook authors that executes multiple validators: https://github.com/kubernetes-sigs/controller-runtime/blob/6ad5c1dd4418489606d19dfb87bf38905b440561/pkg/webhook/admission/multi.go#L90-L95
It returns as soon as one validator fails. That means that subsequent validators are not called, and any errors they might discover are not reported.
I think we should provide an alternative implementation that calls all validators, even if some fail, and aggregates their errors.
Also, because the existing utility function does not follow the convention, I think we should consider deprecating it.
An implementation that aggregates errors from multiple handlers would have to merge handler responses, and I don't see an unambiguous way to do that.
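As a rough sketch of what such an alternative could look like (this is not an existing controller-runtime API), the hypothetical `multiValidatingAll` handler below calls every validator and joins the denial messages into a single response. It also shows where the merging ambiguity comes in: the combined response has to pick one status code and drops per-handler details such as warnings and audit annotations.

```go
package webhookutil

import (
	"context"
	"net/http"
	"strings"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// multiValidatingAll is a hypothetical aggregating alternative to
// admission.MultiValidatingHandler: it calls every handler and collects
// every denial instead of returning on the first failure.
type multiValidatingAll []admission.Handler

func (hs multiValidatingAll) Handle(ctx context.Context, req admission.Request) admission.Response {
	var messages []string
	for _, h := range hs {
		resp := h.Handle(ctx, req)
		if resp.Allowed {
			continue
		}
		if resp.Result != nil && resp.Result.Message != "" {
			messages = append(messages, resp.Result.Message)
		} else {
			messages = append(messages, "request denied")
		}
	}
	if len(messages) == 0 {
		return admission.Allowed("")
	}
	// Merging the responses is the ambiguous part: this sketch simply joins
	// the messages and returns a generic 403, discarding per-handler status
	// codes, warnings, and audit annotations.
	return admission.Response{
		AdmissionResponse: admissionv1.AdmissionResponse{
			Allowed: false,
			Result: &metav1.Status{
				Code:    http.StatusForbidden,
				Message: strings.Join(messages, "; "),
			},
		},
	}
}
```

A webhook author could register such a handler the same way they register the result of `MultiValidatingHandler` today, e.g. via `webhook.Admission{Handler: multiValidatingAll{v1, v2}}`.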
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
It turns out that the API server itself does not return errors from all validating webhooks: https://github.com/kubernetes/kubernetes/issues/132637.
We're working on a solution to that. Once we're done, I think we can apply what we've learned here.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale