controller-gen feature request: support objectSelector in webhook config generation
Currently (v0.5.0) the +kubebuilder:webhook marker does not support setting an objectSelector on a webhook. objectSelector is especially handy when mutating or validating core types. For example, I want to be able to set an environment variable on a subset of pods in a particular namespace, but not my controller manager pod, which runs in the same namespace and runs the webhook server.
I realize that the docs recommend running the webhook server in a separate namespace and scoping the webhook to the target namespace to avoid deadlock, but that's not feasible for various reasons. Selecting the pods I want to mutate by label works perfectly well, but there's no way (that I know of) to set such a selector without breaking my workflow, which generates the MutatingWebhookConfiguration from markers and patches it with kustomize in one shot. (I'm using a lightly modified version of what kubebuilder init provides.) I could patch the objectSelector field if it were present in the generated config.
I propose something as simple as an optional objectSelectorLabel=string that generates the following:
objectSelector:
  matchLabels:
    foo: patchMe
Where foo is the user-provided string and patchMe is some default that the user can patch.
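For example, a hypothetical marker invocation under this proposal (syntax illustrative) might look like:
// +kubebuilder:webhook:verbs=create;update,path=/mutate-v1-pod,mutating=true,...,objectSelectorLabel=foo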
I'm happy to put something together if there's an appetite for it.
This is solvable with a strategic merge patch in kustomize:
webhooks:
- $patch: merge
  name: my-webhook-config
  objectSelector:
    matchLabels:
      foo: bar
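Assuming the patch above lives in a file such as webhook_patch.yaml (name illustrative), it can be wired into the build with a kustomization entry like:
# kustomization.yaml
patchesStrategicMerge:
- webhook_patch.yaml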
But a marker parameter sure would be nice. :)
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Still unimplemented. Patches via kustomize are not always viable (e.g. when the source of truth is in *.go files).
To me it looks like it would boil down to extending the config (and parsing rules): https://github.com/kubernetes-sigs/controller-tools/blob/76b24b2709ba146f7794941ac0cdcb437144f64a/pkg/webhook/parser.go#L61-L136, but we'd need to agree on the API (since the nested structure of objectSelector makes it a bit more nuanced to "get right" than other fields).
I'd be willing to contribute this if we have an idea of how we'd like the API to look.
A quick draft:
// +kubebuilder:webhook:webhookVersions=v1,verbs=create;update,path=/mytype,...,objectSelector=matchExpressions[key=testdata.com/validation;operator=Exists]
// +kubebuilder:webhook:webhookVersions=v1,verbs=create;update,path=/mytype,...,objectSelector=matchLabels[testdata.com/validation=myvalue]
// +kubebuilder:webhook:webhookVersions=v1,verbs=create;update,path=/mytype,...,objectSelector=matchExpressions[key=testdata.com/validation;operator=In;values=[first;second]]
but I'm not sure if that's the way forward.
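For illustration, the first draft above would presumably render to the following in the generated configuration:
objectSelector:
  matchExpressions:
  - key: testdata.com/validation
    operator: Exists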
cc: @sbueringer
/reopen
/remove-lifecycle rotten
@pmalek: Reopened this issue.
Any updates on this?
@pmalek I think I would add a field named "ObjectSelector" to the Config struct; its type would be a copy of metav1.LabelSelector. In that copy I would simply replace all the JSON tags with "marker:..." tags, and then the resulting format should be fine.
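A minimal sketch of that suggestion (package, type, and tag names illustrative, not merged code), assuming the markers package's "marker:..." struct tags:

package webhook

// ObjectSelector mirrors metav1.LabelSelector so it can be parsed
// from marker arguments.
type ObjectSelector struct {
	MatchLabels      map[string]string     `marker:"matchLabels,optional"`
	MatchExpressions []SelectorRequirement `marker:"matchExpressions,optional"`
}

// SelectorRequirement mirrors metav1.LabelSelectorRequirement.
type SelectorRequirement struct {
	Key      string   `marker:"key"`
	Operator string   `marker:"operator"`
	Values   []string `marker:"values,optional"`
}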
Do you see parts of that struct where it would be necessary to diverge from that?
@sbueringer That sounds good 👍 I'm not sure what the resulting interface will be, but the approach seems sound to me.
I can work on that, but I won't have time for it in the upcoming 2 weeks (or so). If anyone beats me to it, feel free to jump on this.
Sounds good!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@sbueringer This might not be that easy, as the parser doesn't support struct-based fields right now: https://github.com/kubernetes-sigs/controller-tools/blob/5e5bc88a01cab8ebff8c9fc4d1e90bb858139186/pkg/markers/parse.go#L678-L679
I'm not sure yet, but I believe we'd need some sort of schema to know which types to parse into (correct me if I'm wrong) and to implement a StructType in the parser. IIUC this would be somewhat similar to what the CRD parser does: https://github.com/kubernetes-sigs/controller-tools/blob/eba92f473385279628dbbe0b8ebf5111b522ef20/pkg/crd/parser.go.
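To make the "schema" idea concrete, here is a rough reflection-based sketch (hypothetical, not actual controller-tools code) of deriving argument names and target types from such a struct:

package markers

import (
	"fmt"
	"reflect"
	"strings"
)

// structFields maps each marker argument name to the Go type it should
// be parsed into, using the `marker` struct tag when present.
func structFields(t reflect.Type) (map[string]reflect.Type, error) {
	if t.Kind() != reflect.Struct {
		return nil, fmt.Errorf("expected a struct, got %s", t.Kind())
	}
	out := make(map[string]reflect.Type, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		name := f.Tag.Get("marker")
		// Strip options such as ",optional" from the tag value.
		if idx := strings.Index(name, ","); idx >= 0 {
			name = name[:idx]
		}
		if name == "" {
			name = f.Name // fall back to the Go field name
		}
		out[name] = f.Type
	}
	return out, nil
}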
/reopen
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Sorry, I still have to take a look when I find some time.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".