controller-runtime
wanted: mechanism to explicitly disable webhook server of a manager
AFAICT the webhook server is automatically started, based on the presence of hook registration calls. In an effort to guard against libraries setting up webhooks on a manager, or at least to be able to detect when they rely on such behavior, I'd like to be able to explicitly disable the webhook server to flag hook-registration attempts. One idea I had was to choose a bad port for the webhook server to trigger fail-fast behavior.
The current state appears to be that when the webhook server port is left at 0 in the config, it is upgraded to 9443 (the "default" webhook server port). So setting the port to 0 won't disable the webhook server. In fact, any value of 0 or less results in port 9443 being chosen, because the code enforces a lower bound on the port setting. There appears to be no upper bound on the port setting, however.
As a workaround, to achieve my objective I'm setting the port number to "max int", which is far above the maximum port number allowed by the networking stack. Attempts to bind to this port fail, which gives me the fail-fast behavior I want, but it's also kind of ugly and not very intuitive (see the sketch below).
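To make the workaround concrete, here's a minimal sketch, assuming a manager built with `ctrl.NewManager` and the `Options.Port` field (scheme, logging, and controller setup omitted); it's illustrative, not the exact code from my project:

```go
package main

import (
	"math"
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Workaround sketch: pick a port that no networking stack will accept, so
	// any attempt to start the webhook server (e.g. because some library
	// registered a hook on the manager) fails fast instead of silently
	// binding 9443.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Port: math.MaxInt32, // far above 65535; binding this port will fail
	})
	if err != nil {
		os.Exit(1)
	}

	// The bind failure only surfaces if the webhook server is actually
	// started, i.e. if something registered a hook.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```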
It would be nice if CR provided a nicer API to achieve this. Otherwise, I worry that it's only a matter of time before someone implements an upper bound on the port, defaulting to 9443 when the limit is exceeded, and thereby breaks my workaround.
Slack thread
/help
If 0 already has a meaning, maybe use a negative value for disabling?
@alvaroaleman: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
If 0 already has a meaning, maybe use a negative value for disabling?
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/help
If 0 already has a meaning, maybe use a negative value for disabling?
Using a negative value this way would, I think, be a breaking change, since the current implementation translates negative values to the default of 9443 (roughly sketched below). I'm not sure whether anyone in the wild relies on this.
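For context, the defaulting in question looks roughly like this (paraphrased sketch, not the exact controller-runtime source):

```go
// Paraphrased sketch of how the webhook server port is defaulted today.
const defaultPort = 9443

func defaultedPort(port int) int {
	// Any non-positive value, including a hypothetical "disable me" value
	// such as -1, is silently replaced with 9443, so assigning new meaning
	// to negative values would change behavior for anyone who relies on
	// this defaulting.
	if port <= 0 {
		return defaultPort
	}
	return port
}
```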
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen