hierarchical-namespaces
Enable HNC per namespace by label
I would like to suggest an alternative way of selecting which namespaces HNC is enabled for: a label, instead of the --excluded-namespace and --included-namespace-regex flags.
We create namespaces from a Terraform module, but only want to enable HNC for a subset of all our namespaces.
For me it would feel most intuitive to add a toggle to the Terraform module which adds a label to the namespace (e.g. hnc.x-k8s.io/enabled = "true"). HNC would then be installed with a flag like --include-namespaces-by-label, so the controller would only pick up the namespaces with the label and ignore all others.
Please consider this for future development.
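To make the idea concrete, here is a rough sketch of the namespace side (plain client-go standing in for our Terraform module; the hnc.x-k8s.io/enabled label and the controller flag are only a proposal, not something HNC supports today):

```go
// Sketch only: create a namespace that opts in to HNC via the proposed label.
// Plain client-go is used here in place of our Terraform module, and both the
// label and the controller flag are hypothetical.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name: "team-a",
			// Proposed opt-in label; a controller started with something like
			// --include-namespaces-by-label would manage only labelled namespaces.
			Labels: map[string]string{"hnc.x-k8s.io/enabled": "true"},
		},
	}
	if _, err := client.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created namespace team-a with the HNC opt-in label")
}
```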
EDIT: I just found the hnc.x-k8s.io/included-namespace label, which basically does what I want, except that it can't be used manually (i.e. separately from the include/exclude flags), from what I understand.
EDIT2:
I tried combining --included-namespace-regex="" with manually setting hnc.x-k8s.io/included-namespace: "true" on a namespace, which resulted in:
Could not create subnamespace anchor.
Reason: admission webhook "subnamespaceanchors.hnc.x-k8s.io" denied the request: subnamespaceanchors.hnc.x-k8s.io "test" is forbidden: cannot create a subnamespace in the unmanaged namespace "test" (does not match the regex set by the HNC administrator: `^""$`)
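Which makes sense on closer inspection: the anchored regex reported in the error, `^""$`, can't match any real namespace name, so with an empty regex nothing is managed at all. A quick standalone check (not HNC code, just Go's regexp package):

```go
// Quick standalone check of the regex from the error message above;
// this is not HNC code, just the regex it reports.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`^""$`)    // the regex reported by the webhook
	fmt.Println(re.MatchString("test")) // false: "test" is not managed
	fmt.Println(re.MatchString(`""`))   // true, but "" is not a valid namespace name
}
```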
So I really would love to have some kind of --include-namespaces-by-label flag for the controller that changes this behaviour.
@norman-zon Thanks for this suggestion! The reason the labels are an output of HNC, not an input, is that we don't want anyone who can set labels on namespaces to be able to turn off enforcement by HNC. I suppose that for clusters where most users aren't allowed to modify namespaces, this might not be so important - is that the case you're in?
If so, you could modify this webhook (flag-controlled) to turn off the behaviour of setting included-namespace by default, and allow users to set it themselves. I'd be happy to accept that for a future release if you want to make a PR - if you do, please update the docs as well, along with warnings about the security hole this creates.
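Very roughly, the kind of flag-controlled behaviour I have in mind (just a sketch to show the shape of the change, not HNC's actual webhook code; the flag name and helpers are made up):

```go
// Sketch only: how a hypothetical --include-namespaces-by-label flag could
// change the "is this namespace managed?" decision. No names here come from
// the real codebase.
package main

import "fmt"

const includedNamespaceLabel = "hnc.x-k8s.io/included-namespace"

// isManaged decides whether HNC should manage a namespace.
// includeByLabel models the hypothetical flag; matchesIncludedRegex and
// isExcluded stand in for the existing --included-namespace-regex and
// --excluded-namespace checks.
func isManaged(labels map[string]string, includeByLabel, matchesIncludedRegex, isExcluded bool) bool {
	if isExcluded {
		// Exclusions always win.
		return false
	}
	if includeByLabel {
		// New behaviour: HNC no longer stamps the label itself and instead
		// trusts a label set by the user (or by tooling such as Terraform).
		return labels[includedNamespaceLabel] == "true"
	}
	// Default behaviour: the regex decides, and the label is an output.
	return matchesIncludedRegex
}

func main() {
	ns := map[string]string{includedNamespaceLabel: "true"}
	fmt.Println(isManaged(ns, true, false, false))  // true: label opt-in
	fmt.Println(isManaged(ns, false, false, false)) // false: regex does not match
}
```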
@adrianludwin Thanks for explaining the rationale behind this design. You are right: our use case is a cluster where users can't modify namespaces manually; all namespace changes have to go through Terraform via PRs, so the security impact would be tolerable.
I will have a look at the code, but I am not sure I will be able to implement the changes myself, as my experience with Go is very limited.
Sounds good! Unfortunately I don't have the bandwidth to make these kinds of changes myself, but I can help review anything you send me.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.