                        Propagate parent's annotations to child without knowing their value
Hi,
My use case is the following: my parent namespace has annotations that enable or disable linkerd. These annotations are not static and change from time to time. I need to propagate these annotations, with their values, to all sub-namespaces.
I tried using the HierarchyConfiguration beta feature with the argument --managed-namespace-annotation=linkerd.io/inject:
apiVersion: hnc.x-k8s.io/v1alpha2
kind: HierarchyConfiguration
metadata:
  name: hierarchy
  namespace: customer-namespaces
spec:
  annotations:
  - key: linkerd.io/inject
    value: enabled
  [...]
This configuration works for static values, but it doesn't work in an environment where the list of parent namespaces and the linkerd annotation values are dynamic.
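For reference, the flag itself is passed to the HNC controller; here is a rough sketch of how that can be wired in with kustomize. This is only a sketch: the hnc-system / hnc-controller-manager names are the upstream defaults and may differ in other installs.
# kustomization.yaml (sketch): append the flag to the HNC manager container args.
# The deployment/namespace names below are assumed defaults, not verified here.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- default.yaml   # the HNC release manifest, fetched separately
patches:
- target:
    kind: Deployment
    name: hnc-controller-manager
    namespace: hnc-system
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --managed-namespace-annotation=linkerd.io/inject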
I also tried using HNCConfiguration with the Propagate mode, which looks like exactly what I need, but annotations are not supported.
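For illustration, this is the kind of HNCConfiguration I mean; it can propagate whole objects (secrets are used here purely as an example resource), but there is no equivalent field for namespace annotations:
# Sketch of an HNCConfiguration: resources (e.g. secrets) can be propagated,
# but there is no field for propagating namespace annotations.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: HNCConfiguration
metadata:
  name: config   # HNC only acts on the cluster-wide singleton named "config"
spec:
  resources:
  - resource: secrets
    mode: Propagate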
Did I miss something, or is it impossible to propagate annotations from parent to child without knowing their values? Thanks
Sorry for the delay in getting back to you.
I'm not sure I understand the problem. Who is modifying the annotations? Can it be modified to change the HierarchyConfiguration instead? The reason that HNC doesn't respect any existing annotations is that it's allowed to overwrite them at any time, so its source of truth is always a HierarchyConfiguration or SubnamespaceAnchor object. However, any other controller (or human) that can modify the annotations inside those objects should be able to change them dynamically on all the descendant namespaces as well.
Can you explain this a bit more? Thanks!
Hello,
Here are some details to clarify my use case. I'm deploying HNC with Flux in a cluster where the list of namespaces managed by HNC is dynamic, based on customer needs.
customer1 has namespaces [a, b]; customer2 has namespaces [dev, staging, prod]
I'm currently using HierarchyConfiguration resources to set the annotation values and it's working well. However, I have to define 2 HierarchyConfigurations for customer1 and 3 for customer2, one per namespace. Note that the annotation values are kept in sync across all namespaces. Managing these per-namespace resources is difficult for our 200+ customers.
I like the concept of HNCConfiguration, which allows defining objects to propagate from parent to child, but annotations don't seem to be supported through this resource. Could you confirm my understanding?
I hope this clarifies my use case. Thanks
Hmm, I'm still missing something. If customer1 has namespaces [a,b], can you define a single root namespace for that customer called customer1 and define the annotations in that HierarchyConfiguration (HC)? Then you'd have one HC per customer, not one HC per namespace.
Or are you saying that all namespaces for all customers have the same annotations? In theory, you could create a single root namespace for the entire cluster, define your annotations in that HC in that root, and then they'd automatically be propagated to all namespaces (including any changes you needed to make). Would that help? Even if it's not the best user interface?
Some extra information:
- each customer has their own cluster
- each customer has restricted access only to its root namespaces, e.g. customer1 [a, b] only has access to a and b, and only namespaces a and b are managed with HNC so that the customer can create sub-namespaces
- all of a customer's namespaces have the same annotations
 
I'm starting to think about creating a common root namespace managed by HNC and leaving legacy customer namespaces unmanaged by HNC. I didn't go this way at first because I also tried to manage the legacy namespaces with HNC.
Ok, so in a per-customer cluster, isn't that even easier? Make a and b children of another namespace called cluster-root or something and specify all the annotations in cluster-root. They'll then be automatically propagated to a and b.
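For concreteness, here is a minimal sketch of that layout (cluster-root, a and b are placeholder names): the annotation lives once in the root's HierarchyConfiguration, and each existing namespace just declares the root as its parent.
# Annotation defined once, in the root namespace's HierarchyConfiguration.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: HierarchyConfiguration
metadata:
  name: hierarchy
  namespace: cluster-root
spec:
  annotations:
  - key: linkerd.io/inject
    value: enabled
---
# Each existing customer namespace only declares its parent; the managed
# annotation is then propagated (and kept up to date) by HNC.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: HierarchyConfiguration
metadata:
  name: hierarchy
  namespace: a
spec:
  parent: cluster-root
Changing the value in cluster-root would then update a, b, and any sub-namespaces created under them.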
If you wanted, we could also consider putting the annotations in HNCConfiguration which would treat all namespaces as though they had a magic root, but they should logically be about the same. Or have I misunderstood?
Having annotations in HNCConfiguration, with the magic root replicating them to all managed namespaces, would be the best option.
Ok - are you able to provide a PR? This is fairly low priority for me to implement but I can review it if you're interested in doing it. Otherwise, I recommend just adding an explicit root namespace to all your clusters, which works today in HNC v1.0.
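If the kubectl-hns plugin is installed, attaching existing namespaces to such an explicit root can also be done from the command line, roughly as follows (namespace names are placeholders):
# Create the root and make the existing namespaces its children.
kubectl create namespace cluster-root
kubectl hns set a --parent cluster-root
kubectl hns set b --parent cluster-root

# Inspect the resulting hierarchy.
kubectl hns tree cluster-root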
Thanks, I will give it a try and meanwhile use the root namespace.
sg!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
 
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
 
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
 
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
 
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.