helm-controller
                        Add support to refer to values from ConfigMap in other namespaces
This PR adds support for referring to values from ConfigMaps and Secrets in other namespaces via the valuesFrom field of the HelmRelease resource.
Motivation:
In our use case, the SRE team provisions Kubernetes clusters and provides some (global) ConfigMaps, readable by all tenants on the cluster. Those ConfigMaps contain cluster-specific information, which those tenants need as input to their HelmReleases' valuesFrom fields. And hopefully, once 253 is merged, we can also refer to those ConfigMaps from Kustomization resources.
We cannot (pre)provision those ConfigMaps into each tenant's namespace upfront, as tenants (namespaces) can come and go and we want to maintain the ConfigMaps in a single place (e.g. in the kube-public namespace).
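To illustrate the intent, here is a minimal sketch of how a tenant's HelmRelease could consume such a shared ConfigMap with this change, assuming the feature is exposed as an optional namespace field on the valuesFrom entry (chart, repository, and key names are placeholders):
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: tenant-a
spec:
  interval: 5m
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-repo
  valuesFrom:
    - kind: ConfigMap
      name: public-cluster-info
      # Cross-namespace reference proposed in this PR (not part of the current API).
      namespace: kube-public
      valuesKey: values.yaml
```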
Security Consideration
We use Kyverno policies to enforce that all HelmRelease and Kustomization resources have their serviceAccount field set, to make sure each tenant can only access ConfigMaps and Secrets that they have permission to read. However, if you have any objections regarding cross-namespace references to Secrets, I will refactor this PR to only apply the proposed change to ConfigMaps.
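For context, a sketch of such an enforcement policy, assuming the validated field is spec.serviceAccountName (the policy name and message are illustrative):
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-service-account
spec:
  validationFailureAction: enforce
  rules:
    - name: check-service-account-name
      match:
        resources:
          kinds:
            - HelmRelease
            - Kustomization
      validate:
        message: "spec.serviceAccountName must be set"
        pattern:
          spec:
            # "?*" requires a non-empty value for serviceAccountName.
            serviceAccountName: "?*"
```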
Alternatively, we can add a label or annotation which must be present on ConfigMap or Secret resources to allow cross-namespace sharing (e.g. helm.fluxcd.io/shared: true), and only accept cross-namespace references if that flag is set. With this approach the owner of the resource (ConfigMap/Secret) has full control over whether they want to allow cross-namespace access or not. If that label is not present or invalid, access will be denied.
@alex-berger you can use Kyverno to generate a ConfigMap when a namespace is created. This allows you to define the config data once (in the Kyverno ClusterPolicy) and it gets propagated to all your tenant namespaces. See the Kyverno docs here: https://kyverno.io/docs/writing-policies/generate/
@stefanprodan I want to keep things simple, and Kyverno will not update the ConfigMap if it changes. I would rather add more flexibility to the GitOps system (Flux) than have to rely on Kyverno.
Kyverno will not update the ConfigMap if it changes
From Kyverno docs:
When synchronize is set to true, the generated resource is kept in-sync with the source resource (which can be defined as part of the policy or may be an existing resource)
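For reference, a minimal sketch of such a generate-and-sync policy based on the linked docs, assuming the shared ConfigMap is public-cluster-info in kube-public (the policy name is illustrative):
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-cluster-info
spec:
  rules:
    - name: clone-cluster-info
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: ConfigMap
        name: public-cluster-info
        # Target the newly created namespace.
        namespace: "{{request.object.metadata.name}}"
        # Keep the generated copy in sync with the source ConfigMap.
        synchronize: true
        clone:
          namespace: kube-public
          name: public-cluster-info
```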
@stefanprodan Kyverno is an implementation detail (an orthogonal aspect), and I would prefer first-class support for this in FluxCD rather than having to rely on Kyverno or other policy engines. I will update the PR with the label-based filter (opt-in) approach that I mentioned in the "Security Consideration" section, and I will remove Secrets from this PR to further reduce its security scope.
I updated the PR, but I did not yet remove support for cross-namespace Secret references, as my team told me that we really need them as well. However, with this commit resources are only shared if their owner has explicitly opted in by setting the helm.fluxcd.io/share-with annotation accordingly.
Examples:
A ConfigMap shared with all namespaces:
kind: ConfigMap
apiVersion: v1
metadata:
  name: public-cluster-info
  namespace: kube-public
  annotations:
    helm.fluxcd.io/share-with: "*"
data: {}
A ConfigMap shared with some specific namespaces:
kind: ConfigMap
apiVersion: v1
metadata:
  name: public-cluster-info
  namespace: kube-public
  annotations:
    helm.fluxcd.io/share-with: |-
      - kube-system
      - gloo-system
      - my-namespace
data: {}
If we want to add an additional layer of security, we could also feature-flag this by adding a command-line flag to the helm-controller to explicitly enable this feature.
I need to have a think about the impact this would have in general, and on other Flux component / toolkit types that allow cross-namespace sources.
One thing that must be changed if this were to be accepted is the domain of the annotation, it should be: helm.toolkit.fluxcd.io.
I assembled a table of all resource reference fields in the current API, which might be helpful for analyzing the impact.
| Reference field | Cross-namespace | Used by | Security Risks | 
|---|---|---|---|
| dependsOn | yes | HelmRelease, Kustomization | low | 
| healthChecks | yes | Kustomization | low | 
| resources | yes | Receiver | low | 
| eventSources | yes | Alert | low | 
| sourceRef | yes | Kustomization, HelmRelease | low | 
| sourceRef | no | HelmChart | low | 
| secretRef | no | GitRepository, HelmRepository, Bucket, Decryption, KubeConfig, Provider, Receiver | high | 
| valuesFrom | no | HelmRelease (and with 253 maybe also Kustomization) | high (in case of references to Secrets), medium for ConfigMaps | 
Wanted to let you know that I have not forgotten about this PR. We are planning on doing a new MINOR release today, and once that is done I will have a proper look to take this into consideration for the next MINOR release.
xref fluxcd/flux2#582 WRT security-model and multi-tenancy
I feel the sharing annotation is unnecessary.
The owner of the config/secret Namespace has full control of any RoleBinding that grants access for the configs to other Users/Groups from different Namespaces.
We do need to ensure that helm-controller uses an impersonated client to fetch valuesFrom. I'm not sure this is the case right now.
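To make that concrete, a sketch of the RBAC the owner of the shared namespace could maintain, assuming the shared ConfigMaps live in kube-public and a tenant's HelmReleases run under a ServiceAccount named tenant-a in the tenant-a namespace (all names illustrative):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-shared-values
  namespace: kube-public
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-read-shared-values
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-shared-values
subjects:
  # ServiceAccount impersonated by the tenant's HelmRelease.
  - kind: ServiceAccount
    name: tenant-a
    namespace: tenant-a
```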
Hi, has this feature been abandoned? For us it is kind of a killer feature.
Thanks
The general premise of Flux is that in a multi-tenant environment you will enable multi-tenancy lockdown mode. As @stefanprodan pointed out there are ways to multicast resources to tenant namespaces using 3rd-party components.
I'm closing the PR now, as it hasn't received any attention for 2 years, so people don't keep waiting for it. /cc @hiddeco