Helm releases with different "storage" vs "install" namespace

Open mjnagel opened this issue 2 years ago • 4 comments

When managing a helm chart via Flux a HelmRelease custom resource is created with a given storageNamespace (foo), but could have a different targetNamespace (bar) which determines where the chart is installed (ref the spec here). The net result is that helm secrets, etc will be stored in the foo namespace, but bar is the "release namespace" where the chart is installed.

The dashboard's handling of helm "lookups" seems to be problematic in this scenario. See the example below:

> helm ls -n foo
NAME                         	NAMESPACE     	REVISION	UPDATED                                	STATUS  	CHART                     	APP VERSION
bizbang                      	bar       	1       	2022-10-24 14:16:07.803938 -0600 MDT   	deployed	bizbang-1.45.0

From this helm command you can see that the namespace scope of the helm ls command is foo (my Flux storageNamespace), but the namespace listed here is bar (my Flux targetNamespace).

When I try to click on this in the dashboard I receive the error "Failed to get chart details Error: release: not found". Looking at my terminal I can see it is using the wrong namespace in the lookup:

WARN[1087] Failed command: [helm history bizbang --namespace bar --output json --max 18 --kube-context <removed>] 
WARN[1087] STDERR:
Error: release: not found

Running the helm history command with the foo namespace instead would make this work.
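To make the mismatch concrete, here is a minimal sketch (hypothetical function and data, not the dashboard's actual code) of how the lookup goes wrong: the command is built from the namespace reported by helm ls, but for a Flux-managed release the history actually lives in the storage namespace.

```python
# Hypothetical sketch of the namespace mismatch.
# `helm ls -n foo` reports the targetNamespace ("bar") in its
# namespace column, but the release records live in the
# storageNamespace ("foo").
release = {"name": "bizbang", "namespace": "bar"}  # as reported by helm ls

def history_cmd(name, namespace):
    """Build the helm history invocation the dashboard would run."""
    return ["helm", "history", name, "--namespace", namespace,
            "--output", "json", "--max", "18"]

# What the dashboard runs today -> "Error: release: not found"
broken = history_cmd(release["name"], release["namespace"])
# What would actually work for this Flux-managed release
working = history_cmd(release["name"], "foo")
```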

It would be great to see this scenario supported/handled, although I understand if it's seen as an unsupported edge case.

mjnagel avatar Oct 24 '22 22:10 mjnagel

I would like to understand the issue more. The way dashboard queries the initial list of charts is by running helm ls --all-namespaces --output json. In the results JSON, I see only one namespace field that I'm able to use for further lookups. There is no "storage namespace" information available.

I see that I could satisfy your use-case, say, with env variable HD_STORAGE_NAMESPACE, that would replace --all-namespaces if present. While I see how it works for the initial helm ls, how would it work for further commands? Does this approach mean there cannot be the same chart installed into 2 different namespaces under the same release name?

Making this env var act as a "force namespace" parameter is not too hard; I just want to make sure it would resolve your case.
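A rough sketch of the override being proposed, under the assumption that HD_STORAGE_NAMESPACE (the variable name suggested above; the helper names here are hypothetical) simply replaces whatever namespace would otherwise be used:

```python
import os

def namespace_for_lookup(release_namespace, env=os.environ):
    """Namespace to pass to helm lookups: the forced storage
    namespace if HD_STORAGE_NAMESPACE is set, otherwise the
    namespace reported by `helm ls`."""
    return env.get("HD_STORAGE_NAMESPACE", release_namespace)

def ls_args(env=os.environ):
    """Scope the initial `helm ls` to the forced namespace if set,
    otherwise query all namespaces as today."""
    forced = env.get("HD_STORAGE_NAMESPACE")
    return ["--namespace", forced] if forced else ["--all-namespaces"]
```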

undera avatar Oct 25 '22 13:10 undera

I would like to understand the issue more. The way dashboard queries the initial list of charts is by running helm ls --all-namespaces --output json. In the results JSON, I see only one namespace field that I'm able to use for further lookups. There is no "storage namespace" information available.

That's a great point - the "storage namespace" information isn't visible in this output. My understanding is that the concept of "storage namespace" might be unique to Flux (or at least not something "natively" available in Helm).

Your proposed solution of HD_STORAGE_NAMESPACE would work only if it is used for all helm commands as the namespace. For example, when running the helm history bizbang command you would need to use --namespace $HD_STORAGE_NAMESPACE rather than the namespace returned by helm ls. This would also assume that the "storage namespace" is the same for all helm charts, which may not be the case.

Does this approach mean there cannot be the same chart installed into 2 different namespaces under the same release name?

I'm not sure I totally follow this question, but you should still be able to install the same chart in 2 namespaces - you may just need to place them in 2 separate storage namespaces as well. This isn't something I've attempted, but I think it would even work within the same storage namespace, provided you name the helm releases differently.

After looking at it further, the best (maybe only?) way I can think of to grab the "storage namespace" dynamically would be through helm secret lookups. Running kubectl get secret -A --field-selector type=helm.sh/release.v1 would display the correct namespace to use for further lookups. Example:

> k get secret -A --field-selector type=helm.sh/release.v1
NAMESPACE   NAME                                                  TYPE                 DATA   AGE
foo         sh.helm.release.v1.bizbang.v1                         helm.sh/release.v1   1      127m

The result of this query would inform other helm commands, e.g. to grab the history for the given helm release you would run helm history bizbang -n foo. I'm not sure if querying secrets is something you'd want to get into with the code, but that might be the only way to handle this dynamically.
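As a sketch of that idea (a hypothetical parser, not existing dashboard code): Helm stores each revision in a secret named sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt;, so the secret listing above can be turned into a release-to-storage-namespace map.

```python
import re

# Hypothetical parser for the plain-text output of
# `kubectl get secret -A --field-selector type=helm.sh/release.v1`.
# Helm release secrets are named sh.helm.release.v1.<release>.v<revision>.
SECRET_RE = re.compile(r"^sh\.helm\.release\.v1\.(?P<release>.+)\.v(?P<rev>\d+)$")

def storage_namespaces(kubectl_output):
    """Map release name -> namespace where its Helm secrets live."""
    mapping = {}
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        namespace, name = line.split()[:2]
        m = SECRET_RE.match(name)
        if m:
            mapping[m.group("release")] = namespace
    return mapping

sample = """\
NAMESPACE   NAME                                TYPE                 DATA   AGE
foo         sh.helm.release.v1.bizbang.v1       helm.sh/release.v1   1      127m
"""
# storage_namespaces(sample) -> {"bizbang": "foo"}
```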

mjnagel avatar Oct 25 '22 16:10 mjnagel

Thanks for the clarifications. I would not like to parse raw Helm information from secrets; I'd rather rely on some standard queries via Helm.

If it's acceptable to say that with the HD_STORAGE_NAMESPACE env variable the dashboard becomes limited to a specific namespace, that's easy to implement. Though it would mean that new chart installs and upgrades will also be limited to that namespace. I'm not sure there won't be any conflicts between how Flux does it and how original Helm does it.

Again, we can experiment, the cost of it is relatively low.

undera avatar Oct 25 '22 16:10 undera

I would not like to parse raw Helm information from secrets, I'd rather rely on some standard queries via Helm.

That's understandable. For clarification, though: you could use standard helm commands and lookups for everything except getting the initial list of helm releases/namespaces. Any time you need the list of helm installs plus their namespaces, you would have to grab it from the secrets; everything else could be done with helm commands once you have the name and namespace.

The suggested HD_STORAGE_NAMESPACE env var is probably an easy addition that gives some functionality now. You're probably correct that the way Flux implements its helm controller may deviate from a standard helm install in other ways too.

mjnagel avatar Oct 26 '22 12:10 mjnagel

Some improvements have been made. Let's revisit this if necessary.

undera avatar Nov 09 '22 18:11 undera