Consider using local credentials for helm to support private oci-based helm charts
When running `kustomize build` and `kustomize localize`, we use the local git binary and the configuration stored on the user's machine to fetch remote resources. This means the user can access private git repositories, so long as they have the git configuration & authorization to do so.

When using the helm plugin, however, it looks like we set some global variables during execution that prevent helm from using the user's local credentials. This prevents the user from e.g. using private oci-based helm charts.

https://github.com/kubernetes-sigs/kustomize/pull/4614 is an example of what it would look like to avoid setting those variables for helm, enabling users to use private oci-based helm charts in kustomize (provided that they run `helm registry login` on their own machine prior to running `kustomize build`).
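For concreteness, a minimal sketch of that flow, assuming a hypothetical registry host, user, and overlay path:

```sh
# registry.example.com, my-user, and ./my-overlay are placeholders.
# Log in once so the credentials land in the user's local helm config:
helm registry login registry.example.com -u my-user

# With local credentials honored, the private oci:// chart could then resolve:
kustomize build --enable-helm ./my-overlay
```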
At first glance it seems inconsistent to me to be OK with using the user's local credentials for `git` but not `helm`. That said, I unfortunately have no context on why we support private kustomize repositories but seem to be actively preventing private helm charts. I also have no context on what these extra variables do and whether they have any side effects other than preventing private helm charts. If we have more information on the history of these decisions, it will help us understand whether it makes sense to reconsider them.
If we do make this change, another step we could consider from there would be to have `kustomize localize` support pulling down remote helm charts, to ease a workflow where someone is using a git-syncer such as Argo or Config Sync with kustomize + a private oci-based helm chart. The workflow for this use case would be something like (sketched in commands below the list):

- run `helm registry login`
- run `kustomize localize` on their kustomization that includes a private oci-based helm chart
- push the new localized kustomization directory to a private git repo
- configure your git-syncer to pull from your private git repo where you pushed your localized kustomization
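A rough sketch of those steps as commands (the registry, overlay, and repo paths are all hypothetical, and the localize step assumes the proposed chart-pulling support):

```sh
# Placeholders throughout: registry.example.com, ./my-overlay, ../config-repo
helm registry login registry.example.com

# Under this proposal, localize would also vendor the private chart:
kustomize localize ./my-overlay ./localized

# Publish the self-contained directory to the repo the git-syncer watches:
cp -r ./localized ../config-repo/
git -C ../config-repo add localized
git -C ../config-repo commit -m "Add localized kustomization"
git -C ../config-repo push
```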
/kind feature
/triage under-consideration
Hi!
I believe this should be considered a bug rather than a feature.
The culprit here would be `HELM_CONFIG_HOME`, which Kustomize sets by default to a tmpdir. As a workaround, users may have a helm wrapper that unsets this env var, or they can configure it to its normal/default value (helm-wise) via `configHome` in `helmGlobals`. They would have to do something like this:
```yaml
helmGlobals:
  configHome: "/home/MY_USER_HERE/.config/helm"
```
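For the wrapper approach mentioned above, a minimal sketch, assuming helm is on the PATH and the wrapper is saved as an executable `helm-wrapper` (hypothetical name):

```sh
#!/bin/sh
# Drop the tmpdir HELM_CONFIG_HOME that kustomize exports, so helm
# falls back to its normal config home and can find local credentials.
unset HELM_CONFIG_HOME
exec helm "$@"
```

kustomize can then be pointed at it, e.g. `kustomize build --enable-helm --helm-command ./helm-wrapper .`.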
There are supposed to be two files in that helm config folder: `repositories.yaml` and `repositories.lock`. But even if the private OCI repository is not listed in those files, it seems to work as long as those files exist, so I would suspect that if those files don't exist, helm enters a code path where it can't figure out the actual docker credentials and file for a private OCI repository.
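If that suspicion is right, pre-creating those files in the configured config home might be all it takes (an untested sketch based purely on the observation above; the path matches the `helmGlobals` example):

```sh
# Assumption: empty repositories files are enough for helm to take
# the working code path, per the observation above.
mkdir -p "$HOME/.config/helm"
touch "$HOME/.config/helm/repositories.yaml" \
      "$HOME/.config/helm/repositories.lock"
```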
That said, setting this environment variable is not motivated in the code, the original commit, or the PR (https://github.com/kubernetes-sigs/kustomize/pull/3784), so I have no idea why Kustomize sets it in the first place. Kustomize also doesn't really populate that directory, so I don't think it serves any purpose. My best guess is that the charts were supposed to be pulled into this tmp dir in the first place, but as you may know this doesn't work, as kustomize (and helm, actually) will pull the charts inside the local folder, in a `charts` subfolder.
So to fix the issue at hand I have multiple propositions:

- Do not set `configHome` by default to a tmp dir, and let users override it if they really want to. I have a PR that already does this: https://github.com/kubernetes-sigs/kustomize/pull/5434
- Try to populate the config dir in the tmp dir with sensible defaults, so that helm takes a happier code path that allows pulling charts from private OCI repos.
- Make kustomize pull the charts into the tmpdir by default (which might have been the original intent?), and possibly have a different code path when you do `kustomize localize` to pull the charts into the local folder somehow, reusing them on build if they already exist. This one would involve more substantial changes, though, so it may belong to a larger effort.
Any updates on this? It is currently a blocker for my org, with a hacky workaround being the script below for the time being (essentially running the `helm pull` command that `kustomize` would do, which, due to caching, allows a proper `kustomize build` to work as intended). I'd like to dump this for native `kustomize` support ASAP 😅
```bash
for i in $(find . -name 'kustomization.yaml' -printf "%h\n"); do
  errormessage="first-run"
  retries=0
  while [ "${errormessage}" != "" ] && [ ${retries} -lt 5 ]; do
    # Hacky extraction of the failing helm command from the build error
    # (https://unix.stackexchange.com/a/24151)
    errormessage=$(kustomize build --enable-helm "${i}" 2>&1 | sed -n -e 's/^.*unable to run: //p' | cut -d"'" -f 2)
    echo "${errormessage}"
    # Run the extracted command (a `helm pull ...`) so its cached result
    # lets the next build attempt succeed
    ${errormessage}
    ((retries++))
  done
done
```
> Any updates on this? It is currently a blocker for my org, with a hacky workaround being the script below for the time being (essentially running the `helm pull` command that `kustomize` would do).
It's also a blocker for us :(. As you can see above, I have a PR fixing this in the most minimal way possible, and I have also proposed alternative fixes that I would be happy to look into if that PR is not considered... But yeah, waiting for a kustomize reviewer/approver to take a look and validate the linked PR or one of the alternative approaches.
Like @MrFreezeex mentions above, I see this more as a bug than a feature. IMO, using the local credentials of `helm` is how it should work by default (or at least it should be possible to override it). I say this both because `git` utilizes local credentials straight in `kustomize`, but also because I think this is how most tools work overall. To give a few examples off the top of my head:

- NPM: supports dependencies from Git URLs, which utilize local `git` credentials
- Go: same here, uses local `git` credentials, be it HTTPS or SSH
- Azurerm and AzureAD Terraform providers: use the Azure CLI (`az`) by default if nothing else is supplied
- GitHub Terraform provider: uses the GitHub CLI (`gh`) by default if nothing else is supplied
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.