OCI support in Helm builtin
This is currently WIP.
fixes: #4381
Questions for owners @natasha41575 @yuwenma @KnVerey: it seems @monopole added environment variables that point Helm's data and cache directories at a temp directory. Do we know why? This is undesirable for OCI because most OCI repositories are protected with auth, and that auth only works if the user has already authenticated with Helm ahead of time. Example:
helm registry login -u _json_key --password-stdin https://us-central1-docker.pkg.dev
After that is run I am able to do things like:
helmCharts:
- name: chart1
version: 0.1.0
repo: oci://us-central1-docker.pkg.dev/mikebz-ex1/charts
valuesInline:
nameOverride: foobar
and, as long as Helm's configuration is not redirected to the temp location, I can easily pull the chart.
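To make that failure mode concrete, here is a rough shell sketch of what I'm describing. The chart coordinates reuse the example above (oci://us-central1-docker.pkg.dev/mikebz-ex1/charts/chart1 at 0.1.0), key.json is a stand-in for a service-account key file, and the ~/.config/helm path assumes Linux defaults for Helm 3.8+; the env-var override at the end mirrors the temp-directory behavior, not the exact builtin code.
# Log in once; helm stores the registry credentials under its default config home.
helm registry login -u _json_key --password-stdin https://us-central1-docker.pkg.dev < key.json
ls ~/.config/helm/registry/config.json   # where the credentials land on Linux

# Works: helm finds the stored credentials for the registry.
helm pull oci://us-central1-docker.pkg.dev/mikebz-ex1/charts/chart1 --version 0.1.0

# Fails with an auth error: pointing HELM_CONFIG_HOME (and friends) at a fresh temp
# directory, the way the builtin does, hides those credentials from helm.
HELM_CONFIG_HOME=$(mktemp -d) HELM_CACHE_HOME=$(mktemp -d) HELM_DATA_HOME=$(mktemp -d) \
  helm pull oci://us-central1-docker.pkg.dev/mikebz-ex1/charts/chart1 --version 0.1.0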
One idea is to include some sort of authentication in kustomization.yaml, but I think that would be undesirable.
Things that are left to figure out:
- [ ] Create a publicly available registry (or similar) that can be used to ensure the unit tests work.
- [ ] Figure out whether unsetting the HELM* variables would have security or other undesirable impacts (one possible direction is sketched after this list).
- [ ] Determine whether there is any issue with updating the default Helm download to 3.8+; OCI support is enabled by default in those builds.
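One possible direction for the HELM* question above, sketched purely as an assumption rather than what this PR currently does: keep the hermetic temp directories for Helm's cache and data, but hand helm the user's existing registry credentials via HELM_REGISTRY_CONFIG (honored by Helm 3.8+). The credential path below assumes Linux defaults.
# Sketch only: hermetic cache/data dirs, but reuse the credentials created by `helm registry login`.
HELM_CACHE_HOME=$(mktemp -d) HELM_DATA_HOME=$(mktemp -d) \
HELM_REGISTRY_CONFIG="${HOME}/.config/helm/registry/config.json" \
  helm pull oci://us-central1-docker.pkg.dev/mikebz-ex1/charts/chart1 --version 0.1.0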
@mikebz: This PR has multiple commits, and the default merge method is: merge. You can request commits to be squashed using the label: tide/merge-method-squash
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @mikebz. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: mikebz
To complete the pull request process, please assign natasha41575 after the PR has been reviewed.
You can assign the PR to them by writing /assign @natasha41575 in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
/label tide/merge-method-squash
This PR would probably be more appropriate in this repo: https://github.com/kubernetes-sigs/krm-functions-registry/tree/main/krm-functions/sig-cli/render-helm-chart
On the kustomize side, we have paused development on the helm plugin until we are able to migrate to the KRM function. The long-term plan is here: https://github.com/kubernetes-sigs/kustomize/issues/4401, and work is in progress on this front.
I am happy to add the work there as well. I would say there are both short-term and longer-term goals here: (1) unblock customers who will use the builtin, and (2) migrate.
/hold
Having the test infra will be great, but it's also been pointed out to me that this is a hotfix for a particular use case. I'm designing a more general solution that we will put into the KRM function, after which we can see if we want to put it in the builtin plugin. But this is a more substantial feature than I'd realized when I first saw this PR, so I believe a better course of action would be to prioritize the migration (which I plan to do this quarter).
Again, the helm builtin is slated for deprecation and eventual removal, so our support on it is extremely limited.
An update from the infra folks:
The k8s infra doesn't yet properly support Artifact Registry, and they are hesitant to provision it for us ad-hoc for a single-cluster use case that the kustomize maintainers are skeptical about. It is more realistic for them to support it for us for the KRM functions registry a few months from now.
Thanks for the update. One thing to consider is that the current tests hit third-party chart registries. Maybe we can create one for the KRM functions and still use it for both the builtin and the function. There is no precedent right now that all of the test repositories must be CNCF-owned.
Is it truly unacceptable to accept this patch as a stop-gap measure to unblock the kustomize user base that wants to use OCI registries?
I agree the KRM function will be better long-term, but as it stands, the user base has to run external commands to pull OCI charts and make them available to kustomize locally before invoking it. Seeing as the helm team broke semver compliance, it stands to reason that a hotfix should be acceptable to address the upstream product's bad behavior.
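For reference, the external-command workaround looks roughly like this; it assumes Helm 3.8+, reuses the chart coordinates from the PR description, and relies on kustomize's default chartHome of charts/ (a chart already present there is used instead of being fetched):
# Pull and unpack the OCI chart ahead of time...
helm pull oci://us-central1-docker.pkg.dev/mikebz-ex1/charts/chart1 \
  --version 0.1.0 --untar --untardir charts
# ...then build with a helmCharts entry that names chart1 (no repo field needed),
# so the pre-pulled charts/chart1 copy is used.
kustomize build --enable-helm .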
KRM functions aren't yet a viable replacement for built-in plugins. For instance, of the GitOps tools that support kustomize, which ones support KRM functions? Are there examples of executing KRM functions in CI?
There was a suggestion to fork kustomize here: https://github.com/argoproj/argo-cd/issues/5553#issuecomment-1028529945
Since there are uses of the helm built-in plugin, I recommend continuing to support it until KRM function execution obstacles are overcome.
Example user: https://twitter.com/todaywasawesome/status/1532078601448787968
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Waiting for this issue to be closed 😁
I am guessing this addition is unwelcome. We have already created a workaround in ConfigSync, so my interest in lobbying for this more has waned :)
I hope this issue can be solved, because I need to pull charts from an OCI repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close
/reopen
@olfway: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/reopen
@mikebz: Reopened this PR.
In response to this:
/reopen
/retest
@mikebz: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.
In response to this:
/retest
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close
@mikebz bump?
/reopen
@mikebz: Reopened this PR.
In response to this:
/reopen
@mikebz: PR needs rebase.