[Question] Multiple repositories using the same base (remote)
Hi guys, I'm coming from Helm, so I'm used to values.yaml.
I've tested kustomize and found some known limitations, some deliberate decisions not to support certain features, and a few open questions of my own.
I think I already understand the difference between using Helm and kustomize. When I decided to switch from Helm to kustomize, I wanted to give developers fewer abstractions over the infra and some power to decide and change resources themselves, so the manifest files should live with the application source code to make their lives "easier".
The current flow is: on a new commit, the CI pipeline builds the image, pushes it to the registry, cd's into the path of the desired environment, runs kustomize build ../overlays/[environment] > result-[custom-branch-name].yaml,
and then pushes this result YAML to the GitOps manifests repository that Argo CD watches.
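For context, that CI step is roughly the following sketch (registry, paths and variables like $ENVIRONMENT and $BRANCH_NAME are illustrative, not our real pipeline):
# build and push the image for the new commit
docker build -t "$REGISTRY/$APP:$CI_COMMIT_SHA" .
docker push "$REGISTRY/$APP:$CI_COMMIT_SHA"
# render the manifests for the target environment
cd gitops/kustomize
kustomize build overlays/"$ENVIRONMENT" > "result-$BRANCH_NAME.yaml"
# result-$BRANCH_NAME.yaml is then committed to the GitOps manifests repository that Argo CD watches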
I had to add annotations and new resources common to all applications (two applications at the moment, but this will grow to more than 50 similar ones), and now I'm facing the pain of going into each repository, creating the new resource file, editing kustomization.yaml, and letting the CI process build and generate the new result.yaml.
So I thought about importing remote files: in the application source code, the base kustomization.yaml imports another base from a remote repository.
<-application-source-code->/gitops/kustomize/base/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ssh://my-gitlab-instance-endpoint/kubernetes-kustomize-base-for-api.git
This solved the annotations common to all applications, but I faced another problem: from kubernetes-kustomize-base it is impossible to add placeholders, since there is no substitution at all and no way to know which application repository is importing it.
Is there any solution for this?
I want to avoid maintaining a Helm chart repository and give developers more control, but keep a remote base through which I can force all applications to adopt a new resource, or add an annotation whose value is dynamic per application repository.
The only way I can imagine right now is using some kind of .env file in each overlay, updating the upstream remote base to use ${PLACEHOLDER_VARIABLES}, and running an envsubst command after kustomize build. But am I not reinventing the wheel?
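A minimal sketch of that idea, assuming one .env file per overlay and placeholder names like APP_NAME (all of this is illustrative):
# .env in the overlay directory, e.g.:
#   APP_NAME=my-api
#   TEAM=payments
# the remote base would contain ${APP_NAME} / ${TEAM} placeholders
set -a; . ./.env; set +a
kustomize build . | envsubst '${APP_NAME} ${TEAM}' > result.yaml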
@defaultbr: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
We have a similar use case: we've got a repo for "common components" that developers can add as resources like so:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- base.yml
- https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/path/to/k8s-commons.git//ingress-common/${ENVIRONMENT}
We're using env variables in the k8s-commons repo plus envsubst when doing kustomize build. Targeting patches at these remote resources is a bit difficult though, and currently we've been using wildcard workarounds like this:
patchesJson6902:
- path: patch-common-ingress.yml
  target:
    name: .*-ingress
    kind: Ingress
    group: networking.k8s.io
    version: v1
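The patch file itself isn't shown here, but for that target a plausible patch-common-ingress.yml would just be a small JSON 6902 operation, for example adding an annotation (the key and value below are made up):
# patch-common-ingress.yml (hypothetical content)
# "add" assumes metadata.annotations already exists on the common ingress;
# ~1 is the JSON Pointer escape for "/" in the annotation key
- op: add
  path: /metadata/annotations/example.com~1team
  value: platform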
Would love to have some less hacky solution for this & the wildcard target patching isn't really ideal either 😄
This solved the annotations common to all applications, but I faced another problem: from kubernetes-kustomize-base it is impossible to add placeholders, since there is no substitution at all and no way to know which application repository is importing it
Kustomize is designed to be template-free, i.e. not to have any need for placeholders. You can overwrite values in the base kustomization via patches, replacements, etc., which should be enough for most use cases that require "placeholders".
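For example, an overlay in an application repository can copy a value it owns into a resource that comes from the remote base with a replacements entry; a minimal sketch (the resource names and field path are purely illustrative):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base                      # local base that pulls in the remote kubernetes-kustomize-base
replacements:
- source:
    kind: Deployment
    name: my-api               # value owned by this application repository
    fieldPath: metadata.name
  targets:
  - select:
      kind: Ingress
      name: common-ingress     # resource defined in the remote base
    fieldPaths:
    - spec.rules.0.host
    options:
      delimiter: "."
      index: 0                 # replace the first dot-delimited segment of the existing host
This keeps the remote base template-free while each application repository supplies its own value.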
What are you trying to do with these placeholders? Is there something you need to do that overlays with transformers cannot? For use cases like that (and https://github.com/kubernetes-sigs/kustomize/issues/4680#issuecomment-1182814931), the right way will probably be to write your own KRM extension, which is unfortunately still an alpha feature. If you can give some more detail about what needs to be replaced in the base kustomization though, we may be able to recommend a solution.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.