Create a Common kustomize base
Hi wonderful kustomize community.
I was wondering if there is a recommended approach, or if anyone has achieved a common base within Kustomize while also working with multiple overlays/bases.
Here is the tree structure:
```
├── common
│   ├── base
│   │   ├── kong-plugin.yaml
│   │   ├── kustomization.yaml
│   │   ├── sealed-aws-credentials.yaml
│   │   └── sealed-minio-credentials.yaml
│   └── overlays
│       └── prod
│           ├── kustomization.yaml
│           ├── sealed-aws-credentials.yaml
│           └── sealed-minio-credentials.yaml
└── xyz-service
    ├── base
    │   ├── deployment.yaml
    │   ├── hpa.yaml
    │   ├── ingress.yaml
    │   ├── kustomization.yaml
    │   └── namespace.yaml
    └── overlays
        ├── dev
        │   ├── env.local
        │   └── kustomization.yaml
        ├── local
        │   ├── env.local
        │   └── kustomization.yaml
        ├── onprem
        │   ├── env.local
        │   └── kustomization.yaml
        └── prod
            ├── env.local
            └── kustomization.yaml
```
In this setup, I have configured the `xyz-service` base to use the `common` base for managing some of our common resources. Things get tricky with the overlays for `xyz-service`. Right now, all overlays use the service-specific base, which in turn creates resources from both the `xyz-service` and `common` directories. This loads default dev values for all overlays, which I'm OK with, except for prod. I am trying to see if I can do some smart merging when it comes to prod.
In the `xyz-service` `prod` overlay, these are the `bases` in my kustomization.yaml:

```yaml
bases:
  - ../../base/
  - ../../../common/overlays/prod/
```
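As an aside, newer kustomize releases deprecate the `bases:` field in favor of listing the same directories under `resources:`. A minimal sketch of the equivalent prod overlay, using only paths from the tree above:

```yaml
# xyz-service/overlays/prod/kustomization.yaml (sketch of the same composition)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base/                      # xyz-service base, which itself pulls in common/base
  - ../../../common/overlays/prod/   # common prod overlay
```

Note that if `common/overlays/prod` is itself an overlay on `common/base` (not shown in the issue), the common resources may be accumulated twice when building this overlay, which kustomize rejects as a duplicate resource ID; that is worth ruling out alongside the namespace issue.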
Right now I am working through a `namespace` conflict in some of the resources. Overall, I just wanted to gauge the community and understand whether this concept exists or whether anyone else has implemented it in the same manner.
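If the clash is simply that the two bases put resources into different namespaces (an assumption; the issue doesn't say which resources conflict), one option is to let the consuming prod overlay own the namespace, since kustomize's `namespace` field overrides `metadata.namespace` on the namespaced resources it emits:

```yaml
# Hypothetical addition to xyz-service/overlays/prod/kustomization.yaml;
# "xyz-prod" is an invented example namespace, not taken from the issue.
namespace: xyz-prod
```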
@jrowinski3d: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the `triage/accepted` label. The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looking at this quickly it seems a bit more complicated than it needs to be, which will make maintenance a headache.
If sharing credentials between several services, what about pointing to credentials at URLs, one for `dev` and one for `prod`?
E.g. in xyz-service/overlays/prod/kustomization.yaml:

```yaml
resources:
  - my-url/sealed-aws-credentials/prod
```
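kustomize can also fetch a remote directory over git directly in `resources:`, so the `my-url` placeholder above could be a pinned URL rather than a locally checked-out path. A sketch, with an invented org/repo/path/ref purely for illustration:

```yaml
# xyz-service/overlays/prod/kustomization.yaml (hypothetical remote reference)
resources:
  - ../../base
  # Remote directory holding the prod sealed credentials; the repository,
  # path, and ref below are made-up placeholders.
  - github.com/example-org/shared-credentials/prod?ref=v1.0.0
```

Pinning the `ref` keeps the shared credentials version explicit per environment instead of tracking a moving branch.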
And in my experience it's better to surface configurations right in the service... It might be better to add your kong-plugin info to the `xyz-service` base config to allow updates to be chosen deliberately (as opposed to surprising the service with an auto-update).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
If you're looking for input from other kustomize users to see what they have implemented, a better forum may be the kustomize channel in the kubernetes slack: https://kubernetes.slack.com/archives/C9A5ALABG
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.