kustomize
Improve documentation for configuring generators to work with CRs
Describe the bug
I'm using Kyma to define serverless (i.e. lambda) functions. Neither secretGenerator nor configMapGenerator works with these Function resources: because they are not Deployments, the generated names are not propagated into them.
Files that can reproduce the issue
kustomization.yaml
resources:
- fcn-search-orders2.yaml
namespace: dev
configMapGenerator:
- literals:
  - ODATA_ORDERS_URL=https://something.commerce.ondemand.com/odata2webservices/Order
  name: plumbing
secretGenerator:
- name: credentials
  envs:
  - secret.properties
- literals:
  - db-password=123456
  name: sl-demo-app
  type: Opaque
fcn-search-orders2.yaml
apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  creationTimestamp: null
  name: search-orders
spec:
  deps: ""
  env:
  - name: ODATA_ORDERS_URL
    valueFrom:
      configMapKeyRef:
        key: ODATA_ORDERS_URL
        name: plumbing
  - name: ODATA_USER
    valueFrom:
      secretKeyRef:
        key: ODATA_USER
        name: credentials
  - name: ODATA_PASSWORD
    valueFrom:
      secretKeyRef:
        key: ODATA_PASSWORD
        name: credentials
  runtime: nodejs12
  source: "not important for the bug report"
Actual output
apiVersion: v1
data:
  ODATA_ORDERS_URL: https://something.commerce.ondemand.com/odata2webservices/Order
kind: ConfigMap
metadata:
  name: plumbing-56g66cf2dt
  namespace: dev
---
apiVersion: v1
data:
  ODATA_PASSWORD: bmltZGE=
  ODATA_USER: cGhpbDI=
kind: Secret
metadata:
  name: credentials-75cb7f9m64
  namespace: dev
type: Opaque
---
apiVersion: v1
data:
  db-password: MTIzNDU2
kind: Secret
metadata:
  name: sl-demo-app-kt9947h4gc
  namespace: dev
type: Opaque
---
apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  creationTimestamp: null
  name: search-orders
  namespace: dev
spec:
  deps: ""
  env:
  - name: ODATA_ORDERS_URL
    valueFrom:
      configMapKeyRef:
        key: ODATA_ORDERS_URL
        name: plumbing
  - name: ODATA_USER
    valueFrom:
      secretKeyRef:
        key: ODATA_USER
        name: credentials
  - name: ODATA_PASSWORD
    valueFrom:
      secretKeyRef:
        key: ODATA_PASSWORD
        name: credentials
  runtime: nodejs12
  source: not important for the bug report
The expectation was that the secretKeyRef and configMapKeyRef entries would be updated to reference the newly generated Secret and ConfigMap. As it stands, however, the generated hash suffix is not appended to the names referenced in the Function.
Can kustomize simply look for secretKeyRef/configMapKeyRef constructs in any kind? It looks like the name-reference handling is hard-coded for Deployments.
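For illustration, the Function's env section was expected to come out roughly like the sketch below; the hash suffixes are copied from the generated ConfigMap and Secret names in the actual output above.
  env:
  - name: ODATA_ORDERS_URL
    valueFrom:
      configMapKeyRef:
        key: ODATA_ORDERS_URL
        name: plumbing-56g66cf2dt      # generated name, hash suffix appended
  - name: ODATA_USER
    valueFrom:
      secretKeyRef:
        key: ODATA_USER
        name: credentials-75cb7f9m64   # generated name, hash suffix appended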
Kustomize version
v4.1.3
Platform
Linux & Windows
Additional context
https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/crd/README.md
Try the Kustomize configurations field?
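For the configurations suggestion, a minimal transformer-config sketch follows. It is untested against the Kyma Function CRD; the file name kustomizeconfig.yaml is arbitrary, and the paths assume the env layout shown in fcn-search-orders2.yaml above.
# kustomizeconfig.yaml (name chosen here for illustration)
# Tell the name-reference transformer that Function resources refer to
# ConfigMaps and Secrets via env[].valueFrom.{configMap,secret}KeyRef.name.
nameReference:
- kind: ConfigMap
  version: v1
  fieldSpecs:
  - path: spec/env/valueFrom/configMapKeyRef/name
    kind: Function
- kind: Secret
  version: v1
  fieldSpecs:
  - path: spec/env/valueFrom/secretKeyRef/name
    kind: Function
The file is then referenced from the kustomization:
configurations:
- kustomizeconfig.yaml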
It seems to me that the crds field could also be used to get the desired behavior, provided you insert the required annotations into the schema: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/crds/
@monopole's answer is correct. Please note that you have to use standalone Kustomize and not kubectl apply -k, as the latest kubectl does not understand configurations. I did not try the other solutions because I did not want to modify an existing CRD that I do not control. I'm assuming that when Kyma upgrades, those definitions might be modified too, which would break the annotations. My assumption could be wrong.
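Concretely, the distinction drawn above amounts to something like the following (a sketch, assuming the kustomization lives in the current directory):
# Standalone kustomize understands the configurations field:
kustomize build . | kubectl apply -f -
# kubectl's bundled kustomize (at the time of this comment) did not:
kubectl apply -k .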
I'm not closing this, as I think the documentation should be modified to at least hint that name references only work out of the box for certain kinds, and then link to how to configure other kinds such as CRs.
/retitle Improve documentation for configuring generators to work with CRs
/kind documentation
/help
Always a good idea to make the docs better! If you have time to help, please note that the docs' source is here: https://github.com/kubernetes-sigs/cli-experimental.
I did not want to modify an existing CRD that I do not control. I'm assuming that when Kyma upgrades those definitions might be modified too. This would break the annotations
In case it isn't clear, to provide the CRD to Kustomize, you'd be committing a copy of it alongside your Kustomization, but it would only be used by Kustomize internally and would not appear in the output. So you wouldn't be affecting your cluster by using this solution. If Kyma is versioning Function normally, you'd simply need to update your committed copy of the CRD at the same time you update your Function resources to a new APIVersion.
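A minimal sketch of that arrangement, with a hypothetical file name for the committed copy of the CRD; per the crds reference linked above, the copied schema would also need the OpenAPI extensions (such as x-kubernetes-object-ref-kind and x-kubernetes-object-ref-name-key) that mark name-reference fields:
# kustomization.yaml (addition)
crds:
- function-crd.yaml   # local copy of the Kyma Function CRD, consumed only by kustomize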
@KnVerey: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".