kustomize
kustomize not able to reject a resource with name field during replacement
What happened?
In my base configuration I have a Deployment object with HTTP-based health check probes. I need to kustomize this Deployment for one of my apps (test-app2) and replace the HTTP health check with a TCP health check.
We have a common replacement file shared by all the apps (/apps/common/internal/replacement.yaml), where we rewrite part of the HTTP health check endpoint path with the app name. (Assume that most of our applications have HTTP health checks only, and just one or two apps have this special requirement of a TCP health check.)
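For context, the shared replacement file is structured roughly like this (a simplified sketch, not copied from the repo; the source field and the delimiter/index options are assumptions):
# apps/common/internal/replacement.yaml (simplified sketch, not the actual file)
- source:
    kind: Deployment
    fieldPath: metadata.name
  targets:
    - select:
        kind: Deployment
      fieldPaths:
        - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
        - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
      options:
        delimiter: "/"
        index: 1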
In test-app2 (where we want TCP health checks), we first add a patch that adds the TCP health check and another patch that removes the HTTP health check (both patches are sketched after the error output below). But as soon as the patch removes the HTTP health check, the field referenced by replacement.yaml no longer exists, and we see this error:
➜ kustomize-bug git:(main) kustomize build --load-restrictor LoadRestrictionsNone envs/dev
Error: accumulating resources: accumulation err='accumulating resources from 'apps': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps': accumulating resources: accumulation err='accumulating resources from 'test-app2': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2': accumulating resources: accumulation err='accumulating resources from '../../../../apps/test-app2': read /Users/faizsiddiqui/github/kustomize-bug/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/apps/test-app2': unable to find field "spec.template.spec.containers.[name=main].readinessProbe.httpGet.path" in replacement target
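For reference, the two patches on test-app2 are along these lines (an illustrative JSON6902-style sketch; the actual repo may use a different patch mechanism, and the container index, file name, and port are assumptions):
# apps/test-app2/tcp-probes-patch.yaml (illustrative name)
# add TCP probes, remove the HTTP probes
- op: add
  path: /spec/template/spec/containers/0/readinessProbe/tcpSocket
  value:
    port: 8080
- op: remove
  path: /spec/template/spec/containers/0/readinessProbe/httpGet
- op: add
  path: /spec/template/spec/containers/0/livenessProbe/tcpSocket
  value:
    port: 8080
- op: remove
  path: /spec/template/spec/containers/0/livenessProbe/httpGet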
To overcome this issue, we saw that a replacement target can use reject to exclude a resource, but somehow it is not working for us and we are still seeing the same error as above.
- select:
    kind: Deployment
  reject:
    - name: test-app2
  fieldPaths:
    - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
    - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
As you can see, we want the replacement to apply to all Deployment objects except the one named test-app2 (where we want the TCP health check), but we are still seeing the error.
What did you expect to happen?
We want the reject section in the replacement block to exclude the resource with the given name, so that the replacement is applied to all selected resources except those listed in the reject block.
How can we reproduce it (as minimally and precisely as possible)?
Git Repo - https://github.com/mdfaizsiddiqui/kustomize-bug/tree/main
Execute -
kustomize build --load-restrictor LoadRestrictionsNone envs/dev
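The overlay layout is roughly the following (a simplified sketch reconstructed from the paths in the error above, not copied verbatim from the repo):
# envs/dev/kustomization.yaml
resources:
  - apps

# envs/dev/apps/kustomization.yaml
resources:
  - test-app2   # plus the other apps

# envs/dev/apps/test-app2/kustomization.yaml
resources:
  - ../../../../apps/test-app2   # outside the build root, hence --load-restrictor LoadRestrictionsNone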
Expected output
The expected output should have test-app2 with only TCP health check probes, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: test-app2
  name: test-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: test-app2
  template:
    metadata:
      labels:
        app: my_service
        service: test-app2
    spec:
      containers:
      - image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/test-app2:cdfeff-123213
        imagePullPolicy: Always
        livenessProbe:
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
        name: main
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
Actual output
We're seeing this error:
➜ kustomize-bug git:(main) kustomize build --load-restrictor LoadRestrictionsNone envs/dev
Error: accumulating resources: accumulation err='accumulating resources from 'apps': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps': accumulating resources: accumulation err='accumulating resources from 'test-app2': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2': accumulating resources: accumulation err='accumulating resources from '../../../../apps/test-app2': read /Users/faizsiddiqui/github/kustomize-bug/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/apps/test-app2': unable to find field "spec.template.spec.containers.[name=main].readinessProbe.httpGet.path" in replacement target
Kustomize version
v5.0.2
Operating system
MacOS
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @mdfaizsiddiqui
Please add the create: true option to your replacements, like the example below:
options:
  delimiter: "/"
  index: 1
  create: true
Your Deployments are missing the path field at fieldPath spec.template.spec.containers.[name=main].readinessProbe.httpGet.path; the replacement can only resolve the path as far as httpGet.
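In context, the suggestion amounts to adding create: true under the options of the replacement target, roughly like this (a sketch that combines the snippets above, not copied from the repo):
- select:
    kind: Deployment
  fieldPaths:
    - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
    - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
  options:
    delimiter: "/"
    index: 1
    create: true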
The create: true suggestion does the opposite of what we're looking for: we don't want httpGet to be part of the final output for test-app2 (see the output below). The create: true option re-creates the httpGet field that we are deliberately removing with our patch.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: test-app2
  name: test-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: test-app2
  template:
    metadata:
      labels:
        app: my_service
        service: test-app2
    spec:
      containers:
      - image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/test-app2:cdfeff-123213
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /test-app2
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
        name: main
        readinessProbe:
          httpGet:
            path: /test-app2
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
I understand what you want to do. I think you would prefer to use the Components feature to execute the replacements. For example:
# apps/common/internal/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
- path: replacement.yaml
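The idea would be that only the apps keeping HTTP probes include the component, while test-app2 omits it, so the replacement never targets test-app2. A rough, illustrative sketch (file names and paths are assumptions, not from the repo):
# apps/test-app1/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
components:
  - ../common/internal

# apps/test-app2/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: tcp-probes-patch.yaml   # adds tcpSocket, removes httpGet
# no components entry, so the httpGet replacement is never applied here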
I've also run into this issue, and I think it's because the ordering of operations is difficult to ascertain. Do replacements get applied before/after a patch?
For example, if I have a source and want to target a fieldPath in selected resources, IMHO the reject operation should exclude that resource before the fieldPath is considered.
That is, kustomize build shouldn't generate an error when the fieldPath exists in selected resources but doesn't exist in rejected resources.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".