Regex support in ReplacementTransformer broken.
What happened?
Hello,
Up to kustomize version 4.5.7 I was able to use the ReplacementTransformer to select fields inside target kinds using regex-style wildcards in the fieldPaths field, like so:
...
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
  - source:
      namespace: myservice
      kind: ConfigMap
      name: myservice-config
      fieldPath: data.MYSERVICE_VERSION
    targets:
      - select:
          kind: StatefulSet
        fieldPaths: &fieldPaths1
          - spec.template.spec.containers.[name=myservice-*].image
          - spec.template.spec.initContainers.[name=myservice-*].image
        options: &options
          delimiter: ":"
          index: 1
      - select:
          kind: Deployment
        fieldPaths: *fieldPaths1
        options: *options
...
Here I'm searching all StatefulSet and Deployment containers whose name starts with myservice- and replacing their image tag with data from the "myservice-config" ConfigMap. To do so, I'm splitting the image fieldPath result using the delimiter ":" and picking the tag part of the string with the index: 1 option.
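For example, with MYSERVICE_VERSION=n1287 from the reproduction below, the intended effect on a matching container is (illustration only, taken from the expected output further down):
# before (myserviceap/resources.yaml)
- name: myservice-collection
  image: myservice-collection
# after kustomize build with v4.5.7
- name: myservice-collection
  image: myservice-collection:n1287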
This is no longer possible since version 5.0.0, and I believe I have found the code responsible for this change:
https://github.com/kubernetes-sigs/kustomize/blob/master/api/filters/replacement/replacement.go#L196
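// In 5.x, a fieldPath that matches nothing in a selected resource is a hard error
// (e.g. a [name=myservice-*] wildcard that matches no container); 4.x simply skipped it.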
if len(targetFields) == 0 {
return errors.Errorf(fieldRetrievalError(fp, createKind != 0))
}
This prevents any kind of search using regexes: as soon as one selected resource has no field matching the pattern, the whole build fails.
What did you expect to happen?
I expected kustomize to render the Kubernetes manifests. Instead, all I get is the following error:
./kustomize build --enable-helm /home/mtrojanowski/Projects/myProject/deployments/environments/dev
Error: accumulating components: accumulateDirectory: "recursed accumulation of path '/home/mtrojanowski/Projects/myProject/deployments/components/myservice-version': unable to find field \"spec.template.spec.initContainers.[name=myservice-*].image\" in replacement target"
How can we reproduce it (as minimally and precisely as possible)?
.
├── components
│   └── myservice-version
│       └── kustomization.yaml
├── environments
│   └── dev
│       ├── kustomization.yaml
│       └── overlay
│           ├── config.properties
│           └── kustomization.yaml
└── myserviceap
    ├── kustomization.yaml
    └── resources.yaml
components/myservice-version/kustomization.yaml:
# components/myservice-version/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
  - source:
      namespace: myservice
      kind: ConfigMap
      name: myservice-config
      fieldPath: data.MYSERVICE_VERSION
    targets:
      - select:
          kind: StatefulSet
        fieldPaths: &fieldPaths1
          - spec.template.spec.containers.[name=myservice-*].image
          - spec.template.spec.initContainers.[name=myservice-*].image
        options: &options
          delimiter: ":"
          index: 1
      - select:
          kind: Deployment
        fieldPaths: *fieldPaths1
        options: *options
environments/dev/kustomization.yaml:
# environments/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myservice
resources:
  - ./overlay
  - ../../myserviceap
components:
  - ../../components/myservice-version
environments/dev/overlay/kustomization.yaml:
# environments/dev/overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myservice
configMapGenerator:
  - name: myservice-config
    envs:
      - config.properties
generatorOptions:
  disableNameSuffixHash: true
  labels:
    type: generated
  annotations:
    note: generated
environments/dev/overlay/config.properties:
MYSERVICE_VERSION=n1287
myserviceap/kustomization.yaml:
# myserviceap/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resources.yaml
myserviceap/resources.yaml:
# myserviceap/resources.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-alerting
  labels:
    app: myservice-alerting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-alerting
  template:
    metadata:
      labels:
        app: myservice-alerting
    spec:
      containers:
        - name: myservice-alerting
          image: myservice-alerting
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          startupProbe:
            httpGet:
              path: /health
              port: 80
            periodSeconds: 10
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            periodSeconds: 30
            failureThreshold: 5
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myservice-collection
  labels:
    app: myservice-collection
spec:
  replicas: 1
  serviceName: myservice-collection
  selector:
    matchLabels:
      app: myservice-collection
  template:
    metadata:
      labels:
        app: myservice-collection
    spec:
      containers:
        - name: myservice-collection
          image: myservice-collection
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
          startupProbe:
            httpGet:
              path: /api/health
              port: 8000
            periodSeconds: 10
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /api/health
              port: 8000
            periodSeconds: 30
            failureThreshold: 5
        - name: collection-elasticsearch
          image: myservice-collection-elasticsearch
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9200
        - name: myservice-collection-engine
          image: myservice-collection-engine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8001
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myservice-etcd
  labels:
    app: myservice-etcd
  annotations:
    myservice-backup: "true"
spec:
  replicas: 1
  serviceName: myservice-etcd
  selector:
    matchLabels:
      app: myservice-etcd
  template:
    metadata:
      labels:
        app: myservice-etcd
    spec:
      containers:
        - name: etcd
          image: myservice-etcd
          imagePullPolicy: IfNotPresent
        - name: etcd-backup
          image: myservice-backup
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-prometheus
  labels:
    app: myservice-prometheus
  annotations:
    configmap.reloader.stakater.com/reload: "myservice-prometheus"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-prometheus
  template:
    metadata:
      labels:
        app: myservice-prometheus
    spec:
      serviceAccount: prom-sd
      serviceAccountName: prom-sd
      containers:
        - name: prometheus
          image: myservice-prometheus
          imagePullPolicy: IfNotPresent
Expected output
kustomize version {Version:kustomize/v4.5.7 GitCommit:56d82a8378dfc8dc3b3b1085e5a6e67b82966bd7 BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
kustomize build --enable-helm environments/dev
apiVersion: v1
data:
  MYSERVICE_VERSION: n1287
kind: ConfigMap
metadata:
  annotations:
    note: generated
  labels:
    type: generated
  name: myservice-config
  namespace: myservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myservice-alerting
  name: myservice-alerting
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-alerting
  template:
    metadata:
      labels:
        app: myservice-alerting
    spec:
      containers:
      - image: myservice-alerting:n1287
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 80
          periodSeconds: 30
        name: myservice-alerting
        ports:
        - containerPort: 80
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /health
            port: 80
          periodSeconds: 10
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: myservice-prometheus
  labels:
    app: myservice-prometheus
  name: myservice-prometheus
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-prometheus
  template:
    metadata:
      labels:
        app: myservice-prometheus
    spec:
      containers:
      - image: myservice-prometheus
        imagePullPolicy: IfNotPresent
        name: prometheus
      serviceAccount: prom-sd
      serviceAccountName: prom-sd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: myservice-collection
  name: myservice-collection
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-collection
  serviceName: myservice-collection
  template:
    metadata:
      labels:
        app: myservice-collection
    spec:
      containers:
      - image: myservice-collection:n1287
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /api/health
            port: 8000
          periodSeconds: 30
        name: myservice-collection
        ports:
        - containerPort: 8000
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /api/health
            port: 8000
          periodSeconds: 10
      - image: myservice-collection-elasticsearch
        imagePullPolicy: IfNotPresent
        name: collection-elasticsearch
        ports:
        - containerPort: 9200
      - image: myservice-collection-engine:n1287
        imagePullPolicy: IfNotPresent
        name: myservice-collection-engine
        ports:
        - containerPort: 8001
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    myservice-backup: "true"
  labels:
    app: myservice-etcd
  name: myservice-etcd
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-etcd
  serviceName: myservice-etcd
  template:
    metadata:
      labels:
        app: myservice-etcd
    spec:
      containers:
      - image: myservice-etcd
        imagePullPolicy: IfNotPresent
        name: etcd
      - image: myservice-backup
        imagePullPolicy: Always
        name: etcd-backup
        ports:
        - containerPort: 8080
Actual output
kustomize version v5.0.1
kustomize build --enable-helm environments/dev
Error: accumulating components: accumulateDirectory: "recursed accumulation of path '/home/mtrojanowski/Projects/myProject/deployments/components/myservice-version': unable to find field \"spec.template.spec.initContainers.[name=myservice-*].image\" in replacement target"
Kustomize version
v5.0.1
Operating system
None
I'm facing very similar issues in my projects. If this behavior is not restored, I will have to rewrite a lot of the Kustomize specs I'm using now :(
Similar, but different: going from 4.x to 5.x I'm seeing
Error: unable to find field "spec.template.metadata.labels.[app.kubernetes.io/version]" in replacement target
for a replacement template like
source:
  kind: Rollout
  name: XXX
  fieldPath: spec.template.spec.containers.0.image
  options:
    delimiter: ':'
    index: 1
targets:
  - select:
      kind: Namespace
    fieldPaths:
      - metadata.labels.[app.kubernetes.io/version]
  - select:
      namespace: XXX
    fieldPaths:
      - metadata.labels.[app.kubernetes.io/version]
      - spec.template.metadata.labels.[app.kubernetes.io/version]
      - spec.template.metadata.labels.[tags.datadoghq.com/version]
/assign
I am also experiencing this issue with replacements.
The previous behavior on create: false was to ignore if missing. Now, create: false fails if missing.
This is the config that I am using:
source:
  kind: ConfigMap
  name: my-config-map
  fieldPath: data.my-field
targets:
  - select:
      apiVersion: storage.k8s.io
      kind: StorageClass
    fieldPaths:
      - parameters.network
    options:
      create: false
If the following resource exists, then the replacement works:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
parameters:
  network: <REPLACE_ME>
But if the resource does not contain parameters.network, then kustomize fails with this error:
Error: unable to render the source configs in /path/to/directory: failed to run kustomize build in /path/to/directory, stdout: : Error: accumulating components: accumulateDirectory: "recursed accumulation of path '/path/to/directory/components': unable to find field "parameters.network" in replacement target
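For example, a StorageClass like the following (name hypothetical), which simply has no parameters.network, used to be skipped silently and now fails the whole build:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-other-storage-class  # hypothetical resource without parameters.network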
The regex issue is interesting and I will try to find time to think about it.
But -
The previous behavior on create: false was to ignore if missing. Now, create: false fails if missing.
That was an intentional change that we announced in the release notes. Because it was intentional and released with a major bump, I don't think we would consider that an issue or a regression.
@natasha41575 any thoughts about what I ran across, which wasn't using a regex, but had the same error as originally reported?
I'm having the same issue as @m-trojanowski with replacements no longer working after 4.5.7
@natasha41575
It would be really great to have such a feature. Before 5.0.0 I was able to cycle through all my manifests and alter only the ones I wanted to, but now I'm quite limited because of this. Maybe there is a chance to introduce some extra flag to alter this behavior?
I can offer my time as well to help implement it if needed.
Cheers!
I'm also having problems trying to use replacements on targets with wildcards:
replacements:
  - source:
      kind: GitRepository
      fieldPath: metadata.name
    targets:
      - select:
          kind: HelmRelease
        fieldPaths:
          - spec.valuesFrom.*.name
Hi @natasha41575. I am facing the same issues with the new create: false behavior, which now fails when the target field is missing. This completely breaks my current workflow of using multiple overlays to create preview environments, so I have to stick with v4.
It would be great to have an option/flag to allow using the previous behavior for specific replacements.
Hi @natasha41575, do you know if there is any plan to address this issue? My current specs are not rendering properly because of the change reported here, and because of that I can't migrate to Kustomize v5 :(
I'm experiencing this regex issue with kustomize 5.1.0. Would love to see it fixed.
Hey any update on this @natasha41575?
Maybe there should be an additional option, continueOnFailure: true or skip: true, covering the case where some possible replacement targets don't have and don't need the value. I'm heavily using labelSelector so that I don't have to manage every resource by name. Unfortunately, I'm now forced to maintain the reject list with the resources that are currently failing.
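A sketch of what that could look like, with purely hypothetical syntax (neither option exists in kustomize today), reusing the fieldPaths from the original report:
replacements:
  - source:
      kind: ConfigMap
      name: myservice-config
      fieldPath: data.MYSERVICE_VERSION
    targets:
      - select:
          kind: StatefulSet
        fieldPaths:
          - spec.template.spec.initContainers.[name=myservice-*].image
        options:
          delimiter: ":"
          index: 1
          continueOnFailure: true  # hypothetical; or skip: true, to ignore targets where the path matches nothing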
Hi @natasha41575, I've added draft PR #5280 so we can continue the discussion on how to approach this issue and whether it's even possible. Cheers!
Maybe there should be an additional option, continueOnFailure: true or skip: true, covering the case where some possible replacement targets don't have and don't need the value. I'm heavily using labelSelector so that I don't have to manage every resource by name. Unfortunately, I'm now forced to maintain the reject list with the resources that are currently failing.
I second this, as the 5.x versions currently render my whole project useless. I use kustomize together with ArgoCD and have set up a project consisting of 10+ OpenShift namespaces, each provided with 100+ components (dc, svc, rt, you name it), with the OpenShift templates being generated by kustomize.
Components may share certain parameters, or a parameter may be used by only one component. The parameters are provided by a key-value template and are replaced via replacements blocks. This mechanism is now completely broken, and I will have to rewrite the whole thing and use rejects like crazy, or split up the project into hundreds of small files, which will result in a complete mess.
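A rough sketch of the pattern I mean, with all names made up for illustration: parameters live in one key-value ConfigMap and are fanned out to resources selected by label rather than by name, so any selected resource that does not contain a given path now fails the build:
replacements:
  - source:
      kind: ConfigMap
      name: parameters           # hypothetical key-value "template"
      fieldPath: data.IMAGE_TAG  # hypothetical key
    targets:
      - select:
          labelSelector: app.kubernetes.io/part-of=myproject  # hypothetical label
        fieldPaths:
          - spec.template.spec.containers.0.image
        options:
          delimiter: ":"
          index: 1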
@m-trojanowski thanks for your proposal, I hope it will be taken into consideration. I can live with an additional option in the replacements block, but would rather propose a command-line option to disable the behavior of erroring out on targets that are not found.
@KlausMandola as I have a very similar issue in my projects, I'm currently planning to migrate from Kustomize to Jsonnet. It should not be very complicated, since YAML can easily be transformed to JSON and the simplest use of Jsonnet is to work with plain JSON files.
We are also badly affected by this change (see my other comment on the PR)
I have opened a formal feature request for allowing users to opt into the pre-5.0.0 behavior with a flag: https://github.com/kubernetes-sigs/kustomize/issues/5440
@renaudguerin please also take a look at this PR https://github.com/kubernetes-sigs/kustomize/pull/5280, which should also resolve this issue, but for some reason nobody is willing to review it :(
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hi, is there any update on this issue?
I am on kustomize version v5.4.3 and still hitting it:
Error: unable to find field "spec.xx.xxx" in replacement target
Yep, hitting this too. Just tried updating from v4.5.7, which ignored non-matches, to v5.5.0, which errors. The new behavior breaks a bunch of stuff on my end 😬. Will try to work around it!
Our workaround is to spin up a Docker container running an older kustomize and work inside it:
#
# Run an older version of Kustomize (4.5.7)
#
# https://hub.docker.com/r/line/kubectl-kustomize/tags
#
# docker pull line/kubectl-kustomize:1.26.1-4.5.7
#
docker run --rm \
-v "/Users/$USER/.kube:/root/.kube:ro" \
-v "$PWD/k8s:/k8s:ro" \
-ti line/kubectl-kustomize:1.26.1-4.5.7
# install helm
apk --no-cache add curl bash git
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 755 get_helm.sh
VERIFY_CHECKSUM=false ./get_helm.sh
# then you can run normal kustomize commands
@natasha41575, @ciaccotaco, @RomanOrlovskiy: #5778 adds support for ignoring missing fields, which provides the previous behavior in an intentional way.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale