
kustomize not able to reject a resource with name field during replacement

Open mdfaizsiddiqui opened this issue 2 years ago • 7 comments

What happened?

In my base configuration I have a Deployment object with HTTP-based health check probes. I have a requirement to kustomize this Deployment for one of my apps (test-app2) and replace the HTTP probes with TCP health checks.

We have a common replacement file shared by all the apps (/apps/common/internal/replacement.yaml), where we rewrite part of the HTTP health check endpoint with the app name. (Assume that most of our applications have HTTP health checks only, and just one or two apps have this special requirement of a TCP health check.)

In test-app2 (where we want TCP health checks), we first add a patch to add the TCP health check and another patch to remove the HTTP health check. But as soon as the patch removes the HTTP health check, the field referenced by replacement.yaml no longer exists, and we see this error:

➜  kustomize-bug git:(main) kustomize build --load-restrictor LoadRestrictionsNone envs/dev
Error: accumulating resources: accumulation err='accumulating resources from 'apps': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps': accumulating resources: accumulation err='accumulating resources from 'test-app2': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2': accumulating resources: accumulation err='accumulating resources from '../../../../apps/test-app2': read /Users/faizsiddiqui/github/kustomize-bug/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/apps/test-app2': unable to find field "spec.template.spec.containers.[name=main].readinessProbe.httpGet.path" in replacement target
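
For reference, the two patches in test-app2 look roughly like the sketch below. The real files are in the reproduction repo linked further down; the directory layout, container index, and port here are assumptions based on the description above.

# apps/test-app2/kustomization.yaml (sketch; layout, container index, and port are assumptions)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../common/internal

patches:
  # First add the TCP probes, then drop the HTTP probes that the shared
  # replacement.yaml still expects to find.
  - target:
      kind: Deployment
      name: test-app2
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/readinessProbe/tcpSocket
        value:
          port: 8080
      - op: add
        path: /spec/template/spec/containers/0/livenessProbe/tcpSocket
        value:
          port: 8080
      - op: remove
        path: /spec/template/spec/containers/0/readinessProbe/httpGet
      - op: remove
        path: /spec/template/spec/containers/0/livenessProbe/httpGet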

To overcome this, we saw that a replacement target can use reject to exclude a resource, but somehow it is not working for us and we still see the same error as above.

- select:
    kind: Deployment
  reject:
    - name: test-app2
  fieldPaths:
    - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
    - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path

As you can see, we want the replacement to apply to every Deployment object except the one named test-app2 (where we want the TCP health check), but we are still seeing the error.
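
For context, the full shared replacement would look roughly like the sketch below; the source block and the options are assumptions based on the behaviour described above (the app name is spliced into the probe path), and only the reject and fieldPaths lines are taken from the snippet:

# apps/common/internal/replacement.yaml (sketch; source and options are assumptions)
- source:
    kind: Deployment
    fieldPath: metadata.name
  targets:
    - select:
        kind: Deployment
      reject:
        - name: test-app2
      fieldPaths:
        - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
        - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
      options:
        delimiter: "/"
        index: 1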

What did you expect to happen?

We want the reject section in the replacement target to exclude the resource with the given name, so that the replacement is applied to all selected resources except those listed under reject.

How can we reproduce it (as minimally and precisely as possible)?

Git Repo - https://github.com/mdfaizsiddiqui/kustomize-bug/tree/main

Execute -

kustomize build --load-restrictor LoadRestrictionsNone envs/dev

Expected output

The expected output should show test-app2 with only TCP health check probes, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: test-app2
  name: test-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: test-app2
  template:
    metadata:
      labels:
        app: my_service
        service: test-app2
    spec:
      containers:
      - image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/test-app2:cdfeff-123213
        imagePullPolicy: Always
        livenessProbe:
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
        name: main
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080

Actual output

We're seeing this error:

➜  kustomize-bug git:(main) kustomize build --load-restrictor LoadRestrictionsNone envs/dev
Error: accumulating resources: accumulation err='accumulating resources from 'apps': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps': accumulating resources: accumulation err='accumulating resources from 'test-app2': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2': accumulating resources: accumulation err='accumulating resources from '../../../../apps/test-app2': read /Users/faizsiddiqui/github/kustomize-bug/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/apps/test-app2': unable to find field "spec.template.spec.containers.[name=main].readinessProbe.httpGet.path" in replacement target

Kustomize version

v5.0.2

Operating system

MacOS

mdfaizsiddiqui avatar May 10 '23 18:05 mdfaizsiddiqui

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 10 '23 18:05 k8s-ci-robot

Hi @mdfaizsiddiqui

Please add the create: true option to your replacements.

Like the example below:

    options:
      delimiter: "/"
      index: 1
      create: true

Your deployments are missing the path field referenced by the fieldPath spec.template.spec.containers.[name=main].readinessProbe.httpGet.path; the replacement can only resolve the path as far as httpGet.
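
In context, the options block sits under each target in the replacement file, roughly like this (a sketch; only create: true is new, the rest is copied from the snippets above):

- select:
    kind: Deployment
  fieldPaths:
    - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
    - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
  options:
    delimiter: "/"
    index: 1
    create: true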

koba1t avatar May 17 '23 18:05 koba1t

create: true

This suggestion does the opposite of what we're looking for: we don't want httpGet to be part of the final output for test-app2 (see the output below). The create: true option re-creates the httpGet field that we are deliberately removing with the patch.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: test-app2
  name: test-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: test-app2
  template:
    metadata:
      labels:
        app: my_service
        service: test-app2
    spec:
      containers:
      - image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/test-app2:cdfeff-123213
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /test-app2
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
        name: main
        readinessProbe:
          httpGet:
            path: /test-app2
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080

mdfaizsiddiqui avatar May 22 '23 18:05 mdfaizsiddiqui

I understand what you want to do. I think you would prefer to use the Components feature to run the replacements.

For example:

# apps/common/internal/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

replacements:
  - path: replacement.yaml
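
Then every app that should keep the HTTP probes pulls the component in, while test-app2 simply leaves it out, roughly like this (a sketch; test-app1 and the directory layout are assumptions):

# apps/test-app1/kustomization.yaml (sketch; names and paths are assumptions)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../base                # the shared Deployment with the HTTP probes

components:
  - ../common/internal     # the Component above, which runs the replacement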

koba1t avatar May 26 '23 16:05 koba1t

I've also run into this issue, and I think it's because the ordering of operations is difficult to ascertain. Do replacements get applied before/after a patch?

For example, if I have a source and want to target a fieldPath in selected resources, IMHO the reject operation should exclude that resource before the fieldPath is considered.

I.e., kustomize build shouldn't generate an error when the fieldPath exists for selected resources but doesn't exist for rejected ones.

kphunter avatar Nov 16 '23 21:11 kphunter

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 16 '24 16:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 17 '24 17:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Apr 16 '24 17:04 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 16 '24 17:04 k8s-ci-robot