
Panic: runtime error: invalid memory address or nil pointer dereference

AxelAlvarsson opened this issue 2 years ago · 5 comments

What happened?

Using:

  • Mac Darwin Kernel Version 23.0.0 arm64
  • Kustomize version v5.2.1

Running
kustomize build --load-restrictor LoadRestrictionsNone --enable-alpha-plugins --enable-exec .

Causes error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x40 pc=0x104618b54]

goroutine 1 [running]:
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).Content(...)
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:707
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).getMapFieldValue(0x14002260b08?, {0x10476bfb1?, 0x7?})
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:420 +0x54
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).GetApiVersion(...)
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:402
sigs.k8s.io/kustomize/kyaml/resid.GvkFromNode(0x140017648b8?)
	sigs.k8s.io/kustomize/kyaml/resid/gvk.go:32 +0x40
sigs.k8s.io/kustomize/api/resource.(*Resource).GetGvk(...)
	sigs.k8s.io/kustomize/api/resource/resource.go:57
sigs.k8s.io/kustomize/api/resource.(*Resource).CurId(0x1400044e960)
	sigs.k8s.io/kustomize/api/resource/resource.go:449 +0x48
sigs.k8s.io/kustomize/api/resmap.(*resWrangler).GetMatchingResourcesByAnyId(0x14002260ee8?, 0x14001c81140)
	sigs.k8s.io/kustomize/api/resmap/reswrangler.go:184 +0xac
sigs.k8s.io/kustomize/api/resmap.demandOneMatch(0x14002260ff8, {{{0x140016a08f8, 0x5}, {0x140016a08fe, 0x2}, {0x140016a0920, 0x7}, 0x0}, {0x140021f8ec0, 0x19}, ...}, ...)
	sigs.k8s.io/kustomize/api/resmap/reswrangler.go:227 +0xc8
sigs.k8s.io/kustomize/api/resmap.(*resWrangler).GetById(0x14002220140?, {{{0x140016a08f8, 0x5}, {0x140016a08fe, 0x2}, {0x140016a0920, 0x7}, 0x0}, {0x140021f8ec0, 0x19}, ...})
	sigs.k8s.io/kustomize/api/resmap/reswrangler.go:214 +0x9c
sigs.k8s.io/kustomize/api/internal/builtins.(*PatchTransformerPlugin).transformStrategicMerge(0xf?, {0x104a4e998, 0x1400000f2c0})
	sigs.k8s.io/kustomize/api/internal/builtins/PatchTransformer.go:112 +0x2d0
sigs.k8s.io/kustomize/api/internal/builtins.(*PatchTransformerPlugin).Transform(0x1400000f2c0?, {0x104a4e998?, 0x1400000f2c0?})
	sigs.k8s.io/kustomize/api/internal/builtins/PatchTransformer.go:87 +0x2c
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).Transform(0x140021a14a0?, {0x104a4e998, 0x1400000f2c0})
	sigs.k8s.io/kustomize/api/internal/target/multitransformer.go:30 +0x88
sigs.k8s.io/kustomize/api/internal/accumulator.(*ResAccumulator).Transform(...)
	sigs.k8s.io/kustomize/api/internal/accumulator/resaccumulator.go:141
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).runTransformers(0x1400007eeb0, 0x1400007bf80)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:343 +0x1ac
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).accumulateTarget(0x1400007eeb0, 0x140002a2928?)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:237 +0x318
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).AccumulateTarget(0x0?)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:194 +0x10c
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).makeCustomizedResMap(0x1400007eeb0)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:135 +0x68
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).MakeCustomizedResMap(...)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:126
sigs.k8s.io/kustomize/api/krusty.(*Kustomizer).Run(0x14002261c98, {0x104a49758, 0x104fe5840}, {0x16bd6f88a, 0x1})
	sigs.k8s.io/kustomize/api/krusty/kustomizer.go:90 +0x248
sigs.k8s.io/kustomize/kustomize/v5/commands/build.NewCmdBuild.func1(0x140001d6300?, {0x14000048ba0?, 0x4?, 0x104768ff8?})
	sigs.k8s.io/kustomize/kustomize/v5/commands/build/build.go:82 +0x15c
github.com/spf13/cobra.(*Command).execute(0x14000270600, {0x14000048b40, 0x6, 0x6})
	github.com/spf13/[email protected]/command.go:940 +0x658
github.com/spf13/cobra.(*Command).ExecuteC(0x14000270000)
	github.com/spf13/[email protected]/command.go:1068 +0x320
github.com/spf13/cobra.(*Command).Execute(0x104ef95a8?)
	github.com/spf13/[email protected]/command.go:992 +0x1c
main.main()
	sigs.k8s.io/kustomize/kustomize/v5/main.go:14 +0x20

(Names have been changed in some of the following.)

kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: namespace-name

resources:
  - ../../base

patches:
  - path: resource-patch.yaml
  - path: delete-some-worker.yaml

delete-some-worker.yaml file:

$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: first-worker
---
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: second-worker
---
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: third-worker

The base some-worker.yaml definitions are fine and work in any other context, so the base is not the issue.
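For completeness, a minimal sketch of the kind of base CronJob the delete patches above target; the schedule, container, and image here are illustrative assumptions, not taken from the actual base:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: first-worker            # matches the name targeted by the first $patch: delete
spec:
  schedule: "*/5 * * * *"       # illustrative schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: worker
              image: busybox    # illustrative image
              command: ["sh", "-c", "echo working"]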

What did you expect to happen?

Expecting a successful manifest output.

How can we reproduce it (as minimally and precisely as possible)?

Use the same multi-document delete format as in the delete-some-worker.yaml file above.

TESTED THIS:

  • If I comment out any two of the three definitions in the delete-some-worker.yaml file, it works.
  • Likewise, if I split them out into their own files, it works.

My current guess is that a panic on several $patch: delete definitions differing only in metadata.name is not intentional, but rather an uncovered use case?
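For reference, a minimal sketch of the split-file workaround from the list above; the delete-first-worker.yaml / delete-second-worker.yaml / delete-third-worker.yaml names are hypothetical, each file holding exactly one of the $patch: delete documents shown earlier:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: namespace-name

resources:
  - ../../base

patches:
  - path: resource-patch.yaml
  - path: delete-first-worker.yaml    # holds only the first-worker $patch: delete document
  - path: delete-second-worker.yaml   # holds only the second-worker $patch: delete document
  - path: delete-third-worker.yaml    # holds only the third-worker $patch: delete document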

Expected output

No response

Actual output

No response

Kustomize version

v5.2.1

Operating system

MacOS

AxelAlvarsson · Dec 04 '23 15:12

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Dec 04 '23 15:12

I have the same issue with multiple $patch: delete patches in the same file, which prevents me from converting patchesStrategicMerge to patches. Seems related to #5049.
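For context, a minimal sketch of the conversion being described, using the delete-some-worker.yaml file from the original report; patchesStrategicMerge is the deprecated field and patches with a path entry is its replacement:

# Deprecated field being migrated away from:
patchesStrategicMerge:
  - delete-some-worker.yaml

# Replacement field, which hits the panic when the file contains multiple $patch: delete documents:
patches:
  - path: delete-some-worker.yaml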

yogeek · Jan 03 '24 10:01

I ran into this problem too when I had multiple $patch: delete in a single patch file.

For example:

---
$patch: delete
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: XXXX
  namespace: XXXX
---
$patch: delete
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: XXXX
  namespace: XXXX

Breaking them out into their own patch .yaml files works, though, and so does adding individual inline patches to the kustomization.yaml. For example:

patches:
  - patch: |-
      $patch: delete
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: XXXX
        namespace: XXXX
  - patch: |-
      $patch: delete
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: XXXX
        namespace: XXXX

As yogeek said, I think this is intentional, based on https://github.com/kubernetes-sigs/kustomize/issues/5049#issuecomment-1440604403

CyDickey-msr · Feb 22 '24 22:02

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · May 22 '24 22:05

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Jun 21 '24 23:06

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Jul 21 '24 23:07

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the previous comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Jul 21 '24 23:07

Still getting this panic with more than one $patch: delete in v5.4.3.

jdmarble · Sep 10 '24 18:09