controller-runtime
pkg/source Kind WaitForSync panics if called before Start
I got a panic from https://github.com/kubernetes-sigs/controller-runtime/blob/adc9fa96250106ebf0a3c589aa64e085e90870fd/pkg/source/source.go#L177. There seems to be an assumption that https://github.com/kubernetes-sigs/controller-runtime/blob/adc9fa96250106ebf0a3c589aa64e085e90870fd/pkg/source/source.go#L121 has already run.
I found no documentation saying that Start has to complete before WaitForSync is called, and I don't think there should be such a requirement.
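To illustrate the call order in question, here is a minimal sketch (assuming controller-runtime v0.10.x, matching the stack trace below): WaitForSync is invoked on a Kind source before anything has called Start on it. The comments about which internal state Start initializes are an inference from the linked source lines, not a confirmed root cause.

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

func main() {
	// A Kind source on which Start has not yet been called.
	src := &source.Kind{Type: &corev1.Pod{}}

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Calling WaitForSync before Start appears to dereference internal
	// state that only Start initializes, hence a nil pointer panic
	// rather than blocking or returning an error.
	_ = src.WaitForSync(ctx)
}
```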
Please provide the panic stack and describe your usage.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x13fff9c]
goroutine 266 [running]:
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).WaitForSync(0xc0008eda70, {0x1aff2c0, 0xc0479f02a0})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:177 +0xbc
sigs.k8s.io/controller-runtime/pkg/source.(*kindWithCache).WaitForSync(0x1aff2f8, {0x1aff2c0, 0xc0479f02a0})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:86 +0x25
myrepo/controllers.(*dynamicDeviceInformer).WaitForSync(...)
/workspace/controllers/device-materializer.go:284
My usage is a controller with a dynamic set of caches and informers. They all watch the same object type, but each has a distinct label selector.
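To make that setup concrete, here is a hypothetical reconstruction of such a wrapper, inferred from the dynamicDeviceInformer and kindWithCache frames in the stack above. The struct layout and constructor are assumptions; only source.NewKindWithCache is the real controller-runtime v0.10.x API. Each wrapper would hold a cache built with its own label selector.

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// dynamicDeviceInformer is a hypothetical reconstruction of the wrapper in
// the stack trace; the real field and method layout is unknown.
type dynamicDeviceInformer struct {
	src source.SyncingSource
}

func newDynamicDeviceInformer(c cache.Cache) *dynamicDeviceInformer {
	// source.NewKindWithCache binds the source to one specific cache,
	// which matches the kindWithCache frame in the panic stack.
	return &dynamicDeviceInformer{src: source.NewKindWithCache(&corev1.Pod{}, c)}
}

func (d *dynamicDeviceInformer) WaitForSync(ctx context.Context) error {
	// Delegates to Kind.WaitForSync; if nothing has called Start on the
	// underlying Kind yet, this is where the nil dereference happens.
	return d.src.WaitForSync(ctx)
}
```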
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.