controller can't recover from missing CRD
I'm running a controller with sigs.k8s.io/controller-runtime v0.15.1. I've found that if the CRDs are not applied at the time mgr.Start() is called, the manager does not recover by itself after the CRDs are applied, and continuously prints this error:
if kind is a CRD, it should be installed before calling Start
To make things worse, /healthz keeps passing, which rules out a quick fix via a failing liveness probe.
This doesn't happen when running with v0.13.
Stranger still, I even ran Manager.GetRESTMapper().RESTMapping in a loop until it succeeded before calling mgr.Start(), but that didn't help, so the claim that "it should be installed before calling Start" seems misleading.
After about two minutes, the controller restarts itself due to "timed out waiting for cache to be synced", and things start to work after that.
Note: we are using Options.NewCache to filter on those CRDs.
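Roughly, the pre-Start loop I described looks like this (a minimal sketch; mygroup.example.com/v1, Kind=MyKind stands in for one of our actual CRD kinds):

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	// Placeholder group/kind standing in for one of our CRDs.
	gk := schema.GroupKind{Group: "mygroup.example.com", Kind: "MyKind"}

	// Poll the manager's RESTMapper until the CRD is discoverable.
	for {
		if _, err := mgr.GetRESTMapper().RESTMapping(gk, "v1"); err == nil {
			break
		}
		time.Sleep(time.Second)
	}

	// Even with this loop in place, Start still fails as described above.
	if err := mgr.Start(context.Background()); err != nil {
		panic(err)
	}
}
```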
/kind support
timed out waiting for cache to be synced
This timeout was deliberately added, because before that, it would just silently not work. You can adjust this timeout to a very high value if that is what you want.
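For illustration, a minimal sketch of raising the per-controller timeout via builder options (the watched kind and the no-op reconciler are placeholders):

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	// Placeholder CRD kind, watched via unstructured to keep the sketch short.
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "mygroup.example.com", Version: "v1", Kind: "MyKind",
	})

	err = ctrl.NewControllerManagedBy(mgr).
		For(obj).
		WithOptions(controller.Options{
			// The default is 2 minutes; raise it if the CRD may show up late.
			CacheSyncTimeout: 10 * time.Minute,
		}).
		Complete(reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
			return reconcile.Result{}, nil // placeholder reconcile logic
		}))
	if err != nil {
		panic(err)
	}

	if err := mgr.Start(context.Background()); err != nil {
		panic(err)
	}
}
```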
That's not what I want. I want to find a way for the manager to recover quickly after the CRD is applied, instead of waiting for two minutes.
I would expect that to work, but it's definitely something we do not test. Feel free to debug and file a fix.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm hitting this issue too now.
In my case we have a manager with multiple controllers. One controller is responsible for installing CRDs, while another controller sets up a watch on a resource belonging to one of the CRDs that the first controller installs (a minimal sketch of this shape follows below).
As such I'm seeing:
if kind is a CRD, it should be installed before calling Start
which is expected.
What is not expected is that once the CRD gets installed (by the first controller), the manager keeps retrying without finding it, and eventually fails to start with "timed out waiting for cache to be synced", in my case causing the operator to exit non-zero.
I also tried bumping CacheSyncTimeout. But no matter how long the timeout is, the installed CRD is never detected.
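To make the shape of the setup concrete, here is a minimal sketch (the group, kind, and no-op reconcilers are placeholders; the real controllers do more than this):

```go
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func main() {
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)
	_ = apiextensionsv1.AddToScheme(scheme)

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	noop := reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
		return reconcile.Result{}, nil // stands in for the real reconcile logic
	})

	// Controller 1: watches CRDs; the real controller applies ours to the cluster.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&apiextensionsv1.CustomResourceDefinition{}).
		Complete(noop); err != nil {
		panic(err)
	}

	// Controller 2: watches a resource of a CRD that controller 1 installs.
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "mygroup.example.com", Version: "v1", Kind: "MyKind",
	})
	if err := ctrl.NewControllerManagedBy(mgr).
		For(obj).
		Complete(noop); err != nil {
		panic(err)
	}

	// If MyKind's CRD is absent at Start, controller 2's cache never syncs,
	// even after controller 1 installs the CRD.
	if err := mgr.Start(context.Background()); err != nil {
		panic(err)
	}
}
```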
Would we be able to reopen this? @vincepri @sbueringer
Thanks!
What controller-runtime version is this? The cache's RESTMapper should reload the mapping.
@alvaroaleman sigs.k8s.io/controller-runtime v0.17.4
Hm, works for me (with CR v0.18.4 in Cluster API). It might depend on some details of which apiGroups / CRs exist and what exactly is added later.
Can you try to create a minimal controller + CRDs to reproduce?