controller-runtime
Consistently Seeing Reflector Watch Errors on Controller Shutdown
During controller shutdown, we consistently see errors that look like
logger.go:146: 2024-03-22T21:35:31.707Z INFO cache/reflector.go:462 pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:229: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
This happens extremely consistently during shutdown, and I wouldn't expect an error-looking message to come through an INFO/WARN path. From looking at the reflector code, this "error" seems to be coming from this line. Is there a way to ensure that the runnable shutdown doesn't fire this error every time we shut down?
As an example, these "errors" are coming in our Karpenter E2E testing here: https://github.com/aws/karpenter-provider-aws/actions/runs/8396432349/job/22997824261
/kind support
@jonathan-innis I'm hitting the same problem. Will it cause memory leaks?
If the controller is shutting down, I don't think it's going to cause memory leaks. From looking through the code, it just looks like spurious error logging from the reflector as all the context cancels are happening, but I'm imagining there's a more graceful way to shut the reflector down so we don't see this.
@laihezhao Your error also looks quite different from mine. Yours appears to be caused by 500s occurring somewhere on the apiserver.
@troy0820 Got any thoughts here on how this can be improved? Ideally, we wouldn't be seeing errors for what appears to be a graceful shutdown for controller-runtime.
@jonathan-innis I am going to investigate this but this looks like it can be a bug, so I will label the issue with it so we can triage it a little better.
/kind bug
I have seen this issue happen often in our controllers: it occurs when a CustomResourceDefinition for a resource that the controller previously accessed gets deleted (regardless of whether the CR is controlled by the controller or just accessed). This seems like unnecessary behaviour; watching a resource that is already gone from the cluster shouldn't be necessary, and it definitely should not produce logs that cannot otherwise be ignored.
Generally, for resources that we did not access often, we implemented a workaround of using unstructured objects so as not to set up an unnecessary watch on them, but it would be nice to see this issue addressed somehow.
What could help is an option to disable the reflector watch.
Generally, for resources that we did not access often, we implemented a workaround of using unstructured objects so as not to set up an unnecessary watch on them, but it would be nice to see this issue addressed somehow.
I think instead of this you can just do the following:
Client: client.Options{
    Cache: &client.CacheOptions{
        DisableFor: []client.Object{
            &corev1.ConfigMap{},
            &corev1.Secret{},
        },
    },
},
The reason why using Unstructured helps is that Unstructured is not cached by default:
// CacheOptions are options for creating a cache-backed client.
type CacheOptions struct {
// Reader is a cache-backed reader that will be used to read objects from the cache.
// +required
Reader Reader
// DisableFor is a list of objects that should never be read from the cache.
// Objects configured here always result in a live lookup.
DisableFor []Object
// Unstructured is a flag that indicates whether the cache-backed client should
// read unstructured objects or lists from the cache.
// If false, unstructured objects will always result in a live lookup.
Unstructured bool
}
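For completeness, here is how the DisableFor snippet above slots into a full manager setup. This is a sketch, assuming a recent controller-runtime (one new enough to have client.CacheOptions, roughly v0.15+) and the k8s.io/api module on the module path; it is not something from this thread's codebases.

```go
// Sketch only: requires the sigs.k8s.io/controller-runtime and
// k8s.io/api modules to be available.
package main

import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Client: client.Options{
			Cache: &client.CacheOptions{
				// Reads of these types bypass the cache, so no informer
				// (and hence no reflector watch) is started for them.
				DisableFor: []client.Object{
					&corev1.ConfigMap{},
					&corev1.Secret{},
				},
			},
		},
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```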