
Operator logging terminates unexpectedly

This seems related to #7607, but I didn't dig into what the fix was.

In what area(s)?

/area operator

What version of Dapr?

1.16.3

Expected Behavior

When I deploy Dapr to a k8s cluster using Helm, the operator should continue to emit log output for as long as it runs; logging should not silently stop.

Actual Behavior

I deployed Dapr via Helm chart v1.16.3. The operator appeared to start normally:

time="2025-12-04T13:36:15.398447803Z" level=info msg="Starting Dapr Operator -- version 1.16.3 -- commit 08ccc4577dc702ca35116465ce0b7d7ee89f9b8e" instance=dapr-operator-5cd5bb656f-92q7l scope=dapr.operator type=log ver=1.16.3 
time="2025-12-04T13:36:15.398496283Z" level=info msg="Log level set to: info" instance=dapr-operator-5cd5bb656f-92q7l scope=dapr.operator type=log ver=1.16.3

However, immediately after deploying an application, the operator log shows the following message, and logging stops.

[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
    >  goroutine 360 [running]:
    >  runtime/debug.Stack()
    >      /opt/hostedtoolcache/go/1.24.9/x64/src/runtime/debug/stack.go:26 +0x5e
    >  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/log.go:60 +0xcd
    >  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithValues(0xc0002f5bc0, {0xc000c04e20, 0x2, 0x2})
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:168 +0x49
    >  github.com/go-logr/logr.Logger.WithValues(...)
    >      /home/runner/go/pkg/mod/github.com/go-logr/[email protected]/logr.go:332
    >  sigs.k8s.io/controller-runtime/pkg/builder.(*TypedBuilder[...]).doController.func1()
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/builder/controller.go:449 +0x1b3
    >  sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler(0x227e280, {0x226b370, 0xc000517db0}, {{{0xc0005340f0, 0x21}, {0xc0005340f0, 0x21}}})
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:293 +0x13c
    >  sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem(0x227e280, {0x226b370, 0xc000517db0})
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:263 +0x20d
    >  sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2()
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:224 +0x85
    >  created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2 in goroutine 219
    >      /home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:220 +0x48d

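For context, the warning comes from controller-runtime's delegating logger: if log.SetLogger is never called during startup, controller-runtime eventually prints the trace above, swaps in a no-op logger, and silently drops all further controller log output, which would explain why the operator goes quiet. A minimal sketch of the wiring controller-runtime expects, assuming the zap adapter bundled with controller-runtime rather than Dapr's actual logger (which would need its own logr bridge):

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// controller-runtime requires a root logger to be installed early; if this
	// call is missing, the delegating log sink prints the warning above and
	// then discards all later controller log output.
	ctrl.SetLogger(zap.New(zap.UseDevMode(false)))

	// ... build and start the manager as usual; ctrl.Log and any loggers
	// derived from it now delegate to the logger set above ...
}
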
Steps to Reproduce the Problem

The issue may be k3s- or k3d-specific, as I haven't tested it elsewhere.

Create a k3d cluster:

k3d --version
k3d version v5.8.3
k3s version v1.31.5-k3s1 (default)
k3d cluster create

Install Dapr using the 1.16.3 Helm chart via the helm CLI:

helm upgrade --install dapr dapr/dapr \
--version=1.16.3 \
--namespace dapr-system \
--create-namespace \
--wait

You may need to wait a minute or so after installing Dapr before triggering the bug; it seems timing-dependent.

After waiting, create a new deployment in the cluster:

kubectl create deployment my-nginx --image=nginx:latest

Note that adding Dapr annotations to the pod spec isn't required to trigger the issue.

View the operator logs:

kubectl logs deployment/dapr-operator --namespace dapr-system

Release Note

RELEASE NOTE: FIX Operator logging terminates unexpectedly.

wlfgang · Dec 04 '25 14:12