
"Observed a panic in reconciler: odd number of arguments passed as key-value pairs for logging"

newera-begins opened this issue 10 months ago · 4 comments

This issue occurs only when upgradeConfig.Spec.UpgradeAt is in an invalid format: parsing the upgradeAt value then fails, and the operator panics with the error below.
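
For reproduction, here is a hedged sketch of the triggering resource. The spec shape follows the MUO README; the desired channel/version values are illustrative placeholders, while the malformed upgradeAt is the exact value that later shows up as ignoredkey in the log below:

```yaml
apiVersion: upgrade.managed.openshift.io/v1alpha1
kind: UpgradeConfig
metadata:
  name: managed-upgrade-config
  namespace: openshift-managed-upgrade-operator
spec:
  type: "OSD"
  # Malformed timestamp: "017" is not a valid day-of-month, so the
  # scheduler fails to parse it and the operator panics while logging.
  upgradeAt: "2025-02-017T11:20:00Z"
  PDBForceDrainTimeout: 60
  desired:
    channel: "stable-4.14"  # illustrative placeholder
    version: "4.14.0"       # illustrative placeholder
```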

There are a few reasons this has been missed so far:

  • OCM mode is the default, and OCM always populates the correct time format.
  • LOCAL mode is comparatively less used, but regardless, that is where we hit the error below.
ts=2025-02-17T11:31:36.694027421Z level=info msg="Starting workers" controller=node controllerGroup= controllerKind=Node workercount=1
ts=2025-02-17T11:31:37.096761184Z level=info logger=upgrade-config-manager msg="Starting the upgradeConfigManager"
ts=2025-02-17T11:31:38.093767169Z level=info logger=controller_upgradeconfig msg="Current cluster status" Request.Namespace=openshift-managed-upgrade-operator Request.Name=managed-upgrade-config status=New
ts=2025-02-17T11:31:38.093982303Z level=dpanic logger=scheduler msg="odd number of arguments passed as key-value pairs for logging" ignoredkey=2025-02-017T11:20:00Z stacktrace="github.com/openshift/managed-upgrade-operator/pkg/scheduler.(*scheduler).IsReadyToUpgrade\n\t/workdir/pkg/scheduler/scheduler.go:40\ngithub.com/openshift/managed-upgrade-operator/controllers/upgradeconfig.(*ReconcileUpgradeConfig).Reconcile\n\t/workdir/controllers/upgradeconfig/upgradeconfig_controller.go:165\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"
ts=2025-02-17T11:31:38.094045953Z level=info msg="Observed a panic in reconciler: odd number of arguments passed as key-value pairs for logging" controller=upgradeconfig controllerGroup=upgrade.managed.openshift.io controllerKind=UpgradeConfig UpgradeConfig="{managed-upgrade-config openshift-managed-upgrade-operator}" namespace=openshift-managed-upgrade-operator name=managed-upgrade-config reconcileID=71eee6df-804e-4219-80da-68e8bda52fe4
panic: odd number of arguments passed as key-value pairs for logging [recovered]
	panic: odd number of arguments passed as key-value pairs for logging

goroutine 136 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x1cab620?, 0xc000bd5670?})
	/usr/lib/golang/src/runtime/panic.go:770 +0x132
go.uber.org/zap/zapcore.CheckWriteAction.OnWrite(0x5b?, 0xb?, {0x0?, 0x0?, 0xc00074a120?})
	pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:196 +0x54
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0008a2410, {0xc00067c840, 0x1, 0x1})
	pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:262 +0x24e
go.uber.org/zap.(*Logger).DPanic(0x211d05b?, {0x2187b85?, 0x1cab620?}, {0xc00067c840, 0x1, 0x1})
	pkg/mod/go.uber.org/[email protected]/logger.go:274 +0x51
github.com/go-logr/zapr.(*zapLogger).handleFields(0xc0003b97a0, 0xffffffffffffffff, {0xc000bd5650, 0x1, 0xc0008892a0?}, {0xc00067c640, 0x1, 0x16?})
	pkg/mod/github.com/go-logr/[email protected]/zapr.go:147 +0xd33
github.com/go-logr/zapr.(*zapLogger).Error(0xc0003b97a0, {0x2456e80, 0xc0004296d0}, {0x213f90f?, 0xc000889348?}, {0xc000bd5650, 0x1, 0x1})
	pkg/mod/github.com/go-logr/[email protected]/zapr.go:207 +0x18d
sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).Error(0xc0003fa7c0, {0x2456e80, 0xc0004296d0}, {0x213f90f, 0x1e}, {0xc000bd5650, 0x1, 0x1})
	pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:142 +0xed
github.com/go-logr/logr.Logger.Error({{0x2477518?, 0xc0003fa7c0?}, 0xc00067c540?}, {0x2456e80, 0xc0004296d0}, {0x213f90f, 0x1e}, {0xc000bd5650, 0x1, 0x1})
	pkg/mod/github.com/go-logr/[email protected]/logr.go:301 +0xec
github.com/openshift/managed-upgrade-operator/pkg/scheduler.(*scheduler).IsReadyToUpgrade(0x2477950?, 0xc000a88300, 0x68c61714000)
	/workdir/pkg/scheduler/scheduler.go:40 +0x10d
github.com/openshift/managed-upgrade-operator/controllers/upgradeconfig.(*ReconcileUpgradeConfig).Reconcile(0xc0001ea420, {0x2473368, 0xc00055e630}, {{{0xc00005d080?, 0x5?}, {0xc000660300?, 0xc000687d10?}}})
	/workdir/controllers/upgradeconfig/upgradeconfig_controller.go:165 +0x848
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x2477950?, {0x2473368?, 0xc00055e630?}, {{{0xc00005d080?, 0xb?}, {0xc000660300?, 0x0?}}})
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000143720, {0x24733a0, 0xc000428c30}, {0x1eb6c80, 0xc0001200e0})
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316 +0x3bc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000143720, {0x24733a0, 0xc000428c30})
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1be
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 106
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x50c
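
For context on the mechanics: logr treats every argument after the message as alternating key/value pairs, and zapr raises a DPanic, which is a real panic on a development logger, when it receives an odd count. Below is a minimal standalone sketch of the bug and the fix; the names are hypothetical and this is not the actual scheduler.go code, only the malformed timestamp is taken from the log above.

```go
package main

import (
	"time"

	"github.com/go-logr/zapr"
	"go.uber.org/zap"
)

func main() {
	// A development zap logger turns DPanic into a real panic,
	// matching the level=dpanic line in the log above.
	zl, _ := zap.NewDevelopment()
	log := zapr.NewLogger(zl)

	raw := "2025-02-017T11:20:00Z" // malformed: "017" is not a valid day
	if _, err := time.Parse(time.RFC3339, raw); err != nil {
		// BUG (panics): `raw` is a bare value with no key, so the
		// trailing argument count is odd and zapr reports
		// "odd number of arguments passed as key-value pairs for logging",
		// showing raw as the ignored key -- exactly as in the log above.
		log.Error(err, "failed to parse upgradeAt", raw)

		// FIX (never reached when run, shown for contrast): name the value.
		log.Error(err, "failed to parse upgradeAt", "upgradeAt", raw)
	}
}
```

This sketch follows the same path as the stack trace above: zapr's handleFields detects the odd argument count and calls zap's DPanic, which controller-runtime logs as "Observed a panic in reconciler" before re-panicking.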

newera-begins · Feb 24 '25 10:02

@newera-begins - Thanks for reporting this! It seems very similar to what was fixed in PR https://github.com/openshift/managed-upgrade-operator/pull/436. Can you confirm whether you are using the latest MUO code or a particular commit in your testing?

ravitri · Feb 24 '25 10:02

I forked the latest version just an hour ago.

newera-begins · Feb 24 '25 11:02

That fix was in the scheduler pkg; this is a reconcile error in the controller pkg. cc @a7vicky

newera-begins · Feb 24 '25 11:02

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot · May 26 '25 01:05

/close The fix is already merged in #497

a7vicky · May 29 '25 07:05

@a7vicky: Closing this issue.

In response to this:

/close The fix is already merged in #497

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

openshift-ci[bot] · May 29 '25 07:05