cloud-provider-openstack

[occm] portProtocolMapping="<internal error: json: unsupported type: map[openstack.listenerKey]*listeners.Listener>"

Open · judge-red opened this issue 1 year ago · 1 comment

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

While debugging a (probably unrelated) issue, I found these log lines, which might be a bug, so I thought I'd report them:

I1019 07:43:31.502200 11 loadbalancer.go:1691] "Load balancer ensured" lbID="dbeba37e-aa8e-45a2-bc76-a812e8d58df3" isLBOwner=true createNewLB=false
I1019 07:43:31.502307 11 loadbalancer.go:1702] "Existing listeners" portProtocolMapping="<internal error: json: unsupported type: map[openstack.listenerKey]*listeners.Listener>"
I1019 07:43:31.515417 11 loadbalancer.go:1691] "Load balancer ensured" lbID="ffc5d6ec-86f2-4cf3-b9d0-5e6f11b7b324" isLBOwner=true createNewLB=false
I1019 07:43:31.515449 11 loadbalancer.go:1702] "Existing listeners" portProtocolMapping="<internal error: json: unsupported type: map[openstack.listenerKey]*listeners.Listener>"
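
This looks like klog's structured logging failing to JSON-marshal the value passed for portProtocolMapping: encoding/json can only encode maps whose keys are strings, integers, or types implementing encoding.TextMarshaler, so a map keyed by the listenerKey struct makes the marshaller return an error and the logger prints the "<internal error: ...>" placeholder instead of the value. A minimal Go sketch that reproduces the same marshalling error, using hypothetical stand-ins for the real listenerKey and Listener types:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-ins for the types named in the log message
// (openstack.listenerKey and *listeners.Listener).
type listenerKey struct {
	Protocol string
	Port     int
}

type listener struct {
	ID string
}

func main() {
	m := map[listenerKey]*listener{
		{Protocol: "TCP", Port: 443}: {ID: "example"},
	}
	// encoding/json accepts map keys only if they are strings, integers,
	// or implement encoding.TextMarshaler; a struct key is none of these,
	// so Marshal returns an UnsupportedTypeError.
	if _, err := json.Marshal(m); err != nil {
		fmt.Println(err) // json: unsupported type: map[main.listenerKey]*main.listener
	}
}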

What you expected to happen:

No internal error :)

How to reproduce it:

No idea, but it might require using the OVN provider.

Anything else we need to know?:

Sorry for the terrible report; I have no idea what this is about. I just saw an unexpected log line that appears quite frequently.

Environment:

  • openstack-cloud-controller-manager (or other related binary) version: v1.30.0
  • OpenStack version: Zed?
  • Others: load-balancers API 2.26

judge-red · Oct 19 '24

Attaching a log file (-v=4) where this is often encountered.

openstack-cloud-controller-manager-kmswx.log

judge-red · Oct 19 '24

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jan 17 '25

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Feb 16 '25

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Mar 18 '25

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Mar 18 '25