
data race with informer usage

Icarus9913 opened this issue 2 years ago

Environment

client-go version: k8s.io/client-go v0.24.0

Description

I use a SharedInformerFactory in my project to watch Pod events, and the Go race detector reports data races whose stacks point into client-go. The output is below.
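
For context, here is a minimal sketch of this kind of setup, assuming the usual SharedInformerFactory pattern; the real code lives in spiderpool's pkg/gcmanager/pod_informer.go (it shows up in the trace below) and may differ:

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config; out-of-cluster setups would use clientcmd instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One shared factory; the Pod informer and its event handler hang off it.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stopCh := make(chan struct{})
	// Start every informer registered on the factory and wait for the caches to sync.
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)

	// Block until interrupted, then stop the informers.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	<-sigCh
	close(stopCh)
}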

Logs

WARNING: DATA RACE
Write at 0x00c0003975e0 by goroutine 157:
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:667 +0x234
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:668 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Previous write at 0x00c0003975e0 by goroutine 152:
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:667 +0x234
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:668 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Goroutine 157 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0xdc
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:55 +0xdb
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:436 +0x744
  k8s.io/client-go/informers.(*sharedInformerFactory).Start.func2()
      /src/vendor/k8s.io/client-go/informers/factory.go:134 +0x59

Goroutine 152 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0xdc
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:55 +0xdb
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:436 +0x744
  github.com/spidernet-io/spiderpool/pkg/gcmanager.(*SpiderGC).startPodInformer.func2()
      /src/pkg/gcmanager/pod_informer.go:29 +0x59
==================
WARNING: DATA RACE
Read at 0x00c0003bb748 by goroutine 162:
  k8s.io/utils/buffer.(*RingGrowing).WriteOne()
      /src/vendor/k8s.io/utils/buffer/ring_growing.go:55 +0x6a
  k8s.io/client-go/tools/cache.(*processorListener).pop()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:800 +0x228
  k8s.io/client-go/tools/cache.(*processorListener).pop-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Previous write at 0x00c0003bb748 by goroutine 139:
  k8s.io/utils/buffer.(*RingGrowing).ReadOne()
      /src/vendor/k8s.io/utils/buffer/ring_growing.go:41 +0x2e4
  k8s.io/client-go/tools/cache.(*processorListener).pop()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:787 +0x4e7
  k8s.io/client-go/tools/cache.(*processorListener).pop-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Goroutine 162 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0xdc
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:665 +0xf1
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:668 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Goroutine 139 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0xdc
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:665 +0xf1
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:668 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73
==================
==================
WARNING: DATA RACE
Read at 0x00c0003bb740 by goroutine 162:
  k8s.io/utils/buffer.(*RingGrowing).WriteOne()
      /src/vendor/k8s.io/utils/buffer/ring_growing.go:70 +0x38e
  k8s.io/client-go/tools/cache.(*processorListener).pop()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:800 +0x228
  k8s.io/client-go/tools/cache.(*processorListener).pop-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Previous write at 0x00c0003bb740 by goroutine 139:
  k8s.io/utils/buffer.(*RingGrowing).ReadOne()
      /src/vendor/k8s.io/utils/buffer/ring_growing.go:48 +0x4c7
  k8s.io/client-go/tools/cache.(*processorListener).pop()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:787 +0x4e7
  k8s.io/client-go/tools/cache.(*processorListener).pop-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Goroutine 162 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0xdc
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:665 +0xf1
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:668 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73

Goroutine 139 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0xdc
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:665 +0xf1
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /src/vendor/k8s.io/client-go/tools/cache/shared_informer.go:668 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x73
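
Reading the goroutine creation stacks above, both racing goroutines come out of sharedIndexInformer.Run: one was started via sharedInformerFactory.Start (informers/factory.go:134) and one directly from gcmanager.(*SpiderGC).startPodInformer (pkg/gcmanager/pod_informer.go:29). One way to get exactly this kind of race inside (*sharedProcessor).run and (*processorListener).pop is to run the same shared informer from two places at once. The sketch below contrasts that double-start shape with starting the informer once through the factory; the function names are hypothetical and are not the actual spiderpool code.

// Hypothetical sketch only; the names below are illustrative, not spiderpool's code.
package gcsketch

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// startPodInformerTwice shows the racy shape the trace suggests: the factory
// starts the Pod informer, and the very same informer is also run by hand, so
// two goroutines end up inside (*sharedProcessor).run for a single informer.
func startPodInformerTwice(factory informers.SharedInformerFactory, stopCh <-chan struct{}) {
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Start(stopCh)      // runs podInformer in its own goroutine
	go podInformer.Run(stopCh) // runs the same informer a second time
}

// startPodInformerOnce starts the informer exactly once, via the factory, and
// waits for its cache to sync before handing it back.
func startPodInformerOnce(factory informers.SharedInformerFactory, stopCh <-chan struct{}) cache.SharedIndexInformer {
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
	return podInformer
}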

Icarus9913 · Aug 04 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Nov 02 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Dec 02 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Jan 01 '23

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:


/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Jan 01 '23