
How to restart Informers safely

Open nammn opened this issue 2 years ago • 2 comments

Please correct me if I am wrong, but we are running in a somewhat complicated situation, as follows:

  • one service is the leader, which dispatches actions to the n followers
  • the followers use client-go and may use different informers, i.e. follower 1 uses informer a and follower 2 uses informer b
  • there are situations where we stop informer a on follower 1 (through its stop channel)
  • there are situations after that where we need to start informer a on follower 1 again (a rough sketch of this stop/restart pattern follows right after this list)
We cannot rely on

func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {

because the startedInformers map entry is not removed after an informer has stopped syncing; the stale true entry means a later Start() call will never run that informer type again.

Right now we work around this by running the informers ourselves, i.e.

		informer := collector.Informer()
		go informer.Run(cb.stopCh)

But because we want to share the informers with different goroutines/actions, we use the same API client/informers everywhere. The other callers start them through the factory, which is AFAICT concurrency-safe (it takes a lock and starts each informer type only once, tracked in the startedInformers map). Our direct informer.Run call, however, bypasses that bookkeeping entirely. The relevant code is:

func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {
...
		if !f.startedInformers[informerType] {
			go informer.Run(stopCh)
			f.startedInformers[informerType] = true
		}
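
For context, this is roughly the sequence that bites us; a sketch assuming a pod informer, with demonstrateStaleStart being just an illustrative name and error handling omitted:

package example

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// demonstrateStaleStart: once an informer type is marked as started, a later
// Start() call will not run it again, even though the original informer has
// long been stopped.
func demonstrateStaleStart(client kubernetes.Interface) {
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	_ = factory.Core().V1().Pods().Informer() // register the pod informer with the factory

	stopCh := make(chan struct{})
	factory.Start(stopCh) // marks the pod informer type as started and runs it
	close(stopCh)         // the informer stops ...

	factory.Start(make(chan struct{})) // ... but this is a no-op: the stale map entry is still true
}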

My Questions:

  • Is there an alternative way to solve the above besides centralising all informer starts, i.e. by wrapping func (f *sharedInformerFactory) Start?
  • We could refactor all of the Start calls and make sure each informer is started exactly once, i.e. by wrapping them or by other means. This is difficult due to the size of our code base, hence my next suggestion.
  • Would it make sense to add a new method to the factory interface, something like func (f *sharedInformerFactory) Stop(informer Informer) bool, which stops the informer and removes it from the startedInformers map? With this we could keep relying on the concurrency-safe Start() method. (A rough sketch of the semantics I have in mind follows below.)
  • Something else?
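
To make the third option concrete, here is a rough sketch of the kind of wrapper we would otherwise have to maintain ourselves. informerManager and its methods are our own hypothetical names, not part of client-go; a stopped shared informer is rebuilt via a registered constructor because it cannot be re-run:

package example

import (
	"sync"

	"k8s.io/client-go/tools/cache"
)

type informerManager struct {
	mu      sync.Mutex
	newFunc map[string]func() cache.SharedIndexInformer // how to (re)build each informer
	stopChs map[string]chan struct{}                    // entry present while running
}

func newInformerManager() *informerManager {
	return &informerManager{
		newFunc: map[string]func() cache.SharedIndexInformer{},
		stopChs: map[string]chan struct{}{},
	}
}

// Register remembers how to construct the informer for a key, e.g. via a
// fresh SharedInformerFactory, so it can be rebuilt after a Stop.
func (m *informerManager) Register(key string, constructor func() cache.SharedIndexInformer) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.newFunc[key] = constructor
}

// Start runs the informer for key unless it is already running.
func (m *informerManager) Start(key string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, running := m.stopChs[key]; running {
		return
	}
	constructor, ok := m.newFunc[key]
	if !ok {
		return
	}
	stopCh := make(chan struct{})
	m.stopChs[key] = stopCh
	go constructor().Run(stopCh) // build a fresh informer; a stopped one cannot be re-run
}

// Stop closes the informer's stop channel and forgets it, so a later
// Start(key) is allowed again -- roughly the Stop(informer) semantics asked
// about above.
func (m *informerManager) Stop(key string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	stopCh, running := m.stopChs[key]
	if !running {
		return false
	}
	close(stopCh)
	delete(m.stopChs, key)
	return true
}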

nammn · Jul 08 '22 09:07

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Oct 06 '22 10:10

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Nov 05 '22 10:11

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Dec 05 '22 11:12

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Dec 05 '22 11:12