client-go
How to restart Informers safely
Please correct me if I am wrong, but we are running into a somewhat complicated situation:
- one service is the leader, which dispatches actions to the followers
- the followers use client-go and may use different informers, i.e. follower1 uses informer `a` and follower2 uses informer `b`
- there are situations where we stop informer `a` (through its stop channel) on follower1
- there are situations after that where we need to start informer `a` on follower1 again
We cannot rely on `func (f *sharedInformerFactory) Start(stopCh <-chan struct{})`, because the `startedInformers` map is never cleared after an informer has stopped syncing, so its entry goes stale.
Right now we work around this by running the informers ourselves, i.e.
informer := collector.Informer()
go informer.Run(cb.stopCh)
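Spelled out a little more, the direct-run pattern with its stop channel might look like the sketch below (the `runDirect` helper and the `informerutil` package are our own illustration, not client-go API):

```go
package informerutil

import "k8s.io/client-go/tools/cache"

// runDirect starts an informer outside the factory and returns a function
// that stops it by closing the stop channel.
func runDirect(informer cache.SharedIndexInformer) (stop func()) {
	stopCh := make(chan struct{})
	go informer.Run(stopCh)
	// Closing stopCh makes informer.Run return. If the informer came from a
	// factory, the factory's startedInformers entry is not cleared by this.
	return func() { close(stopCh) }
}
```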
But because we want to share the informers with different goroutines/actions, they all use the same API client and informers. Those callers start informers through the factory, which is AFAICT concurrency-safe due to the bookkeeping around the `startedInformers` map, whereas our direct call above is not. The relevant part of the factory is:
func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {
    ...
    if !f.startedInformers[informerType] {
        go informer.Run(stopCh)
        f.startedInformers[informerType] = true
    }
    ...
}
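For illustration only (none of this is client-go API): a minimal start-once guard that mirrors the factory's bookkeeping could make the direct starts safe to call from several goroutines, assuming every direct start can be routed through it:

```go
package informerutil

import (
	"sync"

	"k8s.io/client-go/tools/cache"
)

// startGuard mirrors the factory's "start each informer once" bookkeeping
// for code paths that run informers directly instead of via the factory.
type startGuard struct {
	mu      sync.Mutex
	started map[string]bool
}

// StartOnce runs the informer in a goroutine the first time it is called
// for a given name; later calls with the same name are no-ops.
func (g *startGuard) StartOnce(name string, informer cache.SharedIndexInformer, stopCh <-chan struct{}) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.started == nil {
		g.started = make(map[string]bool)
	}
	if g.started[name] {
		return
	}
	go informer.Run(stopCh)
	g.started[name] = true
}
```

Like the factory, this guard never clears an entry once the informer has stopped, which is exactly the gap the questions below are about.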
My questions:
- Is there an alternative way to solve the above besides centralising all informer starts, i.e. by wrapping `func (f *sharedInformerFactory) Start`?
- Refactoring all of the starts to make sure each informer is started only once (by wrapping or other means) is difficult due to the size of the code base, hence my next suggestion.
- Would it make sense to add a new method to the factory interface, something like `func (f *sharedInformerFactory) Stop(informer Informer) bool`, which removes the informer from the map? With this we could safely rely on the concurrency-safe `Start()` method (a rough sketch follows after this list).
- Something else?
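A rough sketch of the kind of wrapper I have in mind (hypothetical, not part of client-go; since re-running a stopped shared informer instance is generally not supported, the sketch assumes a constructor that builds a fresh informer for every run):

```go
package informerutil

import (
	"sync"

	"k8s.io/client-go/tools/cache"
)

// stoppableSet tracks one stop channel per informer name, so an informer can
// be stopped and later started again without affecting the others.
type stoppableSet struct {
	mu           sync.Mutex
	constructors map[string]func() cache.SharedIndexInformer // assumed: fresh informer per run
	running      map[string]chan struct{}
}

// Start runs the named informer if it is not already running.
func (s *stoppableSet) Start(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.running[name]; ok {
		return // already running
	}
	newInformer, ok := s.constructors[name]
	if !ok {
		return // unknown informer name
	}
	if s.running == nil {
		s.running = make(map[string]chan struct{})
	}
	inf := newInformer() // build a fresh instance for this run
	stopCh := make(chan struct{})
	go inf.Run(stopCh)
	s.running[name] = stopCh
}

// Stop closes the informer's stop channel and clears its entry, so a later
// Start can run it again. It returns false if the informer was not running.
func (s *stoppableSet) Stop(name string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	stopCh, ok := s.running[name]
	if !ok {
		return false
	}
	close(stopCh)
	delete(s.running, name)
	return true
}
```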
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.