controller-runtime
Improve docs for MaxConcurrentReconciles
It would be very nice if we could provide further information about MaxConcurrentReconciles here, so that we have better Go docs for it.
People have been asking when and how to use this option. We might add that it is not possible to have two or more reconcile loops handling the same object at the same time, along with some further explanation. People have also been writing their own docs on the topic, so it would be nice to have something official, e.g.: https://openkruise.io/en-us/blog/blog2.html
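For reference, this is a minimal sketch of where the option is set today; the PodReconciler type and the choice of corev1.Pod are only illustrative assumptions, not something proposed by this issue:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller"
)

// PodReconciler is a placeholder reconciler used only for this example.
type PodReconciler struct {
	client.Client
}

func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... real reconciliation logic would go here ...
	return ctrl.Result{}, nil
}

// SetupWithManager registers the controller with several concurrent workers.
// The shared workqueue still guarantees that a given object key is never
// handled by two workers at the same time.
func (r *PodReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Pod{}).
		WithOptions(controller.Options{MaxConcurrentReconciles: 4}).
		Complete(r)
}
```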
@camilamacedo86
Any progress on this issue? I want to share my experience, since it cost me several days: if you want to improve the performance of your controller, you can set MaxConcurrentReconciles to a larger number, but do not forget to also increase the QPS and Burst of the rest client, or your reconciles will block when they try to update resources at the end of reconciling.
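As a rough sketch of that advice (the concrete numbers 50/100 are arbitrary placeholders, not recommended values): the client-side rate limits live on the rest.Config the manager is built from, so they can be raised before constructing it:

```go
package controllers

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

// newManager builds a manager whose rest client allows more API traffic,
// so that extra reconcile workers are not starved by client-side throttling.
func newManager() (ctrl.Manager, error) {
	cfg := ctrl.GetConfigOrDie()
	// client-go defaults to QPS=5 and Burst=10 when these are left at zero;
	// the values below are examples only, not recommendations.
	cfg.QPS = 50
	cfg.Burst = 100
	return ctrl.NewManager(cfg, ctrl.Options{})
}
```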
/assign
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
It is not possible to have two or more reconcile loops that handle the same object at the same time
Hey @camilamacedo86! Thanks for attaching that blog post, it was very helpful and informative. However, from what I understand, isn't it possible to have two or more loops handling the same object, given the work queue implementation? Then again, considering the dirty and processing structures, I'm not sure the same object can actually be handled by two separate loops at the same time. Please correct me if I've misunderstood something here. Thanks!
The workqueue is specifically designed to protect against concurrently processing the same object. When an object is added to the workqueue, it is first placed in the dirty set and gets enqueued only if it is not in the processing set. When that object is pulled from the queue, it is transitioned from the dirty set to the processing set. This prevents the object from being enqueued if it is added again. Once the object is finished processing, it is removed from processing, and if it is still in the dirty set, it is added to the queue again.
Relevant source code: https://github.com/kubernetes/client-go/blob/v0.22.0/util/workqueue/queue.go#L113-L182
This comment also implies that the same object can't be processed at the same time: https://github.com/kubernetes-sigs/controller-runtime/blob/v0.9.6/pkg/internal/controller/controller.go#L213
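Here is a minimal sketch that exercises this dedup behaviour directly with client-go's workqueue (the key "default/foo" is just an illustrative placeholder):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.New()

	// Adding the same key twice before it is picked up collapses into one
	// queue entry, because the key is already in the dirty set.
	q.Add("default/foo")
	q.Add("default/foo")
	fmt.Println(q.Len()) // 1

	// Get moves the key from the dirty set to the processing set.
	key, _ := q.Get()

	// Re-adding while the key is being processed only marks it dirty again;
	// it is NOT re-enqueued, so no other worker can pick it up concurrently.
	q.Add("default/foo")
	fmt.Println(q.Len()) // 0

	// Done removes the key from the processing set; because it is dirty,
	// it is put back on the queue for another round.
	q.Done(key)
	fmt.Println(q.Len()) // 1
}
```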
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
People have been asking when and how to use this option. We might add that it is not possible to have two or more reconcile loops handling the same object at the same time, along with some further explanation. People have also been writing their own docs on the topic, so it would be nice to have something official, e.g.: https://openkruise.io/en-us/blog/blog2.html
Also, the blog link has now changed to https://openkruise.io/blog/learning-concurrent-reconciling :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think this would still be valuable. /reopen /remove-lifecycle rotten
@schrej: Reopened this issue.
In response to this:
I think this would still be valuable. /reopen /remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten