Support a per-request shouldBypassCache option in the client Reader implementation
When getting an Object through the default client, whether the read comes from the cache is determined by the client.Options passed when the Client is created. In our scenario, for the same Object we usually want to read from the local cache, but in a few cases with strong consistency requirements we need to read directly from the apiserver. At present we create two clients, one with a cache and one without. Could the same client support both behaviors, for example by putting a value in the context or adding a GetOption/ListOption that decides whether to bypass the cache for that request?
I'm not sure whether this is a general requirement. If it is, I can implement it.
There's also the APIReader which never uses the cache.
Not sure about adding an option. The problem is it wouldn't make sense for a lot of implementations of the Client interface
cluster.GetClient() and cluster.GetAPIReader() are essentially two clients, one with a cache and one without. That is also our current implementation, but it doesn't feel elegant because we have to build two clients (we only adopted the controller-runtime client package to replace client-go), so I'd like to discuss whether there is a solution that needs only one client.
What is your use-case for wanting this?
@alvaroaleman Sorry for the late reply. We deploy our business workloads as a set of pods, with one pod acting as the leader and the others as followers. We also have a component responsible for the high availability of the business within this set of pods. That component uses controller-runtime/pkg/client to access the Kubernetes API server and labels/annotates the pods to record information such as the role.
Most of the time this component can get pod information through the client cache. However, when the leader is not working, the high-availability component needs to access the apiserver directly to guarantee the consistency of the labels and annotations on the pods. Currently our solution is to create two clients: one configured with a cache and one without. This feels inelegant and results in a lot of redundant client-configuration code, so I'd like to discuss whether a per-request cache toggle with a single client is possible, like:
cli.Get(WithDisableCache(ctx), ...)
However, when the leader is not working, the high availability component needs to access the api-server directly to ensure the consistency of the labels and annotations on the pods
Can you elaborate on why you cannot use the cache there?
Mainly because in some scenarios the data in the cache may suffer latency and consistency issues, for example under apiserver overload or network jitter. In those scenarios we can still read outdated data from the cache, which can cause our processes to keep executing with incorrect data; we would rather get an error directly if we cannot read the latest pod annotations.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.