Cache metrics (values) in Metric Server and honor `pollingInterval`

Proposal

Currently the KEDA Metrics Server scrapes metric values from external services (triggers) on each request from the k8s API server, which could overload the external service. See https://github.com/kedacore/keda/blob/2446b8ea1b4bc1354b1cfa37ddaebf41fd035bb6/pkg/provider/provider.go#L72. We might want to cache the metric values in the Metrics Server to reduce the load on the external service and respond to the k8s API server much faster.

We have a `pollingInterval` property in ScaledObject which is honored only by the KEDA Operator; we could extend this behavior to the KEDA Metrics Server as well. There could be a new property in `ScaledObject.spec.advanced` to configure this, in case users would prefer not to cache metric values and keep the original behavior (if we make the new behavior the default).
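To make the shape of such an option concrete, here is a rough sketch in Go of how a caching knob could sit next to the existing advanced config. The type and field names (`AdvancedConfigSketch`, `CachingInterval`) are purely hypothetical and only meant to illustrate the idea; they are not part of KEDA's actual API:

```go
// Hypothetical sketch only: illustrates where a caching option could live under
// ScaledObject.spec.advanced. All names here are made up for the discussion.
package v1alpha1

// AdvancedConfigSketch mimics the shape of a ScaledObject `spec.advanced` block.
type AdvancedConfigSketch struct {
	// CachingInterval (in seconds) is how long the Metrics Server may serve a
	// cached metric value before querying the external service again.
	// If nil, the current behavior (query on every HPA request) is preserved.
	CachingInterval *int32 `json:"cachingInterval,omitempty"`
}
```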

zroubalik avatar Nov 12 '21 14:11 zroubalik

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Jan 22 '22 17:01 stale[bot]

In my use case, not being able to fully control `pollingInterval` on the Prometheus metrics results in starting more replicas than required. My metrics logic requires ~60s to settle on a new correct value after starting a new replica.

kefik avatar May 12 '22 07:05 kefik

> In my use case, not being able to fully control `pollingInterval` on the Prometheus metrics results in starting more replicas than required. My metrics logic requires ~60s to settle on a new correct value after starting a new replica.

What if you specify this in the Prometheus Query?

zroubalik avatar May 12 '22 08:05 zroubalik

This looks like a related issue: https://github.com/kedacore/keda/issues/3291

bjethwan avatar Jun 28 '22 14:06 bjethwan

So the idea is the following:

  1. Introduce a new option for specifying `pollingInterval` for 1<->N scaling:
    1. this could be either on a ScaledObject level
    2. or on a Scaler (trigger) level
  2. This would be an optional value; if it is not set, the current behavior is preserved.
  3. If the option is specified, the metric value would be cached for the specified interval: when a request comes from the HPA (k8s) and the cached value is still within the interval, we return it from the cache; otherwise we poll the external service for a new value (see the sketch after this list).
  4. Since there could be multiple replicas of the same KEDA Metrics Server, we would need to share the cache somehow between the replicas. This is hard to implement, so instead we can do ->
  5. Query the metric value from the KEDA Operator and cache the value there (this also has the additional benefit of using only one connection to the external service -> currently we have 2 connections: Operator & Metrics Server). The KEDA Operator will act as the source of metrics for the Metrics Server.
  6. There will be a gRPC server hosted on the KEDA Operator, and Metrics Servers will connect to this server to get the metric values.
  7. This way we have all the metric values in a single place - the KEDA Operator. This also allows possible extensions of this part in the future (we can modify the metric values before they are exposed to the HPA).
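A minimal sketch of the caching behavior from item 3, using hypothetical types and names rather than KEDA's actual provider code: the cached value is served while it is fresher than `pollingInterval`, otherwise the external scaler is polled and the cache refreshed.

```go
// Minimal sketch (hypothetical names, not KEDA's provider code): serve a cached
// metric value while it is fresher than pollingInterval, otherwise poll the
// external service and refresh the cache.
package metricscache

import (
	"sync"
	"time"
)

type cachedMetric struct {
	value     float64
	fetchedAt time.Time
}

type metricCache struct {
	mu      sync.Mutex
	entries map[string]cachedMetric // keyed by "<scaledObject>/<metricName>"
}

// getMetric returns the cached value if it is still within pollingInterval,
// otherwise it calls poll() (the external service) and stores the fresh value.
func (c *metricCache) getMetric(key string, pollingInterval time.Duration,
	poll func() (float64, error)) (float64, error) {

	c.mu.Lock()
	defer c.mu.Unlock()

	if e, ok := c.entries[key]; ok && time.Since(e.fetchedAt) < pollingInterval {
		return e.value, nil // fresh enough: no call to the external service
	}

	v, err := poll()
	if err != nil {
		return 0, err
	}
	c.entries[key] = cachedMetric{value: v, fetchedAt: time.Now()}
	return v, nil
}
```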

💩 I am not sure how to name this new option. Honestly, the best would be to name it `pollingInterval` and rename the current property with this name to `activationPollingInterval`; this would make our spec consistent and in sync with the `threshold` & `activationThreshold` properties. But it is a huge breaking change, so I am not sure if it is doable.

zroubalik avatar Nov 08 '22 12:11 zroubalik

Sorry for the late response. As I said during the standup, I'd use the metrics server as the server, because that opens the scenario where different KEDA operator instances from different namespaces push metrics to the shared metrics server. With this change, the operator instances must know where the metrics server is (a single service in one namespace), which is easier than the metrics server knowing where the operators are (one service per namespace, across multiple namespaces).

We should think about how to push/pull the metrics between different metrics server instances. As a starting point I guess we could assume the topology is 1-1, but from a reliability PoV it is a good idea to have 2 or more instances. Maybe we could create a mesh between metrics servers to share the information, but maybe the best idea is just creating a slice of gRPC connections and pushing the metrics over all of them. In the background, we could watch the (k8s) Endpoints for the service and use the changes to establish and add new connections to the slice.
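As a rough illustration of that last idea (hypothetical wiring, not actual KEDA code): watch the Endpoints of the relevant Service and keep one gRPC connection per discovered address. Removal of stale connections and TLS are omitted for brevity.

```go
// Rough sketch only (hypothetical wiring): watch the k8s Endpoints of a Service
// and open one gRPC connection per discovered address, so metrics can be pushed
// over all of them. Cleanup of connections for removed endpoints is omitted.
package connwatch

import (
	"fmt"
	"sync"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

type connPool struct {
	mu    sync.Mutex
	conns map[string]*grpc.ClientConn // keyed by "ip:port"
}

// watchEndpoints adds a connection for every address of `service` in `namespace`
// whenever its Endpoints object is created or updated.
func (p *connPool) watchEndpoints(client kubernetes.Interface, namespace, service string, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactoryWithOptions(client, 30*time.Second,
		informers.WithNamespace(namespace))
	informer := factory.Core().V1().Endpoints().Informer()

	handle := func(obj interface{}) {
		ep, ok := obj.(*corev1.Endpoints)
		if !ok || ep.Name != service {
			return
		}
		p.mu.Lock()
		defer p.mu.Unlock()
		for _, subset := range ep.Subsets {
			for _, addr := range subset.Addresses {
				for _, port := range subset.Ports {
					target := fmt.Sprintf("%s:%d", addr.IP, port.Port)
					if _, exists := p.conns[target]; exists {
						continue
					}
					// insecure credentials just for the sketch; real code would use TLS
					conn, err := grpc.Dial(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
					if err == nil {
						p.conns[target] = conn
					}
				}
			}
		}
	}

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    handle,
		UpdateFunc: func(_, newObj interface{}) { handle(newObj) },
	})
	factory.Start(stopCh)
}
```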

WDYT?

JorTurFer avatar Nov 14 '22 22:11 JorTurFer

I do understand the point you have and I agree. Though in the first iteration I'd try to make this feature as simple as possible. Creating the unary RPC client (metrics server) -> server (operator) is much easier than the opposite direction, server (metrics server) -> client (operator), because the flow of requests originates on the metrics server side - this side is making the call for metrics. Once this proves to be functional, we can think about more complex stuff.

An alternative approach to support multi-namespace installations would be to open a client connection from the Metrics Server for each installation and send the request to the respective Operator. The filtering would be easy, as the namespace is part of the request from the k8s API when asking for metrics, so we would know (in the Metrics Server) which operator to ask for the metric. This topology also supports multiple Metrics Server replicas, as those could be independent because everything is stored in the operator. The registration of a new namespaced KEDA installation could be done 1) manually by the user by editing the Metrics Server config (a configmap?) or 2) via a separate gRPC server hosted on the Metrics Server just for registration purposes. A rough sketch of this routing is below.
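The sketch uses a hypothetical interface and names; the real gRPC client would be generated from a proto definition. The Metrics Server keeps one client per namespaced KEDA installation and forwards the unary call to the operator matching the namespace of the incoming request.

```go
// Rough sketch (hypothetical API, not KEDA's actual code): route each
// external-metrics request to the operator of the namespace in the request.
package routing

import (
	"context"
	"fmt"
)

// MetricsClient is a stand-in for a generated gRPC client exposing a unary
// "get metric value" call served by the operator.
type MetricsClient interface {
	GetMetric(ctx context.Context, scaledObjectName, metricName string) (float64, error)
}

type operatorRouter struct {
	// One client per namespaced KEDA installation, registered e.g. from a
	// configmap or via a separate registration endpoint (both options above).
	clientsByNamespace map[string]MetricsClient
}

// getMetric picks the operator client for the namespace contained in the k8s
// external-metrics request and forwards the unary call to it.
func (r *operatorRouter) getMetric(ctx context.Context, namespace, scaledObjectName, metricName string) (float64, error) {
	client, ok := r.clientsByNamespace[namespace]
	if !ok {
		return 0, fmt.Errorf("no KEDA operator registered for namespace %q", namespace)
	}
	return client.GetMetric(ctx, scaledObjectName, metricName)
}
```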

zroubalik avatar Nov 15 '22 09:11 zroubalik

Okay, let's go with your design and once we check it works, we can modify how we establish the connections. For the moment we could go 1-1 and then think about how to solve the connection management. I think that watching KEDA namespaced Endpoints to detect new instances and establishing new connections based on that should be a small change.

JorTurFer avatar Nov 15 '22 21:11 JorTurFer

I'm thinking about the approach and I have one question: will we enforce a single operator instance? The docs explain that multiple instances are allowed for HA purposes, even though a single instance will work. So we could break the behaviour for those users if we establish the connection against the wrong instance.

JorTurFer avatar Nov 15 '22 22:11 JorTurFer

> I'm thinking about the approach and I have one question: will we enforce a single operator instance? The docs explain that multiple instances are allowed for HA purposes, even though a single instance will work. So we could break the behaviour for those users if we establish the connection against the wrong instance.

The gRPC server start will be hooked on the leader election -> so it will run only on the active instance.
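For illustration, a minimal sketch of how that could look with controller-runtime (names and port are hypothetical and the actual PR's implementation may differ): a Runnable whose `NeedLeaderElection()` returns true is only started on the replica holding the leader lease.

```go
// Sketch only (hypothetical types): a controller-runtime Runnable that reports
// NeedLeaderElection() == true is started by the manager only on the instance
// that currently holds the leader lease, so the gRPC server runs on exactly
// one operator replica at a time.
package leaderelected

import (
	"context"
	"net"

	"google.golang.org/grpc"
	ctrl "sigs.k8s.io/controller-runtime"
)

type grpcMetricsServer struct {
	addr string
}

// Start serves gRPC until the context is cancelled; the manager calls it only
// after this replica wins the leader election.
func (s *grpcMetricsServer) Start(ctx context.Context) error {
	lis, err := net.Listen("tcp", s.addr)
	if err != nil {
		return err
	}
	srv := grpc.NewServer()
	// register the metrics service implementation here (omitted in the sketch)
	go func() {
		<-ctx.Done()
		srv.GracefulStop()
	}()
	return srv.Serve(lis)
}

// NeedLeaderElection ties the Runnable's lifecycle to the leader lease.
func (s *grpcMetricsServer) NeedLeaderElection() bool { return true }

func addToManager(mgr ctrl.Manager) error {
	// ":9666" is a placeholder port for the sketch
	return mgr.Add(&grpcMetricsServer{addr: ":9666"})
}
```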

zroubalik avatar Nov 16 '22 11:11 zroubalik

> The gRPC server start will be hooked on the leader election -> so it will run only on the active instance.

Nice! I have to think about this slowly to discover potential gaps/issues, but this is probably a better solution than mine. Congrats! 👏

JorTurFer avatar Nov 16 '22 12:11 JorTurFer

> The gRPC server start will be hooked on the leader election -> so it will run only on the active instance.

> Nice! I have to think about this slowly to discover potential gaps/issues, but this is probably a better solution than mine. Congrats! 👏

see this https://github.com/kedacore/keda/pull/3861/commits/19cd30155c65e1da7fb51d71bf74366d768b104b

zroubalik avatar Nov 16 '22 12:11 zroubalik

@zroubalik I think we can close this?

tomkerkhove avatar Dec 12 '22 10:12 tomkerkhove

@tomkerkhove there is one outstanding sub-issue TBD; I'd keep this open until we fix it.

zroubalik avatar Dec 12 '22 12:12 zroubalik