
MultiClusterService: max-rate

yellowhat opened this issue 1 year ago · 7 comments

Hi, I would like to manage max-rate-per-endpoint via the MultiClusterService or BackendConfig Kubernetes manifest. Currently, even if I change the "Max RPS" value for an endpoint via the console, it is reset to the default value (100000000) after a few minutes.

I have tried to add:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: name
  annotations:
    networking.gke.io/max-rate-per-endpoint: "1"

or

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: name
  annotations:
    cloud.google.com/backend-config: '{"default": "{{ .Release.Name }}"}'
    networking.gke.io/max-rate-per-endpoint: "1"

or modifying the Service created by the MultiClusterService:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"8080":{}}}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"5000":"..."},"zones":["us-central1-a","us-central1-c","us-central1-f"]}'
    networking.gke.io/max-rate-per-endpoint: "1"

But the console always shows: Max RPS: 100000000 (per endpoint)
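For reference, the same value can be checked on the underlying compute backend service (gcloud compute backend-services describe; the resource name is generated by the controller, so it is elided here). The relevant backends section looks roughly like this:

backends:
- balancingMode: RATE
  group: https://www.googleapis.com/compute/v1/projects/.../networkEndpointGroups/...
  maxRatePerEndpoint: 100000000.0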

Thanks

yellowhat · Oct 03 '23

/assign @swetharepakula

gauravkghildiyal · Oct 03 '23

I get the same behaviour even if the MultiClusterService is created from scratch with the networking.gke.io/max-rate-per-endpoint: "1" annotation.
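For reference, a minimal sketch of the manifest I used (name, namespace, selector and ports are placeholders from my setup):

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: example
  namespace: example
  annotations:
    networking.gke.io/max-rate-per-endpoint: "1"
spec:
  template:
    spec:
      selector:
        app: example
      ports:
        - name: web
          protocol: TCP
          port: 8080
          targetPort: 8080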

yellowhat · Oct 09 '23

Also, I have noticed that if you create an Ingress (not a MultiClusterIngress), the default is Max RPS: 1 (per endpoint).
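For comparison, this is a minimal single-cluster sketch that ends up with Max RPS: 1 for me. Names, selector and port are placeholders; the NEG annotation is the standard one for container-native load balancing:

apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: example
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  defaultBackend:
    service:
      name: example
      port:
        number: 8080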

yellowhat · Oct 09 '23

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jan 29 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Feb 28 '24

/kind feature

swetharepakula · Mar 07 '24

/lifecycle frozen

swetharepakula · Mar 07 '24