ingress-gce
MultiClusterService: max-rate
Hi,
I would like to manage the max-rate-per-endpoint setting via the MultiClusterService or BackendConfig Kubernetes manifest.
Currently, even if I change the "Max RPS" value for an endpoint via the console, after a few minutes it is reset to the default value (100000000).
I have tried to add:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: name
  annotations:
    networking.gke.io/max-rate-per-endpoint: "1"
```
or
```yaml
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: name
  annotations:
    cloud.google.com/backend-config: '{"default": "{{ .Release.Name }}"}'
    networking.gke.io/max-rate-per-endpoint: "1"
```
or modifying the derived Service created by the MultiClusterService:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"8080":{}}}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"5000":"..."},"zones":["us-central1-a","us-central1-c","us-central1-f"]}'
    networking.gke.io/max-rate-per-endpoint: "1"
```
But the console always shows: Max RPS: 100000000 (per endpoint)
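For context, the value shown in the console corresponds to the `maxRatePerEndpoint` field on the backends of the GCE backend service that ingress-gce generates. One way to check the value actually applied, independent of the console, is `gcloud compute backend-services describe <name> --global`; the output contains the backend list roughly as in the excerpt below (the project, zone, and NEG names here are illustrative, not taken from this cluster):

```yaml
# Excerpt of `gcloud compute backend-services describe` output (names are illustrative)
backends:
- balancingMode: RATE
  group: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/networkEndpointGroups/k8s1-example-neg
  maxRatePerEndpoint: 100000000.0
```

Watching this field after editing it in the console would confirm whether the controller is the one resetting it.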
Thanks
/assign @swetharepakula
I get the same behaviour even if the MultiClusterService is created from scratch with the networking.gke.io/max-rate-per-endpoint: "1" annotation.
I have also noticed that if you create an Ingress (not a MultiClusterIngress), the default is Max RPS: 1 (per endpoint).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/kind feature
/lifecycle frozen