Max-connection annotations are not taken into account for extension services used for external authorization
What steps did you take and what happened:
- Followed the guide https://projectcontour.io/docs/1.28/guides/external-authorization/ to set up external authorization
- Altered the service projectcontour-auth/htpasswd to carry the following annotations (a sketch of the annotated Service follows these steps):
annotations:
  projectcontour.io/max-connections: "8192"
  projectcontour.io/max-pending-requests: "8192"
  projectcontour.io/max-requests: "8192"
- Ran the following commands to expose the Envoy admin API:
ENVOY_POD=$(kubectl -n projectcontour get pod -l app=envoy -o name | head -1)
kubectl -n projectcontour port-forward $ENVOY_POD 9001
- In another terminal window, checked whether the extension service cluster is now set to 8192 by calling curl http://localhost:9001/clusters
After checking, though, I see the extension service cluster is still set to 1024 for max_connections, max_pending_requests, and max_requests.
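For reference, here is roughly what the annotated Service looks like (a minimal sketch; the selector and port values are illustrative placeholders, not copied from the guide):

apiVersion: v1
kind: Service
metadata:
  name: htpasswd
  namespace: projectcontour-auth
  annotations:
    projectcontour.io/max-connections: "8192"
    projectcontour.io/max-pending-requests: "8192"
    projectcontour.io/max-requests: "8192"
spec:
  selector:
    app.kubernetes.io/name: htpasswd   # illustrative selector
  ports:
    - name: auth
      port: 9443   # illustrative port
      protocol: TCP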
What did you expect to happen:
I expected the annotations to behave the same way they do for services referenced by an HTTPProxy: picked up by Contour and reflected in the generated Envoy clusters.
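For comparison, this is the kind of setup where the annotations are honoured today: a Service carrying the same annotations, when referenced by an HTTPProxy route, gets those values applied to its generated Envoy cluster. A minimal sketch (names, fqdn, and port are illustrative):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example
  namespace: default
spec:
  virtualhost:
    fqdn: example.local   # illustrative fqdn
  routes:
    - services:
        - name: backend    # Service annotated with projectcontour.io/max-connections etc.
          port: 80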
Anything else you would like to add:
I went through the guide to have a repeatable pattern. I tested on my local rancher-desktop cluster, but I saw the same result with Contour 1.25 on an EKS cluster running Kubernetes 1.26.x.
Let me know if you need any more info.
Environment:
- Contour version: 1.28
- Kubernetes version (use kubectl version):
  Client Version: v1.29.0
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.28.7+k3s1
- Kubernetes installer & version: rancher-desktop v1.13.1
- Cloud provider or hardware configuration: macOS on Apple M1
- OS (e.g. from /etc/os-release): macOS Sonoma 14.4.1 (23E224)
Hey @codymoss-bnet! Thanks for opening your first issue. We appreciate your contribution and welcome you to our community! We are glad to have you here and to have your input on Contour. You can also join us on our mailing list and in our channel in the Kubernetes Slack Workspace.
The Contour project currently lacks enough contributors to adequately respond to all Issues.
This bot triages Issues according to the following rules:
- After 60d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the Issue is closed
You can:
- Mark this Issue as fresh by commenting
- Close this Issue
- Offer to help out with triage
Please send feedback to the #contour channel in the Kubernetes Slack