Dynamic header matching in HTTPRoute
What would you like to be added: A standard API for routing based on matching metadata derived from dynamically evaluated request headers.
Why this is needed:
Users want to be able to route HTTP requests based on a validated JWT associated with the request.
Istio, for example, supports this by using header names with a special prefix (@) in a route's header match configuration to indicate that the match is made not against a literal header value, but against metadata associated with the validated JWT: https://istio.io/latest/docs/tasks/security/authentication/jwt-route/#configuring-ingress-routing-based-on-jwt-claims
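For a concrete picture, the match in the linked Istio task looks roughly like this on a VirtualService (abridged; the claim name, value, and destination are illustrative):

```yaml
# Istio VirtualService (abridged): the "@request.auth.claims." prefix tells Istio
# to match against claims from the validated JWT rather than a literal request header.
http:
- match:
  - uri:
      prefix: /
    headers:
      "@request.auth.claims.groups":   # JWT claim, not a real header on the wire
        exact: group1
  route:
  - destination:
      host: httpbin.foo.svc.cluster.local
      port:
        number: 8000
```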
Although Istio could support this in Gateway API routes with some implementation-specific (e.g., prefix-based) solution, it would be better to define an optional standard API for this kind of routing in Gateway API.
Maybe simply permitting some prefix (e.g., @) in a header name to signal implementation-specific dynamic header matching would be minimally sufficient? Beyond that, defining some standard dynamic header names (for example, one for matching JWT metadata) would also be nice to have in the Gateway API spec. A rough sketch of the prefix idea is below.
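To make the prefix idea concrete, a hypothetical HTTPRoute could look something like the following. Note that neither the @ prefix nor the `@jwt.claims.groups` name exists in the current spec, and today's header-name validation would reject it; this is only a sketch of the proposed shape:

```yaml
# Hypothetical sketch only: "@" marks a dynamic, implementation-specific match key.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: jwt-claim-routing
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - headers:
      - name: "@jwt.claims.groups"   # hypothetical dynamic header name
        type: Exact
        value: group-1
    backendRefs:
    - name: backend-group1
      port: 80
```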
This came up in a conversation between @kflynn, @robscott, and me a few weeks back, and I have been slow to start the write-up.
Looking across existing implementations (not all Gateway API, of course), there are notable differences that need to be surfaced.
Not to mention there are two distinct use cases here: JWT validation and claims checking.
Let me see how quickly I can get my current (and not yet polished) write-up checked in for review and start the broader process.
Another, similar use case is setting headers from dynamic values, e.g. `set: x-client-country: %CLIENT_COUNTRY%` or something like it. I think most proxies support this in some way or another.
Standardization seems near impossible, but maybe it would be nice as an extension? A rough sketch of how that could surface in a filter is below.
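For illustration, such a value could surface through the existing RequestHeaderModifier filter. This is only a sketch: today Gateway API treats the value as a literal string, and %CLIENT_COUNTRY% is a made-up proxy variable, not part of any spec:

```yaml
# Hypothetical sketch: dynamic expansion of the value would be an
# implementation-specific extension; today this would be set verbatim.
filters:
- type: RequestHeaderModifier
  requestHeaderModifier:
    set:
    - name: x-client-country
      value: "%CLIENT_COUNTRY%"   # hypothetical proxy-specific dynamic variable
```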
I wonder if, combined with https://github.com/kubernetes-sigs/gateway-api/issues/2166, the generalized case here is implementation-specific routing: the data plane provides some parameters which can be used to make a routing decision.
Perhaps something like this could work:
Client IP based routing:

```yaml
- matches:
  - proxyKeys:
    - name: $client_ip   # proxy specific
      type: CIDR         # proxy specific
      value: 192.168.0.0/16
  backendRefs:
  - name: backend-local
    port: 80
```

JWT claim based routing:

```yaml
- matches:
  - proxyKeys:
    - name: $jwt_claim_group   # proxy specific
      type: Exact              # proxy specific
      value: group-1
  backendRefs:
  - name: backend-group1
    port: 80
```

Cookie based routing:

```yaml
- matches:
  - proxyKeys:
    - name: $cookie_version   # proxy specific
      type: Exact             # proxy specific
      value: v2
  backendRefs:
  - name: backend-v2
    port: 80
```
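For context, here is how one of those fragments might sit inside a complete HTTPRoute. `proxyKeys` is not part of the Gateway API spec, so this is purely a sketch of the proposed shape (gateway and backend names are made up):

```yaml
# Sketch only: "proxyKeys" is not part of the Gateway API spec.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: claim-based-routing
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - proxyKeys:
      - name: $jwt_claim_group   # proxy specific
        type: Exact              # proxy specific
        value: group-1
    backendRefs:
    - name: backend-group1
      port: 80
  - backendRefs:                 # default backend when nothing above matches
    - name: backend-default
      port: 80
```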
Okay, time to check my doc in, even though I am not really 'ready' (but the conversation is).
I think it comes down to: Policy with extensions, or Policy and Extensions. There is a reusable part here that leads to Polic-y/ies.
Where is the place to get the GEP checked in?
@brianehlert If you'd like to use this issue to start the GEP, then I think you just create a PR with a file named geps/gep-2198.md to start the discussion.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.