Send different headers from canary ingress vs. stable
Hello,
My team is trying to coordinate a 'canary' release across several different services. We currently release all of our services at once, via a rolling update. We want to keep the all-at-once pattern, but transition to canary releases.
One service that's part of the release is our UI. If we are to deploy a canary version of the UI, we would want it to only communicate with canary versions of our other services (as opposed to communicating with the stable versions of those services).
To achieve this, I'd like our UI ingress to respond with a cookie saying whether or not it is a canary, as I've outlined below. This cookie would then live in the browser and be passed along in any subsequent API requests that the UI makes.
```yaml
# canary
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: 'true'
    # This cookie is used by all of our services to determine whether a request goes to canary or stable
    nginx.ingress.kubernetes.io/canary-by-cookie: X-Canary
    nginx.ingress.kubernetes.io/canary-by-header: X-Canary
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # note the difference between this and the stable counterpart below
      add_header Set-Cookie "X-Canary=Always; Path=/; Secure;";
```

```yaml
# stable
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header Set-Cookie "X-Canary=Never; Path=/; Secure;";
```
However, my current approach with configuration-snippet won't work. There is this note in the ingress-nginx documentation:
"Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity."
Effectively, this means that both the canary and the stable ingress send the same header.
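For what it's worth, the closest workaround I can think of is an untested sketch, not something the docs promise for this purpose: set the cookie from a single snippet on the stable ingress (which the canary then inherits, per the note above) and lean on ingress-nginx's $proxy_upstream_name variable, which names the backend that actually served the request (it also appears in the default access-log format) and is only expanded by add_header when the response goes out. The cookie name X-Canary-Upstream is my invention here:

```yaml
# Sketch only: one snippet on the stable ingress, inherited by the canary.
# $proxy_upstream_name identifies the backend nginx actually picked, so the
# client can at least tell canary responses apart from stable ones.
nginx.ingress.kubernetes.io/configuration-snippet: |
  add_header Set-Cookie "X-Canary-Upstream=$proxy_upstream_name; Path=/; Secure;";
```

The downside is that this yields the upstream name rather than the always/never values that canary-by-cookie understands, so something downstream would still have to translate it.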
What do you want to happen?
The canary ingress to be able to send a different header than the stable ingress
Is there currently another issue associated with this?
No
Does it require a particular kubernetes version?
Afaik, no
@tall-dan: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What does this mean?
that means that we want canary services to only talk to canary services,
We release multiple services at the same time. If we want to canary-ize our release, that means that a given environment would have ServiceA, ServiceB, ServiceC, etc. and also ServiceA-Canary, ServiceB-Canary, ServiceC-Canary, etc.
More than one service may be required to respond to any HTTP request, e.g. ServiceA calls ServiceB before providing a final response. We want the request to be canary-or-not throughout its lifecycle, so either ServiceA -> ServiceB or ServiceA-Canary -> ServiceB-Canary, but not ServiceA -> ServiceB-Canary.
The first service that a user is going to encounter is our UI, which can be canary or stable. I would like the UI to know whether it is canary or stable, so it can pass that information along on the API requests it makes. To accomplish that, I'd like the request that loads the UI (which would hit the UI ingress) to come back with a Set-Cookie: X-Canary=<always | never> header. That cookie would then be stored in the user's browser and sent along with any subsequent requests that the UI makes. The services that receive those requests will route them based on the value of the X-Canary cookie, thus achieving the goal of 'canary only talks to canary' (for the UI -> first backend service request; backend service -> backend service communication won't rely on cookies and will be handled separately).
The heart of the feature request is Set-Cookie: X-Canary=<always | never>. I would like a canary ingress to be able to set the value to always, and its corresponding stable ingress to set it to never.
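To illustrate the intended round trip (the hostname and API path here are hypothetical):

```shell
# 1. Loading the UI hands the cookie back via the ingress
curl -i https://ui.example.com/

# 2. The browser then replays that cookie on API calls, and
#    canary-by-cookie routes each request on its value
curl https://ui.example.com/api/widgets --cookie 'X-Canary=always'
```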
Thanks for looking at this, and let me know if I can answer any other questions!
Why use ingress for ServiceA to ServiceB? They should be talking internally.
Thanks, Long
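(For example, an in-cluster call would normally go straight to the Service DNS name and never touch the ingress; the names below are hypothetical:)

```shell
# Pod-to-pod call via the ClusterIP Service, bypassing the ingress entirely
curl http://service-b.default.svc.cluster.local/health
```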
Right, communication between backend services isn't really the focus of this feature request. Let's ignore that and focus on the UI and any API requests coming out of it.
The UI is a service, and uses an ingress. I'd like that ingress to set a cookie that says whether or not it's a canary. That cookie would then be sent along with any subsequent API requests (to ServiceB, ServiceC, etc.)
I'm going to edit the issue body to make this a little more clear
OK. What will really help is if you write down all the YAML for all the objects you will create, like:
- pods/deployments (maybe nginx:alpine for stable and httpd:alpine for canary)
- services
- ingresses
Then write the HTTP requests as curl commands in full.
Follow that up with the expected results.
I am a little confused, because the only 2 valid values documented are "always" and "never":
nginx.ingress.kubernetes.io/canary-by-cookie: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always, it will be routed to the canary. When the cookie is set to never, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.
@longwuyuan Sure, I've set up an end-to-end example using minikube. The gist referenced below contains the YAML for all the necessary objects. It can be found here.
Setup

```shell
minikube start -p ingress-test
minikube addons enable ingress -p ingress-test
kubectl apply -f https://gist.githubusercontent.com/tall-dan/24e94e8548ea97fce4f577795a853a9f/raw/f1bdef5f7b13bf8d97c9cabc393808a6ea9480ba/kubernetes_resources.yaml
minikube profile list   # grab the IP address for ingress-test
sudo vim /etc/hosts     # create a new entry: <ip from above> example.info
```
Example curls
Hitting stable (this behaves the way I want)
```shell
➜ curl example.info -i --header 'X-Canary: never' -s
HTTP/1.1 200 OK
Date: Mon, 23 May 2022 15:13:40 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 60
Connection: keep-alive
Set-Cookie: X-Canary=never; Path=/; Secure;   # <-- I've specified 'never' in my request, so the ingress appending the 'never' cookie is good

Hello, world!
Version: 1.0.0
Hostname: web-79d88c97d6-k788l
```
Hitting canary (this is where the issue lies)
```shell
➜ curl example.info -i --header 'X-Canary: always' -s
HTTP/1.1 200 OK
Date: Mon, 23 May 2022 15:10:27 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 61
Connection: keep-alive
Set-Cookie: X-Canary=never; Path=/; Secure;   # <-- This is where I want to see 'always' instead of 'never', because I've dictated it on this line: https://gist.github.com/tall-dan/24e94e8548ea97fce4f577795a853a9f#file-kubernetes_resources-yaml-L165

Hello, world!
Version: 2.0.0
Hostname: web2-5d47994f45-7qspx
```
Notes
- I'm sending the 'X-Canary' header above to reliably dictate which ingress I hit. This wouldn't be true in the wild; there the ingress would be dictated by traffic weight (see the sketch after this list).
- I extracted all of the resource definitions from kubernetes after I'd made sure they worked on my machine; there might be some extra cruft in there.
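For reference, the in-the-wild split would come from a weight annotation on the canary ingress rather than an explicit header; a minimal sketch, with the percentage as an assumed value:

```yaml
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"   # assumed value: route ~10% of traffic to the canary
```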
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@tall-dan I am also facing the same issue. Is your issue resolved?
If yes, can you share the solution?
@ayushpadia no, we never did resolve this.
@longwuyuan Can this be considered? We actually have a similar case.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".