netscaler-k8s-ingress-controller
Session affinity not working correctly
Setting up session affinity following the guide https://docs.netscaler.com/en-us/citrix-k8s-ingress-controller/how-to/session-affinity.html creates the correct configuration on the Netscaler, but traffic is still load balanced round robin on the Kubernetes side. This causes applications that require session affinity to not work properly.
Steps to reproduce:
1. Create an app that returns the pod name, for example:

```python
from flask import Flask
import pprint
import os

class LoggingMiddleware(object):
    def __init__(self, app):
        self._app = app

    def __call__(self, env, resp):
        errorlog = env['wsgi.errors']
        pprint.pprint(('REQUEST', env), stream=errorlog)

        def log_response(status, headers, *args):
            pprint.pprint(('RESPONSE', status, headers), stream=errorlog)
            return resp(status, headers, *args)

        return self._app(env, log_response)

app = Flask(__name__)

@app.route('/')
def hello_world():
    return f"Hello from {os.environ['HOSTNAME']}"

if __name__ == '__main__':
    app.wsgi_app = LoggingMiddleware(app.wsgi_app)
    app.run(host='0.0.0.0', port=8080)
```
2. Set up the app with a Service and an Ingress like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp-frontend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-frontend
  annotations:
    ingress.citrix.com/preconfigured-certkey: '{"certs": [ {"name": "mycert", "type": "sni"} ] }'
    ingress.citrix.com/lbvserver: '{"myapp-frontend":{"persistenceType":"SOURCEIP", "timeout":"10"}}'
spec:
  tls:
  - secretName:
  rules:
  - host: "myapp-dev.k8s-test.it.cobra.group"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-frontend
            port:
              number: 80
```
Ingress controller version is 1.31.3 with Netscaler VPX version NS13.0 91.13.nc
Expected behavior: traffic is balanced according to the stickiness policy defined in the ingress annotations.
@mattiamondini
I've noticed that you have the service "myapp-frontend" as type: NodePort to be exposed through the ADC.
By default, the ingress controller configures the node IPs as the backend endpoints in the service groups.
Enabling persistence on the ADC therefore only ensures that requests from the same client IP are directed to the same Kubernetes node, not the same pod.
Are your Netscaler and Kubernetes cluster nodes in the same network?
@apoorva-05 yes, they are in the same network but in different subnets (not all ports are visible, only a specified range).
> Enabling persistence on the ADC therefore only ensures that requests from the same client IP are directed to the same Kubernetes node, not the same pod.

I've understood this, but for other ingress controllers I've read that when session affinity is configured, the affinity is also propagated to the Service load balancing in some way.
Is there a way to achieve session affinity from the ingress down to the pod with the Netscaler ingress controller?
@mattiamondini
It's possible for services with type: ClusterIP. In this case, the ingress controller directly exposes the pod IPs on the Netscaler VPX/MPX, and your current annotation should be enough to maintain session affinity to the pod.
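A minimal sketch of that variant, assuming the same manifests from the reproduction steps: only `spec.type` on the Service changes, while the Ingress (including the `ingress.citrix.com/lbvserver` persistence annotation) stays as in step 2.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: ClusterIP   # was NodePort; the controller now adds pod IPs, not node IPs, to the service group
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp-frontend
```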
However, we'll explore whether there's a way to achieve this for the NodePort service itself.
@apoorva-05
we tried using type: ClusterIP at first, but, probably because the Netscaler cannot access the Kubernetes internal network, we were not able to get a working ingress; we thought that for this kind of Service a dual-tier topology was needed.
Also, type: ClusterIP services are not described in the Deployment Topologies documentation.
@mattiamondini
If both the ADC and the Kubernetes cluster are on the same subnet, you can expose the service of type: ClusterIP via an ingress, similar to how you exposed the NodePort service.
Also, when deploying the ingress controller, enable feature-node-watch so that the controller can add static routes on the ADC to reach the backend pods (Ref: Link); a configuration sketch of both options follows below:
- If you are deploying the ingress controller directly via YAML manifests, set the following in the args section:
  `--feature-node-watch true`
- If you are deploying via Helm charts, set the following in values.yaml:
  `nodeWatch: true`
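A minimal sketch of both, assuming the stock ingress controller deployment layout (the container name below is a placeholder):

```yaml
# Option 1 (plain YAML manifests): fragment of the ingress controller
# Deployment spec; only the args entry is added.
spec:
  template:
    spec:
      containers:
      - name: cic-k8s-ingress-controller   # placeholder container name
        args:
        - --feature-node-watch true
---
# Option 2 (Helm chart): equivalent setting in values.yaml
nodeWatch: true
```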
If you're interested, we'd be delighted to schedule a call to delve deeper into your topology to explore and recommend the optimal deployment solution tailored to your specific use case. CC: @ankits123 @dheerajng @subashd
@apoorva-05 our ADC is not on the same subnet, but we can open a port range from the ADC to the Kubernetes cluster subnet if needed.
Having a meeting would be great; any suggestion to improve the solution is appreciated.