kubernetes-ingress
Inconsistent balancing
Describe the Bug
HAProxy Ingress Controller fails to consistently route requests to the appropriate pods based on the RouteToken parameter in the URL. To reproduce this inconsistency I have created a demoapp; all the details of what the app does and what it tries to demonstrate are explained in the demoapp README.
Steps to Reproduce
- Install the HAProxy Ingress Controller with the NodePort service type.
- Install the demoapp using the provided Helm chart.
- Use the test-ha.py script to test the behavior:
python test-ha.py http://test.demoapp:<whatever-nodeport-you-have-configured>
Expected Behavior
Requests with a specific RouteToken parameter in the URL should consistently reach the demoapp pod associated with that RouteToken. This behavior should persist even with a high number of demoapp pods.
Actual Behavior
Requests do not consistently reach the correct demoapp pod based on the RouteToken. The observed behavior deviates from the expected behavior, especially when the number of demoapp pods is increased.
Version
HAProxy Ingress Controller: 1.10.10
Additional Context
- The HAProxy Ingress Controller is configured to balance based on url_param RouteToken.
- The Ingress object for the demoapp is configured with the relevant annotations for HAProxy.
- The issue is reproducible with the provided testing script test-ha.py and a demoapp setup with a significant number of pods (e.g., 50); a sketch of the kind of check the script performs follows this list.
- Additional details and the demo app setup are available in this GitHub repository.
- I have tried the community HAProxy ingress controller and the NGINX Ingress Controller, and I don't face this problem with either. :)
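For reference, here is an illustrative sketch of the kind of check test-ha.py performs; it is not the actual script. It assumes the demoapp exposes a /serverId endpoint that returns an identifier of the serving pod, and the check_consistency helper, the attempts count, and the token values are made up for illustration. It sends repeated requests with the same RouteToken and reports tokens that are answered by more than one pod.
# Illustrative sketch only, not the actual test-ha.py.
# Assumes the demoapp's /serverId endpoint returns an identifier of the serving pod.
import sys
import requests

def check_consistency(base_url, tokens, attempts=20):
    inconsistent = {}
    for token in tokens:
        seen = set()
        for _ in range(attempts):
            resp = requests.get(
                f"{base_url}/serverId",
                params={"RouteToken": token},
                headers={"Host": "test.demoapp"},  # useful when the URL uses a raw node/pod IP
            )
            seen.add(resp.text.strip())
        if len(seen) > 1:
            inconsistent[token] = seen
    return inconsistent

if __name__ == "__main__":
    base_url = sys.argv[1].rstrip("/")
    bad = check_consistency(base_url, tokens=[f"token-{i}" for i in range(10)])
    for token, pods in bad.items():
        print(f"RouteToken {token} was served by multiple pods: {sorted(pods)}")
    print("OK: routing is consistent" if not bad else f"{len(bad)} tokens routed inconsistently")
Usage would mirror the step above, e.g. python3 sketch.py http://test.demoapp:&lt;nodeport&gt; (sketch.py being a hypothetical file name).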
hi @Rash419 thx for the report, we will look at it.
Hi @Rash419, I'm in the process of reproducing the issue with your inputs. They greatly help, thanks.
Hi @Rash419, I tried to reproduce the issue you submitted with your inputs. At one point it seemed I could see it, but retrying with a fresh cluster I can't get it anymore. Could you try on your side as I did? Create a fresh cluster with Kind and the creation script we provide here. The version of Kind I'm using is:
kind v0.22.0 go1.20.13 linux/amd64
From the project root directory you run:
deploy/tests/create.sh
Then you can change the version in the created deployment to align with 1.10.10. I diverge from your protocol in a few points. I use an Ubuntu pod to run the test, started with:
kubectl run -it --rm=true --image=ubuntu myshell -- bash
Then I install what I need with:
apt update && apt install -y python3 && apt install -y pip && pip install requests
I copy your python script with:
cat <<EOF> script.py
and run it with:
python3 ./script.py http://podip:podport
The host header can be added in the script by changing:
response = requests.get(url)
to
headers = {'Host': 'test.demoapp'}
response = requests.get(url, headers=headers)
@ivanmatmati Thanks, I will take a look
kind v0.22.0 go1.20.13 linux/amd64
@ivanmatmati How do I install Kind? Can you explain how to install it?
You can find the info on this page. Does it help?
@ivanmatmati Sorry, I don't get it. Can you test it without installing the demo app? Is that so? Because I don't see an "install the demo app" step in the given instructions.
No, of course you need the demo app. You can install it when the cluster is up and of course before running the test.
@ivanmatmati So I installed everything, but for some reason the requests don't reach the demoapp pod:
10.244.0.1:32219 [29/Feb/2024:08:23:34.883] http haproxy-controller_default-local-service_http/SRV_1 0/0/0/0/0 404 135 - - ---- 1/1/0/0/0 0/0 "GET test.demoapp:30080/serverId HTTP/1.1"
2024/02/29 08:23:50 ERROR global.go:63 [transactionID=0318c5be-ea69-42bf-9732-611a986e53cf] Global config: annotation cr-global: custom resource 'haproxy-controller/global-full' doest not exist
2024/02/29 08:23:50 ERROR global.go:67 [transactionID=0318c5be-ea69-42bf-9732-611a986e53cf] Global logging: annotation cr-global: custom resource 'haproxy-controller/global-full' doest not exist
2024/02/29 08:24:21 ERROR global.go:63 [transactionID=fb8e546f-ce02-4d97-a6b1-509163558131] Global config: annotation cr-global: custom resource 'haproxy-controller/global-full' doest not exist
2024/02/29 08:24:21 ERROR global.go:67 [transactionID=fb8e546f-ce02-4d97-a6b1-509163558131] Global logging: annotation cr-global: custom resource 'haproxy-controller/global-full' doest not exist
2024/02/29 08:24:23 ERROR global.go:63 [transactionID=dc009f79-3cff-470d-ab9e-c0b0a004c5c1] Global config: annotation cr-global: custom resource 'haproxy-controller/global-full' doest not exist
2024/02/29 08:24:23 ERROR global.go:67 [transactionID=dc009f79-3cff-470d-ab9e-c0b0a004c5c1] Global logging: annotation cr-global: custom resource 'haproxy-controller/global-full' doest not exist
10.244.0.1:48935 [29/Feb/2024:08:25:17.070] http haproxy-controller_default-local-service_http/SRV_1 0/0/0/0/0 404 135 - - ---- 1/1/0/0/0 0/0 "GET localhost:30080/serverId HTTP/1.1"
10.244.0.1:1642 [29/Feb/2024:08:25:55.891] http haproxy-controller_default-local-service_http/SRV_1 0/0/0/0/0 404 135 - - ---- 2/2/0/0/0 0/0 "GET test.demoapp:30080/serverId HTTP/1.1"
I guess I'm hitting a similar issue, caused by backend servers being injected in a different order among HAProxy's instances. It was driving me nuts.
Our case is consistent hashing on a custom header among 3 Varnish instances (created with the Varnish operator).
I tested that all HAProxys generate the same hash for the same header, so the hash tree must be different between instances, and that proved correct, since the server list is different:
# for pod in $(kubectl get pod -l app.kubernetes.io/instance=haproxy0 -o name); do echo "- $pod:"; kubectl exec -i -t $pod -- cat /etc/haproxy/haproxy.cfg | grep 6081 | grep -v disabled ; done
- pod/haproxy0-kubernetes-ingress-5b9f87996f-947gv:
server SRV_1 10.2.11.89:6081 enabled
server SRV_2 10.2.10.68:6081 enabled
server SRV_3 10.2.11.44:6081 enabled
- pod/haproxy0-kubernetes-ingress-5b9f87996f-fm8vl:
server SRV_1 10.2.10.68:6081 enabled
server SRV_2 10.2.11.44:6081 enabled
server SRV_3 10.2.11.89:6081 enabled
- pod/haproxy0-kubernetes-ingress-5b9f87996f-j2c7r:
server SRV_1 10.2.10.68:6081 enabled
server SRV_2 10.2.11.44:6081 enabled
server SRV_3 10.2.11.89:6081 enabled
I haven't dug into the code enough, but I bet there is no server order guarantee across instances.
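To make that suspicion concrete, here is a rough illustration, not HAProxy's actual hashing code: two load-balancer instances compute the same hash for the same key but map it onto server lists ordered as in the haproxy.cfg dumps above, so the same key can land on different backends. The pick_server helper and the example keys are made up for illustration.
# Illustration only, not HAProxy's real algorithm: the same key hashes to the
# same value on every instance, but map-based selection turns that hash into a
# position in the server list, so differently ordered lists pick different IPs.
import hashlib

def pick_server(key, servers):
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]  # position-based mapping

# Server orders as dumped from the controller pods above.
instance_a = ["10.2.11.89:6081", "10.2.10.68:6081", "10.2.11.44:6081"]  # ...-947gv
instance_b = ["10.2.10.68:6081", "10.2.11.44:6081", "10.2.11.89:6081"]  # ...-fm8vl / ...-j2c7r

for key in ("alpha", "bravo", "charlie"):
    a = pick_server(key, instance_a)
    b = pick_server(key, instance_b)
    print(f"{key}: instance A -> {a}, instance B -> {b}, {'same' if a == b else 'DIFFERENT'}")
The same reasoning would apply even with a consistent hash keyed on the SRV_* slot names: when those slots map to different endpoint IPs on different controller pods, the same key resolves to the same slot but a different backend.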
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
python3 ./script.py http://podip:podport
@ivanmatmati Do I need to pass the pod IP and pod port of the HAProxy ingress? Sorry, but I don't understand your steps.
I usually use minikube. I get the IP of the minikube cluster and add an entry to my /etc/hosts: 192.168.1.201 test.demoapp
Then I get the NodePort exposed by HAProxy with kubectl get svc --namespace=haproxy-controller, which shows port 80 assigned to something like 30080, so my URL becomes
http://test.demoapp:30080
How can I achieve the same thing in a kind cluster?
@ivanmatmati One more thing to consider when reproducing the problem: make sure to scale HAProxy to 2 pods.