Comparing Contour and public cloud ingress performance
How can I optimize Contour? I used ab to benchmark both Contour and an ELB.
Result for Contour:

Metric | Value |
---|---|
Server Software | envoy |
Server Port | 80 |
Document Path | / |
Document Length | 612 bytes |
Concurrency Level | 10 |
Time taken for tests | 12.048 seconds |
Complete requests | 100 |
Failed requests | 0 |
Total transferred | 87200 bytes |
HTML transferred | 61200 bytes |
Requests per second | 8.30 |
Transfer rate | 7.07 KB/s received |
Result for the cloud ingress (ELB):

Metric | Value |
---|---|
Server Software | elb |
Server Port | 80 |
Document Path | / |
Document Length | 612 bytes |
Concurrency Level | 10 |
Time taken for tests | 1.828 seconds |
Complete requests | 100 |
Failed requests | 0 |
Total transferred | 83600 bytes |
HTML transferred | 61200 bytes |
Requests per second | 54.70 |
Transfer rate | 44.66 KB/s received |
Environment:
- Contour version: 1.2.1
- Kubernetes version: 1.13
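For reference, runs like the ones tabulated above can be produced with a plain ab invocation against each endpoint. A minimal sketch, where the concurrency and request count match the tables but both target addresses are placeholders, not taken from the report:

```sh
# 10 concurrent connections, 100 requests total, against the Contour/Envoy
# ingress; <contour-ingress-ip> is a placeholder.
ab -c 10 -n 100 http://<contour-ingress-ip>/

# The same load against the cloud ELB endpoint; <elb-hostname> is a placeholder.
ab -c 10 -n 100 http://<elb-hostname>/
```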
I spent some time replicating this. I spun up a 4-node cluster on GKE with this script. My backend service was this echo server, which ab can drive at 32K RPS locally on my laptop. I did a variety of simple tests. When I run ab, I am doing `ab -k -c 4 -t -n 9999999`.
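One hedged way to reproduce the in-cluster rows below is to launch ab from a throwaway pod, so the only hop is the cluster network. The image and the `echo-server` Service name are assumptions, and since ab's `-t` flag takes a time limit in seconds, this sketch sticks to `-n`:

```sh
# Run ab from a temporary pod inside the cluster; the httpd image ships ab.
# "echo-server" is a hypothetical Service name for the backend.
kubectl run ab-bench --rm -it --restart=Never --image=httpd:alpine -- \
  ab -k -c 4 -n 9999999 http://echo-server/
```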
RPS | Description |
---|---|
9000 | Running both ab and the echo server in the same container; networking entirely over loopback. |
3000 | Running ab inside the Envoy container, so the first hop is loopback and the second hop is the cluster network. |
200 | Running ab from my laptop to the echo server behind a GKE load balancer (TCP balancer). |
170 | Running ab from my laptop to the echo server behind Envoy: through the GKE load balancer, to Envoy, then across the cluster network. |
4 | Forcing Envoy to generate a 404; that path seems incredibly slow. |
Note that this is all pretty quick & dirty and is only interesting for the comparison.
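To poke at the slow 404 path specifically, one hedged approach is to request a path that no route matches, so Envoy answers with its own 404; the address and path here are assumptions:

```sh
# Request an unrouted path so Envoy itself generates the 404 response.
ab -k -c 4 -n 1000 http://<envoy-ip>/no-such-route
```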
The Contour project currently lacks enough contributors to adequately respond to all Issues.
This bot triages Issues according to the following rules:
- After 60d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the Issue is closed
You can:
- Mark this Issue as fresh by commenting
- Close this Issue
- Offer to help out with triage
Please send feedback to the #contour channel in the Kubernetes Slack