# 1m-go-tcp-server
Benchmarks for implementations of servers that support 1 million connections, inspired by *handling 1M websockets connections in Go*.
## Servers
- 1_simple_tcp_server: a 1m-connections server implemented with one goroutine per connection (a minimal sketch of this style follows the list)
- 2_epoll_server: a 1m-connections server implemented with epoll (a minimal sketch of this style also follows the list)
- 3_epoll_server_throughputs: adds throughput and latency tests to 2_epoll_server
- 4_epoll_client: a client implemented with epoll
- 5_multiple_client: uses multiple epoll instances to manage connections in the client
- 6_multiple_server: uses multiple epoll instances to manage connections in the server
- 7_server_prefork: uses the Apache prefork style to implement the server
- 8_server_workerpool: uses the Reactor pattern to implement multiple event loops
- 9_few_clients_high_throughputs: a simple goroutine-per-connection server for testing throughput and latency
- 10_io_intensive_epoll_server: an IO-bound multiple-epoll server
- 11_io_intensive_goroutine: an IO-bound goroutine-per-connection server
- 12_cpu_intensive_epoll_server: a CPU-bound multiple-epoll server
- 13_cpu_intensive_goroutine: a CPU-bound goroutine-per-connection server
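For reference, here is a minimal sketch of the goroutine-per-connection style used by 1_simple_tcp_server and its variants. It is not the repository's exact code; the listen address `:8972` and the plain echo protocol are assumptions for illustration.

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Listen address is an assumption for illustration.
	ln, err := net.Listen("tcp", ":8972")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go handle(conn) // one goroutine per connection
	}
}

// handle echoes everything it reads back to the client.
func handle(conn net.Conn) {
	defer conn.Close()
	buf := make([]byte, 4096)
	for {
		n, err := conn.Read(buf)
		if err != nil {
			return
		}
		if _, err := conn.Write(buf[:n]); err != nil {
			return
		}
	}
}
```

This style is simple and idiomatic Go, but each goroutine costs a few kilobytes of stack, which is where the memory pressure at 1m connections comes from.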
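And a minimal single-loop sketch of the epoll style, using raw sockets via `golang.org/x/sys/unix` so the file descriptors can be registered with epoll directly. This is Linux-only, is not the repository's exact code, and again assumes port 8972 and an echo protocol for illustration.

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Create the listening socket with raw syscalls so we hold the fd
	// directly. Port 8972 is an assumption for illustration.
	lfd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM|unix.SOCK_NONBLOCK, 0)
	if err != nil {
		log.Fatal(err)
	}
	if err := unix.Bind(lfd, &unix.SockaddrInet4{Port: 8972}); err != nil {
		log.Fatal(err)
	}
	if err := unix.Listen(lfd, 1024); err != nil {
		log.Fatal(err)
	}

	epfd, err := unix.EpollCreate1(0)
	if err != nil {
		log.Fatal(err)
	}
	ev := unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(lfd)}
	if err := unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, lfd, &ev); err != nil {
		log.Fatal(err)
	}

	events := make([]unix.EpollEvent, 128)
	buf := make([]byte, 4096)
	for {
		n, err := unix.EpollWait(epfd, events, -1)
		if err != nil {
			if err == unix.EINTR {
				continue
			}
			log.Fatal(err)
		}
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			if fd == lfd {
				// Drain the accept queue and register each new
				// connection with the same epoll instance.
				for {
					cfd, _, err := unix.Accept(lfd)
					if err != nil {
						break // EAGAIN: nothing left to accept
					}
					unix.SetNonblock(cfd, true)
					cev := unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(cfd)}
					unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, cfd, &cev)
				}
				continue
			}
			// Readable connection: echo back whatever arrived.
			m, err := unix.Read(fd, buf)
			if err != nil || m == 0 {
				unix.Close(fd) // closing also removes the fd from epoll
				continue
			}
			unix.Write(fd, buf[:m])
		}
	}
}
```

One event loop like this avoids per-connection goroutine stacks; the multiple-epoll variants run several such loops and spread connections across them.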
## Test Environment
- two E5-2630 v4 CPUs, 20 cores (40 logical cores) in total
- 32 GB memory
Tune the Linux settings:

```
sysctl -w fs.file-max=2000500
sysctl -w fs.nr_open=2000500
sysctl -w net.nf_conntrack_max=2000500
ulimit -n 2000500
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1
```
The client sends the next request only after it has received the response to the previous one; the tests do not use pipelining. A sketch of this loop follows.
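A minimal sketch of that synchronous request/response loop; the address, message, and fixed-size framing are assumptions for illustration, not the repository's exact client:

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Server address and message format are assumptions for illustration.
	conn, err := net.Dial("tcp", "127.0.0.1:8972")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	req := []byte("hello")
	resp := make([]byte, len(req))
	for i := 0; i < 1000; i++ {
		if _, err := conn.Write(req); err != nil {
			log.Fatal(err)
		}
		// Block until the full response arrives before sending the
		// next request -- no pipelining.
		if _, err := io.ReadFull(conn, resp); err != nil {
			log.Fatal(err)
		}
	}
}
```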
## Benchmarks
| 1m connections | throughput (tps) | latency |
|---|---|---|
| goroutine-per-conn | 202830 | 4.9s |
| single epoll (both server and client) | 42495 | 23s |
| single epoll server | 42402 | 0.8s |
| multiple epoll server | 197814 | 0.9s |
| prefork | 444415 | 1.5s |
| workerpool | 190022 | 0.3s |
Chinese introduction: