Inspired by "Handling 1M websockets connections in Go".

- 1_simple_tcp_server: a 1m-connections server implemented with a goroutine per connection
- 2_epoll_server: a 1m-connections server implemented with epoll (a simplified sketch of this approach follows this list)
- 3_epoll_server_throughputs: adds throughput and latency tests for 2_epoll_server
- 4_epoll_client: a client implemented with epoll
- 5_multiple_client: uses multiple epoll instances to manage connections in the client
- 6_multiple_server: uses multiple epoll instances to manage connections in the server
- 7_server_prefork: uses the Apache-style prefork model to implement the server
- 8_server_workerpool: uses the Reactor pattern to implement multiple event loops
- 9_few_clients_high_throughputs: a simple goroutine-per-connection server for testing throughput and latency
- 10_io_intensive_epoll_server: an io-bound multiple-epoll server
- 11_io_intensive_goroutine: an io-bound goroutine-per-connection server
- 12_cpu_intensive_epoll_server: a cpu-bound multiple-epoll server
- 13_cpu_intensive_goroutine: a cpu-bound goroutine-per-connection server
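For orientation, below is a heavily simplified sketch of the single-epoll echo-server idea. It is not the code in 2_epoll_server: the port, buffer sizes, echo behavior, and the use of `(*net.TCPConn).File()` to obtain the socket descriptor are assumptions here, and it builds only on Linux with `golang.org/x/sys/unix`.

```go
package main

import (
	"log"
	"net"
	"os"
	"sync"

	"golang.org/x/sys/unix"
)

var (
	mu    sync.Mutex
	conns = map[int]*os.File{} // keep the files referenced so the GC does not close the fds
)

func main() {
	ln, err := net.Listen("tcp", ":8972") // hypothetical port
	if err != nil {
		log.Fatal(err)
	}

	// One epoll instance watches every client socket instead of one goroutine per connection.
	epfd, err := unix.EpollCreate1(0)
	if err != nil {
		log.Fatal(err)
	}

	// Acceptor: register each new connection's descriptor with epoll.
	go func() {
		for {
			conn, err := ln.Accept()
			if err != nil {
				log.Println("accept:", err)
				continue
			}
			f, err := conn.(*net.TCPConn).File() // duplicates the socket fd
			conn.Close()                         // the duplicated fd is enough for this sketch
			if err != nil {
				continue
			}
			fd := int(f.Fd())
			mu.Lock()
			conns[fd] = f
			mu.Unlock()
			ev := unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(fd)}
			if err := unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, fd, &ev); err != nil {
				log.Println("epoll_ctl:", err)
			}
		}
	}()

	// Single event loop: wait for readable sockets and echo whatever they send.
	events := make([]unix.EpollEvent, 128)
	buf := make([]byte, 4096)
	for {
		n, err := unix.EpollWait(epfd, events, -1)
		if err != nil {
			if err == unix.EINTR {
				continue
			}
			log.Fatal(err)
		}
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			nr, err := unix.Read(fd, buf)
			if err != nil || nr == 0 {
				// Peer closed or errored: stop watching and release the descriptor.
				unix.EpollCtl(epfd, unix.EPOLL_CTL_DEL, fd, nil)
				mu.Lock()
				if f := conns[fd]; f != nil {
					f.Close()
					delete(conns, fd)
				}
				mu.Unlock()
				continue
			}
			unix.Write(fd, buf[:nr])
		}
	}
}
```

The point of the pattern is that one event loop (or a handful of them, as in 6_multiple_server) replaces hundreds of thousands of per-connection goroutines and their stacks.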
- two E5-2630 v4 CPUs, 20 cores (40 logical cores) in total
- 32 GB memory
 
Tune the Linux kernel and file-descriptor limits:

    sysctl -w fs.file-max=2000500
    sysctl -w fs.nr_open=2000500
    sysctl -w net.nf_conntrack_max=2000500
    ulimit -n 2000500
    sysctl -w net.ipv4.tcp_tw_recycle=1
    sysctl -w net.ipv4.tcp_tw_reuse=1
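`ulimit -n` raises the open-file limit only for the shell session and the processes it launches. As a minimal sketch (not code from this repository), a Go server can also raise its own soft limit at startup via the standard `syscall` package:

```go
package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	// Raise this process's open-file soft limit up to its hard limit.
	// The hard limit itself is still bounded by fs.nr_open and any
	// limits imposed by the service manager.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		log.Fatal(err)
	}
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("open-file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)
}
```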
The client sends the next request only after it has received the response to the previous one; requests are not pipelined in these tests (a sketch of this measurement loop follows the results table).

| implementation | throughput (tps) | latency |
|---|---|---|
| goroutine-per-conn | 202830 | 4.9s |
| single epoll (both server and client) | 42495 | 23s |
| single epoll server | 42402 | 0.8s |
| multiple epoll server | 197814 | 0.9s |
| prefork | 444415 | 1.5s |
| workerpool | 190022 | 0.3s |
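As a rough illustration of that ping-pong measurement style, a non-pipelined client loop might look like the sketch below; the address, message framing, and request count are hypothetical and not taken from the benchmark code in this repository.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Hypothetical server address; the real benchmarks use their own setup.
	conn, err := net.Dial("tcp", "127.0.0.1:8972")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	r := bufio.NewReader(conn)
	const requests = 10000
	var total time.Duration

	for i := 0; i < requests; i++ {
		start := time.Now()
		// Send one request...
		if _, err := fmt.Fprintf(conn, "ping %d\n", i); err != nil {
			log.Fatal(err)
		}
		// ...and block until the full reply arrives before sending the next one.
		if _, err := r.ReadString('\n'); err != nil {
			log.Fatal(err)
		}
		total += time.Since(start)
	}

	fmt.Printf("avg latency: %v, approx tps: %.0f\n",
		total/requests, float64(requests)/total.Seconds())
}
```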
Chinese introduction: