
Missing speed test

Open • je-so opened this issue on Nov 11, 2014 • 8 comments

I've written a test which compares the number of transferred messages per millisecond to the number of threads. I'm using a similar test for the lock-free iqueue.

The file is located at https://github.com/je-so/testcode/blob/master/chan_speed_test.c. I'd appreciate it if you would integrate it.
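
Roughly, the test does something like the following (simplified sketch, not the actual file; the chan_init/chan_send/chan_recv/chan_dispose calls follow the chan README, while the thread-count argument and buffer size are only illustrative):

```c
/* Sketch: N sender threads each push MSGS messages through one chan_t
 * while the main thread drains them; the result is reported as
 * messages per millisecond. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "chan.h"

#define MSGS 1000000

static chan_t *ch;

static void *sender(void *arg)
{
    (void)arg;
    for (long i = 0; i < MSGS; i++)
        chan_send(ch, (void *)(i + 1));   /* any non-NULL payload */
    return 0;
}

int main(int argc, char *argv[])
{
    int nrthreads = argc > 1 ? atoi(argv[1]) : 1;
    pthread_t thr[nrthreads];
    struct timespec start, end;
    void *msg;

    ch = chan_init(1024);                 /* buffered channel */

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < nrthreads; i++)
        pthread_create(&thr[i], 0, &sender, 0);

    for (long r = 0; r < (long)nrthreads * MSGS; r++)
        chan_recv(ch, &msg);              /* drain every message */

    for (int i = 0; i < nrthreads; i++)
        pthread_join(thr[i], 0);
    clock_gettime(CLOCK_MONOTONIC, &end);

    long ms = (end.tv_sec - start.tv_sec) * 1000
            + (end.tv_nsec - start.tv_nsec) / 1000000;
    printf("chan_t: %d*%d send/recv time in ms: %ld (%ld nr_of_msg/msec)\n",
           nrthreads, MSGS, ms, ms ? (long)nrthreads * MSGS / ms : 0);

    chan_dispose(ch);
    return 0;
}
```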

je-so avatar Nov 11 '14 14:11 je-so

Thanks for this. I'm a little surprised to see the msg/ms so low, especially compared to running a similar benchmark using Go and its channels. My understanding is Go channels are not lock-free, so I would at least expect results to be within an order of magnitude but they appear to be several orders apart.

chan_t (C)

chan_t:  1*1000000 send/recv time in ms: 3149 (317 nr_of_msg/msec)
chan_t:  2*1000000 send/recv time in ms: 14558 (137 nr_of_msg/msec)
chan_t:  4*1000000 send/recv time in ms: 33221 (120 nr_of_msg/msec)
chan_t:  8*1000000 send/recv time in ms: 70295 (113 nr_of_msg/msec)
chan_t: 16*1000000 send/recv time in ms: 141924 (112 nr_of_msg/msec)

chan (Go)

chan:  1*1000000 send/recv time in ms: 54.913379 (18210.498392 nr_of_msg/msec)
chan:  2*1000000 send/recv time in ms: 109.967315 (18187.222267 nr_of_msg/msec)
chan:  4*1000000 send/recv time in ms: 232.021836 (17239.756693 nr_of_msg/msec)
chan:  8*1000000 send/recv time in ms: 447.934368 (17859.759312 nr_of_msg/msec)
chan: 16*1000000 send/recv time in ms: 885.989277 (18058.909307 nr_of_msg/msec)

Obviously it's not a completely equivalent comparison (e.g. goroutines vs pthreads). The Golang team has spent a lot of effort optimizing channels, but I was expecting chan_t to hold up better.

tylertreat avatar Nov 11 '14 16:11 tylertreat

Sorry, but I do not know Go; I've only read about it. My understanding is that goroutines are high-speed user threads (aka fibers or coroutines). And if Go channels synchronize by default, only one slot per client is needed. Also, if all goroutines run in only one thread, then no locks are needed.

So it is possible (if my understanding is right) that running the benchmark comes down to a simple function call: the clients (channel <- id) call into servers which are stored in a waiting list (lock-free because there is only a single thread).

I've rewritten the benchmark to call a simple function instead of going through the queue, and the result is 500000 nr_of_msg/msec for a single thread. 2 threads transfer 1000000 nr_of_msg/msec and 8 threads transfer 2000000 (because of the quad core, it does not scale beyond 4 threads).
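
The rewritten baseline is essentially a loop like the following (simplified sketch; the function name and message count are only illustrative):

```c
/* Baseline: replace the queue transfer with a direct function call so
 * the loop measures little more than call overhead. */
#include <stdio.h>
#include <time.h>

#define MSGS (10 * 1000 * 1000)

static volatile long sink;   /* keeps the call from being optimized away */

__attribute__((noinline)) static void consume(long msg) { sink = msg; }

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < MSGS; i++)
        consume(i);                       /* the "send" is just a call */
    clock_gettime(CLOCK_MONOTONIC, &end);

    long ms = (end.tv_sec - start.tv_sec) * 1000
            + (end.tv_nsec - start.tv_nsec) / 1000000;
    printf("direct call: %d msgs in %ld ms (%ld msg/msec)\n",
           MSGS, ms, ms ? MSGS / ms : 0);
    return 0;
}
```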

And as you can see, the Go benchmark does not scale up with more threads, which suggests that only one system thread executes all 16 goroutines.

je-so avatar Nov 11 '14 17:11 je-so

Yeah, I think you're right. You can tell the Go scheduler to utilize more cores with:

runtime.GOMAXPROCS(runtime.NumCPU())

Doing that with a quad core system yields slightly higher latency, likely because it's no longer on a single thread.

chan:  1*1000000 send/recv time in ms: 88.191041 (11339.020253 nr_of_msg/msec)
chan:  2*1000000 send/recv time in ms: 204.120321 (9798.142538 nr_of_msg/msec)
chan:  4*1000000 send/recv time in ms: 561.135630 (7128.401381 nr_of_msg/msec)
chan:  8*1000000 send/recv time in ms: 1117.243417 (7160.480767 nr_of_msg/msec)
chan: 16*1000000 send/recv time in ms: 2235.774605 (7156.356443 nr_of_msg/msec)

tylertreat avatar Nov 11 '14 18:11 tylertreat

I've written test code to see whether goroutines could be implemented the way we've speculated.

See https://github.com/je-so/testcode/blob/master/gochan.c

I've implemented only the single-threaded case and got more than 11000 msg/msec.

The implementation uses a GCC extension: taking the address of a goto label with &&LABEL and jumping to it with goto *addr.
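
In isolation the extension looks like this (minimal example, not taken from gochan.c):

```c
/* GCC labels-as-values: &&LABEL yields the address of a label and
 * goto *addr jumps to it, which is enough to switch between
 * hand-rolled coroutine resume points. */
#include <stdio.h>

int main(void)
{
    void *resume = &&step1;   /* store the first resume point */

    goto *resume;             /* computed jump */

step1:
    printf("step 1\n");
    resume = &&step2;         /* next time, continue at step2 */
    goto *resume;

step2:
    printf("step 2\n");
    return 0;
}
```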

je-so avatar Nov 12 '14 07:11 je-so

Now the test code (gochan.c) supports system threads. It scales very well:

gochan:   1*30000 send/recv time in ms: 1 (30000 msg/msec)
gochan:  32*30000 send/recv time in ms: 46 (20869 msg/msec)
gochan:  64*30000 send/recv time in ms: 84 (22857 msg/msec)
gochan: 128*30000 send/recv time in ms: 153 (25098 msg/msec)

je-so avatar Nov 12 '14 18:11 je-so

There is a much better test driver in the directory https://github.com/je-so/testcode/tree/master/iperf which scales linearly with the number of cores.

Try it with chan if you want.

je-so avatar Nov 22 '14 20:11 je-so

With the variables padded to the size of one cache line, performance is much better! See https://github.com/je-so/iqueue/blob/master/README.md for some numbers.
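
The idea is simply to give each hot variable its own cache line, roughly like this (illustrative field names, assuming 64-byte cache lines; not the actual iqueue structs):

```c
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

struct counters {
    alignas(CACHE_LINE) uint64_t send_pos;   /* written by producers */
    alignas(CACHE_LINE) uint64_t recv_pos;   /* written by consumers */
};

/* The two hot counters now live on different cache lines, so producers
 * and consumers no longer invalidate each other's line (false sharing). */
_Static_assert(offsetof(struct counters, recv_pos) >= CACHE_LINE,
               "counters must not share a cache line");
```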

je-so avatar Dec 04 '14 14:12 je-so

Impressive, the padding has a pretty remarkable impact.

tylertreat avatar Dec 04 '14 15:12 tylertreat