UDPIFC gets very low TPS in ic_bench
Running ic_bench with default args. TCP result:
+----------------+------------+
| Total time(s)  | 100.000    |
| Loop times     | 48045111   |
| LPS(l/ms)      | 480.451    |
| Recv mbs       | 65430      |
| TPS(mb/s)      | 654.301    |
| Recv counts    | 336315777  |
| Items ops/ms   | 3363.157   |
+----------------+------------+
UDPIFC result:
+----------------+------------+
| Total time(s)  | 100.079    |
| Loop times     | 369104     |
| LPS(l/ms)      | 3.688      |
| Recv mbs       | 502        |
| TPS(mb/s)      | 5.023      |
| Recv counts    | 2583728    |
| Items ops/ms   | 25.817     |
+----------------+------------+
If the relevant parameters are tuned (see the configuration sketch after this list):

- bigger MTU
- bigger buffer size
- bigger queue_depth
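For illustration, a minimal sketch of that tuning, assuming the Greenplum GUCs `gp_max_packet_size`, `gp_interconnect_queue_depth`, and `gp_interconnect_snd_queue_depth` map to the three knobs above (the names and values are assumptions, not the settings used in the measurement, and some of them may need to be set cluster-wide with gpconfig rather than per session; the OS socket buffer, e.g. the Linux `net.core.rmem_max` sysctl, is the usual lever for "bigger buffer size"):

```sql
-- Assumed GUC names for the three knobs above; verify against your version.
SET gp_max_packet_size = 65535;            -- bigger packets (bounded by the NIC MTU)
SET gp_interconnect_queue_depth = 64;      -- bigger receive queue depth
SET gp_interconnect_snd_queue_depth = 64;  -- bigger send queue depth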
then UDPIFC reaches 25 mb/s ~ 35 mb/s TPS, which is still very low compared to TCP. But this does not mean that the UDP implementation is slower than TCP in practice: according to my observation, the TPS that Motion actually requires is not very high. Details of the experiment follow:
// the test cluster has three segments
- CREATE table a(c1 int,c2 int);
- insert into a select generate_series(1,10000000),generate_series(1,10000000);
- \timing on
- select * from a;
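A minimal sketch of turning the motion log on, assuming the `gp_log_interconnect` GUC is what controls this output (the GUC name and value are assumptions; verify them against your build):

```sql
-- Assumption: gp_log_interconnect drives the cdbmotion/interconnect log.
SET gp_log_interconnect = 'verbose';
```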
With the cdbmotion log turned on, running the above SQL yields:
Interconnect seg-1 slice0 received from slice1: 10000000 tuples, 220005918 total bytes, 180004842 tuple bytes, 10000269 chunks. Time: 6353.575 ms (00:06.354)

This time should not include the psql print time.
From that, the UDP sender TPS is:
220005918 / 1024 / 1024 / 3 = 69.937997818 mb per segment
69.937997818 / 6.354 ≈ 11.0069244284 mb/s per segment
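The same arithmetic as a quick sanity check, runnable in any PostgreSQL-compatible shell:

```sql
-- 220005918 total bytes over 3 segments in 6.354 s (from the log above)
SELECT 220005918 / 1024.0 / 1024.0 / 3         AS mb_per_segment,  -- ≈ 69.94
       220005918 / 1024.0 / 1024.0 / 3 / 6.354 AS mb_per_second;   -- ≈ 11.01
```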
In this scenario there may be other constraints such as disk I/O, so the results are for reference only.