
Performance question.

Open • thomas-yuan opened this issue on Nov 17, 2015 • 3 comments

I'm looking for something like UDT; the original implementation has great performance in my tests, and I was very glad to find this project based on Asio. I tried to test the performance to see whether it is comparable to the original implementation. Maybe the comparison isn't entirely fair, but I'll just post what I did.

client/server don't print performance data, but the original UDT has appclient/appserver, which seem similar to client/server. So what I did is run server and appserver on my VPS and run appclient locally. All machines are Ubuntu 15.10 and everything was built with the defaults. Here is the result.

This is the result for appclient => server:

SendRate(Mb/s)  RTT(ms) CWnd    PktSndPeriod(us)    RecvACK RecvNAK
24.1335     99.676  1324    137.306         3   0
90.6163     98.639  2243    136.911         3   0
90.558      97.731  2679    136.537         2   0
91.0242     96.272  2816    135.98          3   0
91.3385     95.342  3082    135.612         2   0
91.688      94.133  3073    135.062         3   0
91.6764     93.274  3961    171         3   2
52.8253     93.17   4247    395         1   10
15.5245     93.17   4247    805         0   9
14.5115     93.17   4247    805         0   0
14.4419     93.17   4247    805         0   0
14.4531     93.17   4247    805         0   0
9.29345     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
14.4409     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
7.73286     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
14.4409     93.17   4247    805         0   0
14.4526     93.17   4247    805         0   0
8.024       93.17   4247    805         0   0
14.4409     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
13.2181     93.17   4247    805         0   0
6.76626     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
13.7887     93.17   4247    805         0   0
9.47973     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
11.0756     93.17   4247    805         0   0
12.6015     93.17   4247    805         0   0
14.453      93.17   4247    805         0   0
14.4417     93.17   4247    805         0   0
9.42143     93.17   4247    805         0   0
14.4409     93.17   4247    805         0   0
14.4644     93.17   4247    805         0   0
14.4293     93.17   4247    805         0   0
8.89791     93.17   4247    805         0   0
14.4411     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0
14.4525     93.17   4247    805         0   0

This is appclient => appserver

SendRate(Mb/s)  RTT(ms) CWnd    PktSndPeriod(us)    RecvACK RecvNAK
34.4614     90.939  2101    1           42  0
312.943     104.957 105764  13          107 17
566.696     98.397  105163  14.5492         298 3
336.159     103.437 108948  30          216 9
515.817     104.418 112849  17.9674         495 10
532.556     93.949  103530  23.4924         445 5
453.818     108.273 115857  14          491 32
553.036     97.286  104946  15.2761         243 9
527.182     96.964  106919  21          325 8
370.735     95.995  105952  20.0181         235 1
572.541     104.304 112683  20          525 11
378.541     95.557  95754   19.3174         184 3
512.555     97.808  106675  15.4918         447 6
557.422     97.282  106693  20.2108         370 9
606.808     98.572  104215  17          496 11
644.919     104.724 108863  16          570 8
536.526     96.48   106222  16.1085         466 7
609.325     96.104  95083   18          511 7
654.918     109.585 118807  14          585 8
512.403     98.308  101933  19.1569         256 6
636.51      106.445 104975  15          591 5
572.702     105.457 114654  18          387 11
319.632     94.019  96389   30.0678         100 5
252.778     92.01   101011  31.8469         137 4
233.593     96.406  98043   23          210 3
431.089     96.021  105243  17.4637         251 4
528.022     101.97  109578  16.2454         330 7
612.842     108.271 117648  13          502 6
427.218     92.732  102354  37          199 9
509.328     96.64   95778   16.8195         390 3
502.551     95.952  105901  19.3726         373 11
616.464     105.089 103671  12.2732         504 7
598.754     108.013 113997  13          435 4
517.938     95.275  98832   21          255 8
465.443     95.221  101348  27          361 7
221.902     92.388  102005  49          67  9
229.576     89.751  95617   53.6216         193 5
478.182     99.121  108034  20          367 4
549.114     101.459 112186  16.175          465 4
405.513     92.486  101180  46          258 10
439.703     92.998  102394  26.4527         404 36
718.899     109.115 117012  9.02486         821 0
653.231     105.689 112807  9.433           581 6
571.427     108.307 114919  18          365 6
556.381     96.216  103107  18.9554         534 3
328.657     96.151  106017  22.0423         165 5
725.314     104.714 115571  9.10074         765 1
776.852     109.384 118583  7.22473         842 1

The original implementation's performance is much better.

And when I ran the same test a second time without restarting the server side, the result was unbelievable.

appclient => server

SendRate(Mb/s)  RTT(ms) CWnd    PktSndPeriod(us)    RecvACK RecvNAK
0.28588     97.857  646 55555.6         2   0
0.244557        95.544  429 47619           3   0
0.291147        93.994  289 41666.7         3   0
0.326084        93.24   224 38461.5         2   0
0.337731        92.351  155 34482.8         3   0
0.384314        91.786  109 31250           3   0
0.419252        91.522  88  29411.8         2   0
0.454189        91.209  65  27027           3   0
0.465835        90.956  50  25000           3   0
0.512433        90.817  43  23809.5         2   0
0.547393        90.661  35  22222.2         3   0
0.5823      90.584  30  20833.3         3   0
0.59394     90.559  28  20000           2   0
0.652193        90.558  25  18867.9         3   0
0.675463        90.519  24  17857.1         3   0
0.687105        90.479  23  17241.4         2   0
0.722044        90.322  22  16393.4         3   0
0.698753        90.254  22  15625           3   0

appclient => appserver

SendRate(Mb/s)  RTT(ms) CWnd    PktSndPeriod(us)    RecvACK RecvNAK
16.768      91.043  975 1           27  0
246.607     92.939  93489   6.99483         73  7
462.314     90.552  99320   12.8825         119 4
381.681     90.52   94368   22.7385         217 10
485.299     90.202  99519   38.5489         302 13
464.906     91.307  100365  15.3132         443 3
419.472     90.379  85311   57          150 81
202.348     89.526  92842   30.2625         133 0
481.028     89.825  89443   26.1014         378 7
649.977     92.387  96653   30          607 24
530.592     92.673  102103  26          415 7
439.689     89.936  95234   28          313 4
498.335     91.492  100308  20          481 8
146.068     91.828  101302  25          18  9
499.547     90.614  90035   22          415 4
507.244     91.391  96509   17          479 6
328.391     91.317  99491   20.1534         140 12

Should I assume the performance will be better if we change appclient to client? (As I said, I can't test this since client doesn't have similar performance logs.) Is it difficult to add performance logs to client?

Actually, I modified client/server to send a file, and the performance is not good either: roughly 1/10 of the original implementation.

Do we have any plans for performance tuning?

thomas-yuan • Nov 17 '15 20:11

Hello,

Thanks for your feedback. It is really appreciated. Improving performance is indeed a priority for this project.

We had to implement a workaround for the timers, since Boost.Asio timers are not reliable with short durations. We are still trying to figure out what causes this performance drop, but it may take some time since we are working on this in our free time. There may be a protocol implementation difference that causes this behaviour. Our library is still young, and the protocol documentation is not as clear as expected on some elements (ACK and light ACK, for example); we had to dig into the original source code to determine some of the behaviour.
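
As an illustration of the timer problem (not the actual workaround used in this library), here is a minimal standalone sketch: asking a Boost.Asio timer to wait for a period comparable to the PktSndPeriod values in the logs above (tens of microseconds) typically overshoots by far more than the requested time, because timer expiry is limited by the scheduler's granularity.

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main() {
  boost::asio::io_service io_service;
  boost::asio::high_resolution_timer timer(io_service);

  // Request a wait comparable to the PktSndPeriod values above (~15 us).
  timer.expires_from_now(std::chrono::microseconds(15));

  auto start = std::chrono::high_resolution_clock::now();
  timer.async_wait([start](const boost::system::error_code&) {
    auto waited = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::high_resolution_clock::now() - start);
    // On a typical Linux box this prints a value far above the 15 us requested.
    std::cout << "requested 15 us, waited " << waited.count() << " us\n";
  });

  io_service.run();
  return 0;
}

Pacing one packet every 10-20 us therefore cannot rely on one timer wait per packet, which is why a workaround is needed in the first place.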

client/server don't have performance data

You can activate some logging on client or server by changing a template parameter on the protocol.

using udt_protocol = ip::udt<connected_protocol::logger::FileLog<1000>>;

This will log the internal protocol variables every second to a file named session_log*.log, and the result can be parsed with a Python script available in the tools directory.
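
A minimal sketch of how that could look in client/server, assuming the usual asio-style member typedefs and that the 1000 parameter is the logging period in milliseconds; check the repository headers for the exact include paths and names.

// Header paths below are assumptions; adjust them to this repository's layout.
#include "udt/ip/udt.h"
#include "udt/connected_protocol/logger/file_log.h"

// Switching the logger template parameter to FileLog<1000> makes the protocol
// dump its internal variables to session_log*.log files (assuming 1000 is the
// period in milliseconds).
using udt_protocol = ip::udt<connected_protocol::logger::FileLog<1000>>;

// The rest of the program keeps using the usual asio-style typedefs, e.g.
// udt_protocol::socket, udt_protocol::acceptor, udt_protocol::endpoint.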

securesocketfunneling • Dec 09 '15 13:12

@securesocketfunneling @thomas-yuan is this still an outstanding issue?

dimitry-ishenko • Mar 04 '17 01:03

Any update on this issue?

peererror • Sep 02 '17 07:09