server: add unix socket capability
DiceDB now supports tcp://, dice://, and unix:// address variants.
This supersedes https://github.com/DiceDB/dice/pull/540
NOTE: The implementation contains a bug: the unix socket file isn't deleted after the server is closed. This will be fixed soon.
cc @lucifercr07
@lucifercr07 I have only rebased. Will fix the tests shortly
@codeasashu thanks for raising this PR. Can you please explain the rationale for adding support for Unix domain sockets? Please let me know if we can have a quick chat about this.
@codeasashu let me know when we can discuss this.
@soumya-codes Actually, before discussing, I wanted to bring in some data proving my hypothesis, with benchmarks for a local cluster variant. However, I see that since the introduction of the async server, the unix sockets have to be reimplemented. Give me one day to sort this out, and let's have a discussion then. What do you say?
@soumya-codes I have some data:
1. Stats
Unix Sockets:
Cmd: memtier_benchmark -S /tmp/dice.sock --clients 30 --threads 10 --ratio=1:1 --data-size 256 --requests=100000
Result:
10 Threads
30 Connections per thread
100000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 136474.64 --- --- 1.09909 0.96700 2.20700 3.11900 40501.28
Gets 136474.64 363.02 136111.61 1.09899 0.96700 2.20700 3.11900 5408.30
Waits 0.00 --- --- --- --- --- --- ---
Totals 272949.27 363.02 136111.61 1.09904 0.96700 2.20700 3.11900 45909.57
TCP sockets:
Cmd: memtier_benchmark -s localhost -p 7379 --clients 30 --threads 10 --ratio=1:1 --data-size 256 --requests=100000
Result:
10 Threads
30 Connections per thread
100000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 152848.85 --- --- 1.03532 0.92700 2.46300 6.94300 45360.62
Gets 152848.85 426.39 152422.46 1.03344 0.92700 2.44700 6.97500 6062.19
Waits 0.00 --- --- --- --- --- --- ---
Totals 305697.70 426.39 152422.46 1.03438 0.92700 2.46300 6.97500 51422.81
2. Latency Distributions
Unix sockets:
Request Latency Distribution
Type <= msec Percent
------------------------------------------------------------------------
SET 0.015 0.000
SET 0.855 5.000
SET 0.871 10.000
SET 0.879 15.000
SET 0.887 20.000
SET 0.903 25.000
SET 0.911 30.000
SET 0.927 35.000
SET 0.935 40.000
SET 0.951 45.000
SET 0.967 50.000
SET 0.975 52.500
SET 0.983 55.000
SET 0.991 57.500
SET 1.007 60.000
SET 1.015 62.500
SET 1.023 65.000
SET 1.039 67.500
SET 1.063 70.000
SET 1.087 72.500
SET 1.143 75.000
SET 1.175 76.250
SET 1.223 77.500
SET 1.263 78.750
SET 1.311 80.000
SET 1.367 81.250
SET 1.407 82.500
SET 1.439 83.750
SET 1.471 85.000
SET 1.503 86.250
SET 1.543 87.500
SET 1.559 88.125
SET 1.575 88.750
SET 1.591 89.375
SET 1.615 90.000
SET 1.631 90.625
SET 1.647 91.250
SET 1.671 91.875
SET 1.687 92.500
SET 1.711 93.125
SET 1.735 93.750
SET 1.751 94.062
SET 1.767 94.375
SET 1.783 94.688
SET 1.799 95.000
SET 1.815 95.312
SET 1.831 95.625
SET 1.847 95.938
SET 1.871 96.250
SET 1.895 96.562
SET 1.919 96.875
SET 1.927 97.031
SET 1.943 97.188
SET 1.959 97.344
SET 1.975 97.500
SET 1.999 97.656
SET 2.015 97.812
SET 2.031 97.969
SET 2.063 98.125
SET 2.079 98.281
SET 2.095 98.438
SET 2.111 98.516
SET 2.127 98.594
SET 2.143 98.672
SET 2.159 98.750
SET 2.175 98.828
SET 2.191 98.906
SET 2.207 98.984
SET 2.223 99.062
SET 2.239 99.141
SET 2.255 99.219
SET 2.271 99.258
SET 2.287 99.297
SET 2.303 99.336
SET 2.303 99.375
SET 2.319 99.414
SET 2.335 99.453
SET 2.351 99.492
SET 2.367 99.531
SET 2.383 99.570
SET 2.415 99.609
SET 2.415 99.629
SET 2.431 99.648
SET 2.447 99.668
SET 2.463 99.688
SET 2.495 99.707
SET 2.511 99.727
SET 2.543 99.746
SET 2.575 99.766
SET 2.623 99.785
SET 2.687 99.805
SET 2.719 99.814
SET 2.751 99.824
SET 2.783 99.834
SET 2.815 99.844
SET 2.847 99.854
SET 2.895 99.863
SET 2.943 99.873
SET 3.007 99.883
SET 3.071 99.893
SET 3.151 99.902
SET 3.183 99.907
SET 3.231 99.912
SET 3.279 99.917
SET 3.327 99.922
SET 3.391 99.927
SET 3.455 99.932
SET 3.519 99.937
SET 3.599 99.941
SET 3.679 99.946
SET 3.759 99.951
SET 3.791 99.954
SET 3.839 99.956
SET 3.887 99.958
SET 3.935 99.961
SET 3.983 99.963
SET 4.031 99.966
SET 4.095 99.968
SET 4.159 99.971
SET 4.255 99.973
SET 4.351 99.976
SET 4.383 99.977
SET 4.447 99.978
SET 4.479 99.979
SET 4.543 99.980
SET 4.607 99.982
SET 4.671 99.983
SET 4.735 99.984
SET 4.799 99.985
SET 4.863 99.987
SET 4.959 99.988
SET 4.991 99.988
SET 5.023 99.989
SET 5.087 99.990
SET 5.151 99.990
SET 5.215 99.991
SET 5.279 99.991
SET 5.343 99.992
SET 5.407 99.993
SET 5.471 99.993
SET 5.567 99.994
SET 5.599 99.994
SET 5.663 99.995
SET 5.695 99.995
SET 5.791 99.995
SET 5.855 99.995
SET 5.951 99.996
SET 5.951 99.996
SET 6.015 99.996
SET 6.143 99.997
SET 6.207 99.997
SET 6.303 99.997
SET 6.399 99.997
SET 6.495 99.997
SET 6.687 99.998
SET 6.879 99.998
SET 7.231 99.998
SET 7.711 99.998
SET 8.159 99.998
SET 8.447 99.998
SET 8.703 99.998
SET 8.895 99.999
SET 9.663 99.999
SET 9.919 99.999
SET 10.111 99.999
SET 10.111 99.999
SET 10.175 99.999
SET 10.239 99.999
SET 10.303 99.999
SET 10.303 99.999
SET 10.367 99.999
SET 10.431 99.999
SET 10.431 99.999
SET 10.431 99.999
SET 10.431 99.999
SET 10.431 99.999
SET 10.495 99.999
SET 10.495 100.000
SET 10.559 100.000
SET 11.263 100.000
SET 11.391 100.000
SET 11.455 100.000
SET 11.519 100.000
SET 11.519 100.000
SET 11.583 100.000
SET 11.583 100.000
SET 11.583 100.000
SET 11.583 100.000
SET 11.583 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.647 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 11.711 100.000
SET 12.927 100.000
SET 12.927 100.000
---
GET 0.015 0.000
GET 0.855 5.000
GET 0.871 10.000
GET 0.879 15.000
GET 0.887 20.000
GET 0.903 25.000
GET 0.911 30.000
GET 0.927 35.000
GET 0.935 40.000
GET 0.951 45.000
GET 0.967 50.000
GET 0.975 52.500
GET 0.983 55.000
GET 0.991 57.500
GET 1.007 60.000
GET 1.015 62.500
GET 1.023 65.000
GET 1.039 67.500
GET 1.063 70.000
GET 1.087 72.500
GET 1.143 75.000
GET 1.175 76.250
GET 1.223 77.500
GET 1.263 78.750
GET 1.311 80.000
GET 1.367 81.250
GET 1.407 82.500
GET 1.439 83.750
GET 1.471 85.000
GET 1.503 86.250
GET 1.543 87.500
GET 1.559 88.125
GET 1.575 88.750
GET 1.591 89.375
GET 1.615 90.000
GET 1.631 90.625
GET 1.647 91.250
GET 1.671 91.875
GET 1.687 92.500
GET 1.711 93.125
GET 1.735 93.750
GET 1.751 94.062
GET 1.767 94.375
GET 1.783 94.688
GET 1.799 95.000
GET 1.815 95.312
GET 1.831 95.625
GET 1.847 95.938
GET 1.871 96.250
GET 1.887 96.562
GET 1.911 96.875
GET 1.927 97.031
GET 1.943 97.188
GET 1.959 97.344
GET 1.975 97.500
GET 1.991 97.656
GET 2.015 97.812
GET 2.031 97.969
GET 2.063 98.125
GET 2.079 98.281
GET 2.095 98.438
GET 2.111 98.516
GET 2.127 98.594
GET 2.127 98.672
GET 2.143 98.750
GET 2.175 98.828
GET 2.191 98.906
GET 2.207 98.984
GET 2.223 99.062
GET 2.239 99.141
GET 2.255 99.219
GET 2.271 99.258
GET 2.287 99.297
GET 2.287 99.336
GET 2.303 99.375
GET 2.319 99.414
GET 2.335 99.453
GET 2.351 99.492
GET 2.367 99.531
GET 2.383 99.570
GET 2.415 99.609
GET 2.415 99.629
GET 2.431 99.648
GET 2.447 99.668
GET 2.463 99.688
GET 2.479 99.707
GET 2.511 99.727
GET 2.543 99.746
GET 2.575 99.766
GET 2.623 99.785
GET 2.687 99.805
GET 2.719 99.814
GET 2.751 99.824
GET 2.767 99.834
GET 2.799 99.844
GET 2.847 99.854
GET 2.879 99.863
GET 2.943 99.873
GET 3.007 99.883
GET 3.055 99.893
GET 3.135 99.902
GET 3.167 99.907
GET 3.215 99.912
GET 3.263 99.917
GET 3.327 99.922
GET 3.391 99.927
GET 3.439 99.932
GET 3.503 99.937
GET 3.583 99.941
GET 3.663 99.946
GET 3.759 99.951
GET 3.791 99.954
GET 3.839 99.956
GET 3.871 99.958
GET 3.919 99.961
GET 3.967 99.963
GET 4.031 99.966
GET 4.095 99.968
GET 4.159 99.971
GET 4.255 99.973
GET 4.319 99.976
GET 4.383 99.977
GET 4.447 99.978
GET 4.479 99.979
GET 4.543 99.980
GET 4.607 99.982
GET 4.671 99.983
GET 4.703 99.984
GET 4.767 99.985
GET 4.863 99.987
GET 4.959 99.988
GET 4.991 99.988
GET 5.023 99.989
GET 5.087 99.990
GET 5.119 99.990
GET 5.183 99.991
GET 5.247 99.991
GET 5.311 99.992
GET 5.375 99.993
GET 5.471 99.993
GET 5.535 99.994
GET 5.599 99.994
GET 5.631 99.995
GET 5.695 99.995
GET 5.759 99.995
GET 5.791 99.995
GET 5.919 99.996
GET 5.951 99.996
GET 5.983 99.996
GET 6.111 99.997
GET 6.207 99.997
GET 6.239 99.997
GET 6.335 99.997
GET 6.399 99.997
GET 6.495 99.998
GET 6.623 99.998
GET 6.815 99.998
GET 7.135 99.998
GET 7.647 99.998
GET 8.063 99.998
GET 8.447 99.998
GET 8.575 99.999
GET 8.703 99.999
GET 8.831 99.999
GET 9.279 99.999
GET 9.919 99.999
GET 10.111 99.999
GET 10.175 99.999
GET 10.175 99.999
GET 10.303 99.999
GET 10.303 99.999
GET 10.367 99.999
GET 10.367 99.999
GET 10.367 99.999
GET 10.431 99.999
GET 10.431 99.999
GET 10.431 99.999
GET 10.495 100.000
GET 10.495 100.000
GET 10.559 100.000
GET 10.687 100.000
GET 11.263 100.000
GET 11.263 100.000
GET 11.327 100.000
GET 11.327 100.000
GET 11.455 100.000
GET 11.455 100.000
GET 11.519 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.583 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.647 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 11.711 100.000
GET 12.927 100.000
GET 12.927 100.000
---
WAIT 0.000 100.000
TCP sockets:
Request Latency Distribution
Type <= msec Percent
------------------------------------------------------------------------
SET 0.015 0.000
SET 0.759 5.000
SET 0.783 10.000
SET 0.807 15.000
SET 0.831 20.000
SET 0.855 25.000
SET 0.871 30.000
SET 0.887 35.000
SET 0.903 40.000
SET 0.919 45.000
SET 0.927 50.000
SET 0.935 52.500
SET 0.943 55.000
SET 0.951 57.500
SET 0.959 60.000
SET 0.967 62.500
SET 0.975 65.000
SET 0.983 67.500
SET 0.991 70.000
SET 1.007 72.500
SET 1.023 75.000
SET 1.031 76.250
SET 1.047 77.500
SET 1.055 78.750
SET 1.079 80.000
SET 1.103 81.250
SET 1.127 82.500
SET 1.167 83.750
SET 1.215 85.000
SET 1.263 86.250
SET 1.311 87.500
SET 1.335 88.125
SET 1.367 88.750
SET 1.399 89.375
SET 1.431 90.000
SET 1.463 90.625
SET 1.495 91.250
SET 1.519 91.875
SET 1.551 92.500
SET 1.583 93.125
SET 1.623 93.750
SET 1.639 94.062
SET 1.663 94.375
SET 1.687 94.688
SET 1.711 95.000
SET 1.735 95.312
SET 1.759 95.625
SET 1.791 95.938
SET 1.815 96.250
SET 1.839 96.562
SET 1.871 96.875
SET 1.879 97.031
SET 1.895 97.188
SET 1.911 97.344
SET 1.935 97.500
SET 1.951 97.656
SET 1.975 97.812
SET 2.007 97.969
SET 2.047 98.125
SET 2.095 98.281
SET 2.143 98.438
SET 2.159 98.516
SET 2.191 98.594
SET 2.223 98.672
SET 2.271 98.750
SET 2.319 98.828
SET 2.383 98.906
SET 2.447 98.984
SET 2.527 99.062
SET 2.623 99.141
SET 2.719 99.219
SET 2.767 99.258
SET 2.831 99.297
SET 2.879 99.336
SET 2.943 99.375
SET 3.007 99.414
SET 3.071 99.453
SET 3.151 99.492
SET 3.247 99.531
SET 3.375 99.570
SET 3.503 99.609
SET 3.583 99.629
SET 3.679 99.648
SET 3.775 99.668
SET 3.871 99.688
SET 3.999 99.707
SET 4.159 99.727
SET 4.319 99.746
SET 4.511 99.766
SET 4.703 99.785
SET 4.927 99.805
SET 5.023 99.814
SET 5.183 99.824
SET 5.343 99.834
SET 5.503 99.844
SET 5.759 99.854
SET 5.983 99.863
SET 6.207 99.873
SET 6.431 99.883
SET 6.687 99.893
SET 7.007 99.902
SET 7.199 99.907
SET 7.359 99.912
SET 7.551 99.917
SET 7.775 99.922
SET 8.127 99.927
SET 8.447 99.932
SET 8.831 99.937
SET 9.279 99.941
SET 9.727 99.946
SET 10.367 99.951
SET 10.751 99.954
SET 11.199 99.956
SET 11.455 99.958
SET 11.903 99.961
SET 12.415 99.963
SET 12.927 99.966
SET 13.567 99.968
SET 14.335 99.971
SET 15.167 99.973
SET 15.999 99.976
SET 16.511 99.977
SET 17.151 99.978
SET 17.919 99.979
SET 19.199 99.980
SET 20.223 99.982
SET 20.991 99.983
SET 22.527 99.984
SET 23.295 99.985
SET 25.087 99.987
SET 27.519 99.988
SET 28.159 99.988
SET 28.671 99.989
SET 30.079 99.990
SET 31.359 99.990
SET 32.639 99.991
SET 34.815 99.991
SET 38.143 99.992
SET 41.727 99.993
SET 45.311 99.993
SET 49.919 99.994
SET 53.503 99.994
SET 55.039 99.995
SET 56.063 99.995
SET 58.367 99.995
SET 59.391 99.995
SET 63.743 99.996
SET 66.559 99.996
SET 68.095 99.996
SET 69.119 99.997
SET 69.631 99.997
SET 70.143 99.997
SET 71.167 99.997
SET 73.215 99.997
SET 74.751 99.998
SET 75.263 99.998
SET 77.823 99.998
SET 78.847 99.998
SET 80.895 99.998
SET 82.943 99.998
SET 83.967 99.998
SET 87.551 99.999
SET 90.111 99.999
SET 91.135 99.999
SET 92.671 99.999
SET 93.695 99.999
SET 94.719 99.999
SET 95.743 99.999
SET 98.303 99.999
SET 101.887 99.999
SET 101.887 99.999
SET 102.399 99.999
SET 102.911 99.999
SET 103.423 99.999
SET 104.447 99.999
SET 106.495 99.999
SET 107.519 99.999
SET 108.031 100.000
SET 109.055 100.000
SET 109.567 100.000
SET 110.079 100.000
SET 110.079 100.000
SET 111.103 100.000
SET 112.639 100.000
SET 113.663 100.000
SET 113.663 100.000
SET 114.175 100.000
SET 114.687 100.000
SET 115.199 100.000
SET 115.711 100.000
SET 116.223 100.000
SET 118.271 100.000
SET 119.807 100.000
SET 119.807 100.000
SET 123.903 100.000
SET 124.927 100.000
SET 124.927 100.000
SET 124.927 100.000
SET 125.439 100.000
SET 138.239 100.000
SET 138.239 100.000
SET 145.407 100.000
SET 145.407 100.000
SET 145.407 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 151.551 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 158.719 100.000
SET 159.743 100.000
SET 159.743 100.000
---
GET 0.023 0.000
GET 0.759 5.000
GET 0.783 10.000
GET 0.807 15.000
GET 0.831 20.000
GET 0.847 25.000
GET 0.871 30.000
GET 0.887 35.000
GET 0.903 40.000
GET 0.919 45.000
GET 0.927 50.000
GET 0.935 52.500
GET 0.943 55.000
GET 0.951 57.500
GET 0.959 60.000
GET 0.967 62.500
GET 0.975 65.000
GET 0.983 67.500
GET 0.991 70.000
GET 1.007 72.500
GET 1.023 75.000
GET 1.031 76.250
GET 1.039 77.500
GET 1.055 78.750
GET 1.071 80.000
GET 1.095 81.250
GET 1.127 82.500
GET 1.167 83.750
GET 1.207 85.000
GET 1.255 86.250
GET 1.303 87.500
GET 1.335 88.125
GET 1.359 88.750
GET 1.391 89.375
GET 1.423 90.000
GET 1.455 90.625
GET 1.487 91.250
GET 1.519 91.875
GET 1.551 92.500
GET 1.583 93.125
GET 1.615 93.750
GET 1.639 94.062
GET 1.655 94.375
GET 1.679 94.688
GET 1.703 95.000
GET 1.727 95.312
GET 1.759 95.625
GET 1.783 95.938
GET 1.807 96.250
GET 1.839 96.562
GET 1.863 96.875
GET 1.879 97.031
GET 1.895 97.188
GET 1.911 97.344
GET 1.927 97.500
GET 1.951 97.656
GET 1.975 97.812
GET 2.007 97.969
GET 2.039 98.125
GET 2.095 98.281
GET 2.143 98.438
GET 2.159 98.516
GET 2.191 98.594
GET 2.223 98.672
GET 2.255 98.750
GET 2.303 98.828
GET 2.367 98.906
GET 2.431 98.984
GET 2.511 99.062
GET 2.607 99.141
GET 2.703 99.219
GET 2.767 99.258
GET 2.815 99.297
GET 2.863 99.336
GET 2.927 99.375
GET 2.991 99.414
GET 3.071 99.453
GET 3.151 99.492
GET 3.247 99.531
GET 3.359 99.570
GET 3.503 99.609
GET 3.567 99.629
GET 3.663 99.648
GET 3.759 99.668
GET 3.871 99.688
GET 3.999 99.707
GET 4.159 99.727
GET 4.287 99.746
GET 4.479 99.766
GET 4.671 99.785
GET 4.895 99.805
GET 5.023 99.814
GET 5.151 99.824
GET 5.311 99.834
GET 5.503 99.844
GET 5.727 99.854
GET 5.951 99.863
GET 6.175 99.873
GET 6.399 99.883
GET 6.687 99.893
GET 7.039 99.902
GET 7.231 99.907
GET 7.391 99.912
GET 7.583 99.917
GET 7.807 99.922
GET 8.095 99.927
GET 8.447 99.932
GET 8.767 99.937
GET 9.279 99.941
GET 9.727 99.946
GET 10.303 99.951
GET 10.751 99.954
GET 11.135 99.956
GET 11.455 99.958
GET 11.903 99.961
GET 12.415 99.963
GET 13.055 99.966
GET 13.631 99.968
GET 14.271 99.971
GET 15.103 99.973
GET 15.935 99.976
GET 16.255 99.977
GET 16.895 99.978
GET 17.407 99.979
GET 18.431 99.980
GET 19.839 99.982
GET 20.479 99.983
GET 22.015 99.984
GET 22.783 99.985
GET 23.935 99.987
GET 25.727 99.988
GET 26.751 99.988
GET 28.159 99.989
GET 29.055 99.990
GET 30.335 99.990
GET 31.743 99.991
GET 33.279 99.991
GET 36.607 99.992
GET 39.679 99.993
GET 42.751 99.993
GET 47.871 99.994
GET 49.663 99.994
GET 52.479 99.995
GET 54.527 99.995
GET 55.295 99.995
GET 57.343 99.995
GET 58.623 99.996
GET 60.159 99.996
GET 65.023 99.996
GET 67.071 99.997
GET 68.607 99.997
GET 69.119 99.997
GET 69.119 99.997
GET 69.631 99.997
GET 70.655 99.998
GET 71.167 99.998
GET 73.215 99.998
GET 75.775 99.998
GET 77.311 99.998
GET 78.847 99.998
GET 81.919 99.998
GET 84.479 99.999
GET 87.551 99.999
GET 89.599 99.999
GET 91.135 99.999
GET 92.671 99.999
GET 93.695 99.999
GET 94.719 99.999
GET 95.231 99.999
GET 96.767 99.999
GET 99.327 99.999
GET 101.887 99.999
GET 101.887 99.999
GET 102.911 99.999
GET 103.423 99.999
GET 104.447 99.999
GET 105.471 99.999
GET 106.495 100.000
GET 106.495 100.000
GET 107.519 100.000
GET 109.055 100.000
GET 109.055 100.000
GET 109.567 100.000
GET 109.567 100.000
GET 113.151 100.000
GET 114.175 100.000
GET 117.759 100.000
GET 121.343 100.000
GET 122.367 100.000
GET 124.415 100.000
GET 124.927 100.000
GET 124.927 100.000
GET 125.439 100.000
GET 125.439 100.000
GET 125.439 100.000
GET 128.511 100.000
GET 128.511 100.000
GET 129.023 100.000
GET 134.143 100.000
GET 136.191 100.000
GET 136.191 100.000
GET 136.191 100.000
GET 136.191 100.000
GET 136.191 100.000
GET 137.215 100.000
GET 137.215 100.000
GET 137.215 100.000
GET 138.239 100.000
GET 138.239 100.000
GET 139.263 100.000
GET 139.263 100.000
GET 139.263 100.000
GET 139.263 100.000
GET 139.263 100.000
GET 150.527 100.000
GET 150.527 100.000
GET 150.527 100.000
GET 150.527 100.000
GET 150.527 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 151.551 100.000
GET 158.719 100.000
GET 158.719 100.000
---
WAIT 0.000 100.000
@soumya-codes A few conclusions from the benchmarks above:
- Unix sockets have fewer misses compared to TCP
- Unix sockets outperform TCP at the higher percentiles
- I have not fully optimised the unix socket path yet, so it can get even better
@soumya-codes For some reason, the TCP tests are only running at 50% capacity compared to the unix socket ones. I missed noting that earlier, but I saw a lot of "Connection error: Connection refused" errors during the TCP tests. Here is some of the output:
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
[RUN #1 50%, 0 secs] 0 threads: 50000 ops, 0 (avg: 319458) ops/sec, 0.00KB/sec (avg: 52.36MB/sec), 0.00 (avg: 0.31) msec latency
10 Threads
10 Connections per thread
1000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 164327.74 --- --- 0.31298 0.28700 0.79100 1.00700 48763.29
Gets 164327.74 0.00 164327.74 0.31325 0.28700 0.77500 0.95900 6401.08
Waits 0.00 --- --- --- --- --- --- ---
Totals 328655.47 0.00 164327.74 0.31311 0.28700 0.78300 0.97500 55164.37
Hence, the unix socket tests were actually running at double the capacity of the TCP ones.
I decided to run again, this time making the unix socket tests run at half the capacity of the TCP ones. I made the numbers smaller since my machine can't handle more right now. Here are the results:
1. Unix socket:
Cmd: memtier_benchmark -S /tmp/dice.sock --clients 5 --threads 10 --ratio=1:1 --data-size 256 --requests=1000
Output:
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 0 secs] 0 threads: 50000 ops, 0 (avg: 285872) ops/sec, 0.00KB/sec (avg: 46.86MB/sec), 0.00 (avg: 0.17) msec latency
10 Threads
5 Connections per thread
1000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 140943.53 --- --- 0.17463 0.15100 0.50300 0.95100 41824.17
Gets 140943.53 0.00 140943.53 0.17552 0.15100 0.51100 0.91100 5490.19
Waits 0.00 --- --- --- --- --- --- ---
Totals 281887.06 0.00 140943.53 0.17508 0.15100 0.50300 0.91100 47314.36
2. TCP
Cmd: memtier_benchmark -s localhost -p 7379 --clients 10 --threads 10 --ratio=1:1 --data-size 256 --requests=1000
Outcome:
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
[RUN #1 50%, 0 secs] 0 threads: 50000 ops, 0 (avg: 319458) ops/sec, 0.00KB/sec (avg: 52.36MB/sec), 0.00 (avg: 0.31) msec latency
10 Threads
10 Connections per thread
1000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 164327.74 --- --- 0.31298 0.28700 0.79100 1.00700 48763.29
Gets 164327.74 0.00 164327.74 0.31325 0.28700 0.77500 0.95900 6401.08
Waits 0.00 --- --- --- --- --- --- ---
Totals 328655.47 0.00 164327.74 0.31311 0.28700 0.78300 0.97500 55164.37
3. Conclusion
The latency of the unix socket benchmark is about half that of the TCP one, as expected. However, if you look at the RUN line of both, they put the same pressure on the server:
TCP:
[RUN #1 50%, 0 secs] 0 threads: 50000 ops, 0 (avg: 319458) ops/sec, 0.00KB/sec (avg: 52.36MB/sec), 0.00 (avg: 0.31) msec latency
Unix Sockets:
[RUN #1 100%, 0 secs] 0 threads: 50000 ops, 0 (avg: 285872) ops/sec, 0.00KB/sec (avg: 46.86MB/sec), 0.00 (avg: 0.17) msec latency
What do you think?
@codeasashu I see you are running 10 clients per thread across 10 threads in the TCP case, and 5 clients per thread across 10 threads in the unix socket case. Can you please compare the latency and throughput numbers under similar load?
I did earlier. However, I am on a Mac and the TCP tests are affected there (only 50% are actually benchmarked), hence I had to bring the numbers down. I will run the same configuration on a Linux machine and share the results soon.
@codeasashu thanks for all the effort. This helps a great deal!
@soumya-codes The following is the benchmark from my Linux machine:
1. Unix sockets
CMD: memtier_benchmark -S /tmp/dice.sock --clients 30 --threads 10 --ratio=1:1 --data-size 256 --requests=100000
Output:
10 Threads
30 Connections per thread
100000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 44804.40 --- --- 3.34145 2.95900 12.92700 23.16700 13296.34
Gets 44804.40 117.40 44687.00 3.33561 2.95900 12.73500 23.16700 1775.00
Waits 0.00 --- --- --- --- --- --- ---
Totals 89608.80 117.40 44687.00 3.33853 2.95900 12.79900 23.16700 15071.34
2. TCP
CMD: memtier_benchmark -s localhost -p 7379 --clients 30 --threads 10 --ratio=1:1 --data-size 256 --requests=100000
Output:
10 Threads
30 Connections per thread
100000 Requests per client
ALL STATS
============================================================================================================================
Type Ops/sec Hits/sec Misses/sec Avg. Latency p50 Latency p99 Latency p99.9 Latency KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets 28489.95 --- --- 5.25245 4.60700 16.89500 25.59900 8454.79
Gets 28489.95 74.65 28415.30 5.24829 4.60700 16.89500 25.59900 1128.67
Waits 0.00 --- --- --- --- --- --- ---
Totals 56979.89 74.65 28415.30 5.25037 4.60700 16.89500 25.59900 9583.47
Machine info
OS: Ubuntu 24.04 LTS
Kernel: Linux 6.8.0-40-generic
CPU: AMD Ryzen 5 1600 Six-Core Processor (x86_64)
Disk: 256GB SSD (Samsung EVO)
GPU: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
I have attached the full latency distributions. PFA: tcp.txt, unix.txt
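As a quick sanity check on the Linux numbers above, the Totals rows of the two tables can be compared directly. A throwaway Go snippet (the figures are copied from the two tables; the variable names are mine):

```go
package main

import "fmt"

func main() {
	// Totals from the Linux benchmarks above:
	// ops/sec and average latency (ms) for unix sockets vs TCP.
	unixOps, tcpOps := 89608.80, 56979.89
	unixLat, tcpLat := 3.33853, 5.25037

	// Unix sockets deliver ~1.57x the TCP throughput here...
	fmt.Printf("throughput: unix is %.2fx TCP\n", unixOps/tcpOps)
	// ...at ~64% of the TCP average latency.
	fmt.Printf("avg latency: unix is %.0f%% of TCP\n", 100*unixLat/tcpLat)
}
```

So on this machine the unix socket path is roughly 1.57x the throughput at about two-thirds the average latency, which matches the direction of the earlier Mac numbers.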
@JyotinderSingh Can you escalate this one?
Yes, will update here soon
As discussed offline, we won't be prioritizing this feature. We truly appreciate the effort you've put into this @codeasashu; however, it does not fully align with the product vision for now.
Closing this PR.
@JyotinderSingh Sure. But I do not understand how a benchmark winner, an additional connection method, can't align with a product built around the same feature set (i.e. real-time). It doesn't break any existing feature, nor does it affect existing benchmarks. It only provides an additional benefit to a customer who wants to run an in-house, local-only instance. It doesn't need any additional maintenance, and it plays along really well with any scaling strategy you have in mind.
This is literally only 10 lines of additional code.
@arpitbbhayani Let me know where I am misaligned. When near-realtime benchmarks matter, everything matters, down to something as simple as the connection strategy.
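To make the "10 lines of additional code" claim concrete, here is a minimal, hypothetical sketch (not DiceDB's actual implementation) of how the tcp://, dice://, and unix:// address variants from this PR could be mapped onto Go's net.Listen. The helper name parseAddr and the dice://-to-TCP mapping are my assumptions for illustration:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseAddr maps the PR's address variants onto a (network, address)
// pair accepted by net.Listen. Hypothetical helper, for illustration.
func parseAddr(addr string) (network, target string, err error) {
	switch {
	case strings.HasPrefix(addr, "unix://"):
		return "unix", strings.TrimPrefix(addr, "unix://"), nil
	case strings.HasPrefix(addr, "tcp://"):
		return "tcp", strings.TrimPrefix(addr, "tcp://"), nil
	case strings.HasPrefix(addr, "dice://"):
		// Assumption: dice:// is TCP with the DiceDB default port semantics.
		return "tcp", strings.TrimPrefix(addr, "dice://"), nil
	default:
		return "", "", fmt.Errorf("unsupported address: %s", addr)
	}
}

func main() {
	network, target, err := parseAddr("unix:///tmp/dice-demo.sock")
	if err != nil {
		panic(err)
	}
	ln, err := net.Listen(network, target)
	if err != nil {
		panic(err)
	}
	// Go's *net.UnixListener unlinks the socket file on Close; a crash
	// or a missing Close leaves a stale file behind — the bug noted in
	// the PR description.
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}
```

Note that a clean Close is what removes the socket file in Go, so the stale-socket bug mentioned in the PR description comes down to making sure the listener is closed on every shutdown path.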
@codeasashu thanks for bringing up some very valid points, and for your persistence and passion to make DiceDB better. We had an extensive discussion within the core team regarding your proposal.
Here is our current thinking. You are more than welcome to counter the same.
Let me try to reason out why deferring the decision to add support for unix sockets may be more appropriate at this point in time:
- While unix sockets may be a winner in terms of performance, from the product perspective we are not sure there are enough people looking for a solution to this very specific use case.
- More importantly, our implementation of handling client connections is still evolving. We are currently seeing a significant bottleneck around TCP read/write system calls, and are evaluating multiple options/implementations (for example async-io, eBPF, zero-copy, etc.) to see how we can overcome it. We expect significant refactoring of the existing code, in terms of both the implementation and the underlying technology being used.
With these points in mind, we are a bit sceptical whether this is the right time to adopt unix sockets. It's not about what it takes to support the technology in the current form of the project; it's more about the tech debt we may incur going forward.
It could be that we are missing some very significant points that you have in mind. It would be really nice if you could join our Thursday weekly meeting and help us understand your points better.
Looking forward to having a face-to-face discussion with you.
@soumya-codes Thanks for clarifying this. I am completely aligned with your points. I too am passionate about adding anything of value that can make DiceDB a better realtime DB.
Looking forward to the Thursday call. In the meantime, I will look out for anything else I can contribute to.