sqld
Enable Shared-Cache Mode for opened databases
According to the documentation, this mode can improve database performance: https://www.sqlite.org/sharedcache.html
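For context (this is an illustration, not sqld's actual code): SQLite enables shared-cache mode either with the `SQLITE_OPEN_SHAREDCACHE` flag to `sqlite3_open_v2()` or with the `cache=shared` URI parameter. A minimal Python sketch showing two connections sharing one page cache:

```python
import sqlite3

# Two connections to the same named in-memory database. With
# cache=shared both use a single shared page cache, so data
# committed through conn_a is visible to conn_b.
conn_a = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)
conn_b = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)

conn_a.execute("CREATE TABLE t (x INTEGER)")
conn_a.execute("INSERT INTO t VALUES (42)")
conn_a.commit()

rows = conn_b.execute("SELECT x FROM t").fetchall()
print(rows)
```

The performance angle is the same idea at the page-cache level: connections to one database within a process avoid redundant copies of cached pages.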
My tests showed an improvement with both 1 client and 100 clients.
For a single client, throughput increased by roughly 16%; for 100 clients, throughput increased by roughly 38% (based on the requests/sec figures below). At the same time, latencies went down.
The results for sqld running locally:
1 client,    no shared cache:  895.48 req/s   avg 0.0011 s   p50 0.0011 s   p99 0.0018 s
1 client,    shared cache:    1042.35 req/s   avg 0.0010 s   p50 0.0009 s   p99 0.0016 s
100 clients, no shared cache: 2716.92 req/s   avg 0.0368 s   p50 0.0368 s   p99 0.0480 s
100 clients, shared cache:    3744.27 req/s   avg 0.0267 s   p50 0.0262 s   p99 0.0377 s
Full hey output for each run:
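As a sanity check, the improvement percentages can be recomputed directly from the requests/sec figures in the hey output:

```python
# Requests/sec from the four hey runs below.
single_no_cache, single_cache = 895.4821, 1042.3534
many_no_cache, many_cache = 2716.9230, 3744.2703

single_gain = (single_cache / single_no_cache - 1) * 100
many_gain = (many_cache / many_no_cache - 1) * 100

print(f"1 client:    +{single_gain:.1f}%")   # +16.4%
print(f"100 clients: +{many_gain:.1f}%")     # +37.8%
```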
Single client results with no shared cache:
╰──> hey -c 1 -z 1m -m POST -d '{ "statements": [{"q" : "SELECT * FROM t"}] }' http://127.0.0.1:8080/
Summary:
Total: 60.0012 secs
Slowest: 0.0221 secs
Fastest: 0.0009 secs
Average: 0.0011 secs
Requests/sec: 895.4821
Response time histogram:
0.001 [1] |
0.003 [53670] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.005 [32] |
0.007 [9] |
0.009 [4] |
0.011 [4] |
0.014 [6] |
0.016 [1] |
0.018 [1] |
0.020 [0] |
0.022 [2] |
Latency distribution:
10% in 0.0009 secs
25% in 0.0010 secs
50% in 0.0011 secs
75% in 0.0012 secs
90% in 0.0013 secs
95% in 0.0014 secs
99% in 0.0018 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0000 secs, 0.0009 secs, 0.0221 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0000 secs
req write: 0.0000 secs, 0.0000 secs, 0.0003 secs
resp wait: 0.0011 secs, 0.0008 secs, 0.0220 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0007 secs
Status code distribution:
[200] 53730 responses
Single client results with shared cache:
╰──> hey -c 1 -z 1m -m POST -d '{ "statements": [{"q" : "SELECT * FROM t"}] }' http://127.0.0.1:8080/
Summary:
Total: 60.0008 secs
Slowest: 0.0187 secs
Fastest: 0.0008 secs
Average: 0.0010 secs
Requests/sec: 1042.3534
Response time histogram:
0.001 [1] |
0.003 [62525] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.004 [13] |
0.006 [1] |
0.008 [0] |
0.010 [0] |
0.012 [0] |
0.013 [1] |
0.015 [0] |
0.017 [0] |
0.019 [1] |
Latency distribution:
10% in 0.0008 secs
25% in 0.0009 secs
50% in 0.0009 secs
75% in 0.0010 secs
90% in 0.0011 secs
95% in 0.0012 secs
99% in 0.0016 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0000 secs, 0.0008 secs, 0.0187 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0000 secs
req write: 0.0000 secs, 0.0000 secs, 0.0004 secs
resp wait: 0.0009 secs, 0.0007 secs, 0.0185 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0006 secs
Status code distribution:
[200] 62542 responses
100 clients with no shared cache:
╰──> hey -c 100 -z 1m -m POST -d '{ "statements": [{"q" : "SELECT * FROM t"}] }' http://127.0.0.1:8080/
Summary:
Total: 60.0359 secs
Slowest: 0.0962 secs
Fastest: 0.0103 secs
Average: 0.0368 secs
Requests/sec: 2716.9230
Response time histogram:
0.010 [1] |
0.019 [14] |
0.027 [2523] |■
0.036 [68122] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.045 [86519] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.053 [5463] |■■■
0.062 [335] |
0.070 [121] |
0.079 [9] |
0.088 [4] |
0.096 [2] |
Latency distribution:
10% in 0.0311 secs
25% in 0.0338 secs
50% in 0.0368 secs
75% in 0.0396 secs
90% in 0.0422 secs
95% in 0.0439 secs
99% in 0.0480 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0000 secs, 0.0103 secs, 0.0962 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0000 secs
req write: 0.0000 secs, 0.0000 secs, 0.0015 secs
resp wait: 0.0367 secs, 0.0074 secs, 0.0961 secs
resp read: 0.0001 secs, 0.0000 secs, 0.0012 secs
Status code distribution:
[200] 163113 responses
100 clients with shared cache:
╰──> hey -c 100 -z 1m -m POST -d '{ "statements": [{"q" : "SELECT * FROM t"}] }' http://127.0.0.1:8080/
Summary:
Total: 60.0218 secs
Slowest: 0.0828 secs
Fastest: 0.0075 secs
Average: 0.0267 secs
Requests/sec: 3744.2703
Response time histogram:
0.008 [1] |
0.015 [11] |
0.023 [24283] |■■■■■■
0.030 [164856] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.038 [33285] |■■■■■■■■
0.045 [1787] |
0.053 [333] |
0.060 [111] |
0.068 [55] |
0.075 [11] |
0.083 [5] |
Latency distribution:
10% in 0.0225 secs
25% in 0.0242 secs
50% in 0.0262 secs
75% in 0.0287 secs
90% in 0.0314 secs
95% in 0.0332 secs
99% in 0.0377 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0000 secs, 0.0075 secs, 0.0828 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0000 secs
req write: 0.0000 secs, 0.0000 secs, 0.0048 secs
resp wait: 0.0266 secs, 0.0058 secs, 0.0827 secs
resp read: 0.0001 secs, 0.0000 secs, 0.0056 secs
Status code distribution:
[200] 224738 responses
I'm about to run benchmarks for writes as well.