libnetwork
[WIP] NetworkDB performance improvements
A CPU profile showed that mRandomNodes was consuming ~30% of the CPU time of the gossip cycle.
Changing the underlying data structure from a []string to a map improved the performance of all the functions that use it.
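For context, here is a minimal sketch of the kind of change involved; the types and method names below are hypothetical illustrations, not the actual libnetwork code:

```go
package main

import "fmt"

// Before: slice-backed node list. Membership checks and deletes
// require a linear scan over all nodes in the network.
type sliceNodes struct {
	nodes []string
}

func (s *sliceNodes) delNode(name string) {
	for i, n := range s.nodes { // O(n) scan on every delete
		if n == name {
			s.nodes = append(s.nodes[:i], s.nodes[i+1:]...)
			return
		}
	}
}

// After: map-backed node set. Insert, lookup, and delete are
// average O(1), with no duplicate scan on add.
type mapNodes struct {
	nodes map[string]struct{}
}

func (m *mapNodes) addNode(name string) {
	m.nodes[name] = struct{}{} // no scan for duplicates needed
}

func (m *mapNodes) delNode(name string) {
	delete(m.nodes, name) // average O(1)
}

func main() {
	m := &mapNodes{nodes: make(map[string]struct{})}
	m.addNode("node-1")
	m.addNode("node-2")
	m.delNode("node-1")
	fmt.Println(len(m.nodes)) // 1
}
```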
Below is a comparison of the benchmarks before and after the change.
Benchmark environment:
goos: darwin
goarch: amd64
AddNetworkNode (90% faster), BenchmarkAddNetworkNode-4:

| metric | old | new | delta |
|---|---|---|---|
| ns/op | 1859 | 181 | -90.26% |
| allocs/op | 1 | 1 | +0.00% |
| bytes/op | 15 | 15 | +0.00% |
DeleteNetworkNode (8% faster), BenchmarkDeleteNetworkNode-4:

| metric | old | new | delta |
|---|---|---|---|
| ns/op | 11.0 | 10.1 | -8.18% |
| allocs/op | 0 | 0 | +0.00% |
| bytes/op | 0 | 0 | +0.00% |
RandomNodes (90% faster, 93% fewer allocations), BenchmarkRandomNodes-4:

| metric | old | new | delta |
|---|---|---|---|
| ns/op | 1830 | 172 | -90.60% |
| allocs/op | 16 | 1 | -93.75% |
| bytes/op | 535 | 48 | -91.03% |
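The RandomNodes numbers suggest the per-call intermediate allocations were mostly eliminated. One plausible way to sample a few peers from a map with a single allocation is to rely on the Go runtime's randomized map iteration start point; this is a hedged sketch with a hypothetical randomNodes helper, not the actual patch:

```go
package networkdb

import (
	"fmt"
	"testing"
)

// randomNodes returns up to k node names, excluding self. The Go
// runtime starts each map iteration at a pseudo-random position, so
// taking the first k entries yields a cheap best-effort random sample,
// with the result slice as the only allocation.
// Hypothetical helper, not the actual libnetwork implementation.
func randomNodes(nodes map[string]struct{}, self string, k int) []string {
	out := make([]string, 0, k)
	for name := range nodes {
		if name == self {
			continue
		}
		out = append(out, name)
		if len(out) == k {
			break
		}
	}
	return out
}

// Mirrors the shape of the benchmarks quoted above; run with:
//   go test -bench=RandomNodes -benchmem
// and compare runs with benchcmp (golang.org/x/tools/cmd/benchcmp).
func BenchmarkRandomNodes(b *testing.B) {
	nodes := make(map[string]struct{}, 128)
	for i := 0; i < 128; i++ {
		nodes[fmt.Sprintf("node-%d", i)] = struct{}{}
	}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = randomNodes(nodes, "node-0", 3)
	}
}
```

Note that the language spec only says map iteration order is unspecified; the runtime's randomization makes this good enough for best-effort gossip peer selection, but it is not a uniform sample.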
Full profile and detail views: (profiling screenshots omitted)
Signed-off-by: Flavio Crisciani [email protected]
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@83862f4). The diff coverage is 87.17%.
Coverage Diff:

| | master | #2046 |
|---|---|---|
| Coverage | ? | 40.05% |
| Files | ? | 138 |
| Lines | ? | 22108 |
| Branches | ? | 0 |
| Hits | ? | 8856 |
| Misses | ? | 11954 |
| Partials | ? | 1298 |
| Impacted Files | Coverage Δ |
|---|---|
| networkdb/delegate.go | 73.97% <100%> (ø) |
| networkdb/networkdb.go | 65.61% <85.71%> (ø) |
| networkdb/cluster.go | 63.75% <85.71%> (ø) |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 83862f4...6ba12b1.
@fcrisciani Will this change potentially solve the limitation on the /24 CIDR block for overlay/VIP networks? I can link the issue for reference if needed, but I didn't want to cross-pollinate unless it's related.
@kcrawley Actually, this is still pretty experimental, and I think I will make other changes before this one is ready. It won't make ingress more scalable, because there the bottleneck is the number of IPVS rules that have to be configured in the containers. For that, there is other work in progress that will reduce the complexity from O(n^2) to O(n).