Improve memory alignment
This runs betteralign to reorder struct fields and reduce padding. Originally proposed by @mrueg in #673.
Is there any improvement after the change? If the structs keep changing, there is no way to track it unless we run validation or something similar in CI.
> Is there any improvement after the change?
In an (idle) OpenShift cluster I couldn't really quantify an improvement at all. I saw higher memory usage with this PR, which I think comes from the difference between v1.3.10 and v1.4; the Go version was also different. I didn't dig much further into why. Latency-wise, there was also no measurable read/write improvement between the two clusters.
I'm going to take a single-node config for a run next time (maybe this Friday), will report if I have found anything meaningful to share.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: tjungblu Once this PR has been reviewed and has the lgtm label, please assign ptabor for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
According to the PR benchmarks, there's an improvement but it seems to be marginal (about 1%). Refer to: https://github.com/etcd-io/bbolt/actions/runs/9834932525?pr=780.
The question is what we would expect from better memory alignment; the structs that are most often in memory are node/page/inode:
```
/home/tjungblu/git/bbolt/node.go:11:11: 16 bytes saved: struct with 88 pointer bytes could be 72
/home/tjungblu/git/bbolt/node.go:603:12: 16 bytes saved: struct with 48 pointer bytes could be 32
/home/tjungblu/git/bbolt/page.go:145:15: 8 bytes saved: struct with 16 pointer bytes could be 8
/home/tjungblu/git/bbolt/tx.go:25:9: 104 bytes saved: struct with 192 pointer bytes could be 88
```
I would expect slightly lower memory usage by etcd, and only marginally better throughput/performance from the improved cache usage. In multi-node etcd I would not expect this to even be measurable, given the network latency between the peers.
I'll run some kube-burner tests today on single-node OpenShift; I just rebased this onto the 1.3.10 release to have a better comparison.
Some preliminary findings from single-node OpenShift. I've been running kube-burner with the api-intensive example, which basically creates a bunch of namespaces, creates some pods, updates status, etc. It adds about 30 MB of data to etcd (which ends up with a 140 MB db size), so not very large at all.
With this alignment improvement I see about 360 MB of resident RAM usage for etcd; without it, only 330 MB. Everything else being equal, surprisingly the gRPC GET requests are more than 2x faster (10 ms vs. 24 ms) with this alignment. That improvement seems too good to be true, though, so it must have a different cause.
Take this with a huge grain of salt; there is just too much stuff running on top of bbolt. It could be that some operator made more watch requests than before, or that some other component created more events.
I guess we have to resort to the boring synthetic tests we have on bbolt itself; maybe one of our bigger consumers has some time to test-drive this with larger bbolt files?
Thanks for following up on that!
A few options for checking whether this improves anything:
- https://www.parca.dev/ for continuous profiling, to see how much time is spent in individual CPU profiles.
- `etcdctl check perf` / `etcdctl check datascale`
- https://github.com/etcd-io/etcd/tree/e7f572914d79f0705b3dc8ca28d9a14b0f854d49/hack/benchmark
- https://github.com/etcd-io/etcd/tree/main/tools/benchmark / https://etcd.io/docs/v3.5/op-guide/performance/
@tjungblu: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-bbolt-test-2-cpu-arm64 | 97dddf58d21611ae61f16c16bd944712616cb48b | link | true | /test pull-bbolt-test-2-cpu-arm64 |
| pull-bbolt-test-4-cpu-arm64 | 97dddf58d21611ae61f16c16bd944712616cb48b | link | true | /test pull-bbolt-test-4-cpu-arm64 |
| pull-bbolt-test-4-cpu-race-arm64 | 97dddf58d21611ae61f16c16bd944712616cb48b | link | true | /test pull-bbolt-test-4-cpu-race-arm64 |
| pull-bbolt-robustness-arm64 | 97dddf58d21611ae61f16c16bd944712616cb48b | link | true | /test pull-bbolt-robustness-arm64 |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
PR needs rebase.