Lift the etcd limit from 8GiB to 100GiB
Based on the performance improvements described in the CNCF blog post below, the recommended etcd storage size limit has been re-evaluated to 100GB instead of 8GB. https://www.cncf.io/blog/2019/05/09/performance-optimization-of-etcd-in-web-scale-data-scenario/
Contributes to issue #588
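For reference, the storage quota is configured in bytes via etcd's `--quota-backend-bytes` flag; a minimal sketch of the GiB-to-bytes arithmetic behind the old and new limits (the flag name is real, the rest is just unit conversion):

```python
# GiB-to-bytes arithmetic behind etcd's --quota-backend-bytes flag.
GIB = 1024 ** 3

old_quota = 8 * GIB    # previous recommended maximum
new_quota = 100 * GIB  # newly evaluated recommended maximum

print(old_quota)   # 8589934592
print(new_quota)   # 107374182400
```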
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: ronaldngounou Once this PR has been reviewed and has the lgtm label, please assign ivanvc for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Lint issues fixed:
content/en/docs/v3.4/faq.md:29:291 MD059/descriptive-link-text
Link text should be descriptive [Context: "[here]"]
(https://github.com/DavidAnson/markdownlint/blob/main/doc/md059.md)
If you're doing this refactoring, I'd like to make it clear to users that the 100GB is a recommended maximum size, and not a hard limit. This would mean different text in a couple of places. I don't know what the actual hard limit is; probably need to look at the boltDB code.
Could you please suggest a wording that we should use in the meantime?
For content/en/blog/2023/how_to_debug_large_db_size_issue.md let's take it out of this PR, and open a separate effort to convert the blog post into an Operations doc.
May I ask whether compaction and defragmentation affect the cluster once it is storing 50GB of data? And after those operations complete, how long do large-scale insert/query operations take?
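For context, the maintenance sequence being asked about would look roughly like this with `etcdctl` (a hedged sketch: `compaction` and `defrag` are real etcdctl commands, but the endpoint and the jq filter for extracting the revision are assumptions to verify against your etcdctl version):

```shell
# Hedged sketch of an etcd maintenance pass; endpoint is illustrative.
ENDPOINT=https://127.0.0.1:2379

# Fetch the current revision (JSON shape assumed; check your etcdctl output).
rev=$(etcdctl --endpoints=$ENDPOINT endpoint status --write-out=json \
      | jq '.[0].Status.header.revision')

# Compact key history up to that revision, then reclaim freed space on disk.
etcdctl --endpoints=$ENDPOINT compaction "$rev"
etcdctl --endpoints=$ENDPOINT defrag
```

Defragmentation blocks reads and writes on the member while it runs, so it is typically performed one member at a time.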