Make `system.backups` table persistent
Closes #43995
Changelog category (leave one):
- New Feature
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Added the possibility to specify an engine (defaulting to the persistent `MergeTree`) for the `system.backups` table.
Documentation entry for user-facing changes
- [x] Documentation is written (mandatory for new features)
This is an automated comment for commit c7beb44f3139cf41510617db81d4e41fcdfaaebe with a description of existing statuses. It's updated for the latest CI run.
❌ Click here to open a full report in a separate page
Successful checks
| Check name | Description | Status |
|---|---|---|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success |
| CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success |
| Compatibility check | Checks that clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker image for servers | The check to build and optionally push the mentioned image to docker hub | ✅ success |
| Docs Check | Builds and tests the documentation | ✅ success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clean environment | ✅ success |
| Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests in square brackets | ✅ success |
| Mergeable Check | Checks if all other necessary checks are successful | ✅ success |
| Performance Comparison | Measure changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| Push to Dockerhub | The check for building and pushing the CI related docker images to docker hub | ✅ success |
| SQLTest | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| SQLancer | Fuzzing tests that detect logical bugs with SQLancer tool | ✅ success |
| Sqllogic | Runs ClickHouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success |
| Style Check | Runs a set of checks to keep the code style clean. If some of tests failed, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |
| Upgrade check | Runs stress tests on server version from last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully startup without any errors, crashes or sanitizer asserts | ✅ success |
| Check name | Description | Status |
|---|---|---|
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ❌ failure |
Could someone please assign the "can be tested" label to this PR?
Looks like some tests still need some adaptation to the new feature.
While I'm working on it, please pay attention to some interesting (and maybe controversial) aspects:
- I isolated the entity responsible for storage maintenance within the `SystemLog` class and encapsulated it in the new `SystemLogStorage` class, in order to create a new instrument that might come in handy in the future. And, of course, I re-used it myself when I added the new functionality (`BackupsStorage`). For now the `SystemLogStorage` class duplicates the relevant code in `SystemLog`; this is subject to refactoring in a separate PR.
- The same applies to the engine configuration routine (the `getEngineDefinitionFromConfig()` function).
- I personally don't much like the `ENGINE` clause parsing in the `getEngineDefinitionFromConfig()` function. I tried not to reinvent the wheel, but I'm sure it can be done better.
- `BackupsStorage` uses the SQL query `ALTER TABLE ... UPDATE`, which I find highly readable. Moreover, this approach was already used in `InterpreterDeleteQuery`. However, I also found it a bit redundant in C++ code, so if you have any ideas about other (elegant, robust, and still readable) approaches to updating the table, please feel free to share.
- As a first approach, I allowed only two engines to be specified for the `system.backups` table, in order to play it safe (e.g. the `*Log` family doesn't allow mutations), but the set can easily be extended. `MergeTree` (the default) makes the table persistent; `Memory` acts pretty much like the former `SystemBackups` engine.
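For illustration, here is a server-config sketch of how such an engine override might look, modeled on the way ClickHouse configures its system log tables. The tag names and the `ORDER BY` key below are assumptions for illustration, not necessarily the syntax this PR introduces:

```xml
<!-- hypothetical sketch: tag names modeled on the system log table config,
     not necessarily the ones introduced by this PR -->
<clickhouse>
    <backups>
        <!-- MergeTree (the persistent default) or Memory -->
        <engine>ENGINE = MergeTree ORDER BY id</engine>
    </backups>
</clickhouse>
```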
The PR is far from "ready to merge" yet.
I'm still figuring out a way to work around the omnipresent "system.backups already exists" error (which occurs because in the previous CI run the table is persistent by default) when launching previous versions of the server binary (e.g. `test_backward_compatibility/test_in_memory_parts_still_read.py::test_in_memory_parts_still_read`).
Some tests keep failing randomly across consecutive CI runs, but it looks like they have nothing to do with my changes. Also, some build jobs seem to have broken recently.
At the same time, I hope I've successfully tackled all the issues I encountered while implementing and testing the new feature.
The main thing is the new configuration mechanism, which keeps backward compatibility by default and allows fine-tuning the required behavior.
Now `system.backups` by default has the same implementation as before; it can be explicitly configured to use one of the currently allowed engines (`SystemBackups`, which is also the default, `Memory`, and `MergeTree`); and, essentially, it can rename an already existing persistent `system.backups` table to `system.backups_N` upon server start if the engine or any of its parameters have changed.
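To see the rename in effect, one could list the backups tables after a restart with a changed engine. This is a hypothetical query assuming the renaming behavior described above:

```sql
-- After the engine (or any of its parameters) changes, the old persistent
-- table should show up as system.backups_N alongside the new system.backups.
SELECT name, engine
FROM system.tables
WHERE database = 'system' AND name LIKE 'backups%';
```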
@vitlibar, what do you think about this PR? If you have any concerns or ideas, please feel free to share.
Is it worth it? Maybe `system.backup_log` is enough?
> Is it worth it? Maybe `system.backup_log` is enough?
@alexey-milovidov The PR is not critical for us, you may close it if needed, we may probably re-open it later.
> Is it worth it? Maybe `system.backup_log` is enough?
BTW, I guess it could be emulated with a materialized view over `system.backup_log`, more or less the same way it is implemented in this PR.
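As a sketch of that emulation: since `system.backup_log` already persists one row per status change, the latest state of each backup can be reconstructed from it (column names as in current ClickHouse; a materialized view could persist the same projection):

```sql
-- Latest status per backup, derived from the persistent backup_log.
SELECT
    id,
    name,
    argMax(status, event_time_microseconds) AS last_status
FROM system.backup_log
GROUP BY id, name;
```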