Compression levels?
What are the compression levels? There isn't any guidance in the help display:
    --set-compression <LEVEL>
            set compression level, 0 equals no compression
And which type of compression is used? zstd?
Thanks for the question! It's zstd, and it currently allows levels between 0 (no compression) and 22. If not set (and the repo is version 2, which is the standard), zstd's default compression level is used, which is 5 IIRC.
I think this should be added to the command help or the docs to close your issue.
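For reference, changing the level looks like this (a minimal sketch; the repository and password are assumed to come from your config file or environment):

```
# raise the zstd level to 10 for this repository; 0 would disable compression
rustic-rs config --set-compression 10
```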
Is there any way to use the negative/"fast" compression levels? Or just 1-22? https://facebook.github.io/zstd/
Thanks for the question! Actually, I didn't dive into zstd and just read somewhere that only the positive levels should be exposed to users.
But yes, negative values also work. In fact, AFAIK the Rust crate uses the Facebook zstd implementation, so all values allowed by the standard also work within rustic. I only changed the value "0" (which zstd usually treats as the default compression) to disable compression entirely.
Ah, and to set negative compression levels, you have to use `rustic-rs config --set-compression=-4` instead of `rustic-rs config --set-compression -4` (the latter bails out, complaining that the option `-4` is not supported).
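Spelled out:

```
# works: '=' attaches the negative number to the option
rustic-rs config --set-compression=-4

# bails: '-4' is parsed as a separate, unknown option
rustic-rs config --set-compression -4
```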
Thanks, yeah, doing it without the `=` didn't work.
Here are my results from some crude testing on a WordPress website (80,561 files / 16,725 directories):
| level | time (s) | faster (x) | size (MB) | smaller (x) |
|---|---|---|---|---|
| 0 | 78 | | 1160 | |
| -2 | 46 | 1.70x | 541 | 2.14x |
| -1 | 45 | 1.73x | 527 | 2.20x |
| 1 | 39 | 2.00x | 467 | 2.49x |
| 3 | 40 | 1.95x | 447 | 2.59x |
| 5 | 65 | 1.20x | 435 | 2.67x |
| 8 | 83 | 0.94x | 428 | 2.71x |
Strange that negative was slower than positive. It seems like the default of 3 is a safe bet: lower levels are barely faster and noticeably larger, while higher levels get drastically slower (level 10+ takes 30+ minutes; I didn't even let it finish) without getting much smaller.
Perhaps some of the info from here could be added to the FAQ/README, and then the issue could be closed?
Actually, this result is not completely unexpected: the time added by higher compression probably exceeds the time saved by having less data to write to the backend.
In fact, the "best" compression level for a fast backup depends a lot on the CPU you are using AND on the achievable write speed of your backend (which in turn may depend on your upload bandwidth if you are using a remote backend).
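As a back-of-envelope illustration (the throughput numbers here are just assumptions for the example, not measurements): at 10 MB/s upload, your 1160 MB of uncompressed data takes ~116 s to transfer and level 8's 428 MB only ~43 s; but if level 8 compresses at, say, 5 MB/s on your CPU, the compression alone costs ~230 s, far more than the ~73 s of transfer time it saves. A faster CPU or a slower backend shifts the balance toward higher levels, and vice versa.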
I agree, we should leave this issue open until the documentation is more precise.
Yeah, you're absolutely correct - I didn't think it through: the time was obviously also dominated by the transfer speed (in this case, primarily the network speed, since I was using rclone with OneDrive).
Obviously a local repo would produce completely different (and much faster) results. I'll run some quick tests just for comparison's sake and share them here.
Here are the same stats when backing up locally. As expected, uncompressed is the fastest, but only barely - evidently the read/write time offsets the CPU savings. Negative levels make no speed difference while producing much larger sizes. Again, the default of 3 seems like a nice sweet spot (hence why it's the default...). YMMV
| level | time (s) | time (x) | size (MB) | size (x) |
|---|---|---|---|---|
| 0 | 25 | | 1160 | |
| -2 | 27 | 2.89x | 537.4 | 2.16x |
| -1 | 27 | 2.89x | 523.1 | 2.22x |
| 1 | 27 | 2.89x | 464.3 | 2.50x |
| 3 | 29 | 2.69x | 445.1 | 2.61x |
| 5 | 55 | 1.42x | 434.4 | 2.67x |
| 8 | 79 | 0.99x | 426.3 | 2.72x |