sanoid
Add Zstandard Compression Options
Newer versions of zstd provide very useful options for optimising compression:
- `--adapt` — automatically adjusts the compression level on the fly, depending on the available network speed. A slower network means zstd will compress harder, and vice versa.
- `--long` — allows zstd to use more memory, so it will be able to find longer-range redundancies.
- `-T0` — enables multi-threading, using all available CPU cores.
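As a quick sanity check, the three options can be combined in a single round trip. This is just a sketch, assuming zstd >= 1.3.6 (where `--adapt` and `--long` landed) is on the PATH; the `/tmp` paths are arbitrary scratch locations:

```shell
# Generate a small sample to compress.
head -c 1048576 /dev/urandom > /tmp/sample.bin

# --adapt: vary level with backpressure; --long: wider match window; -T0: all cores.
zstd --adapt --long -T0 -c /tmp/sample.bin > /tmp/sample.zst

# Decompress with the same decomargs syncoid uses (-dc).
zstd -dc /tmp/sample.zst > /tmp/sample.out

# The round trip should be lossless.
cmp /tmp/sample.bin /tmp/sample.out && echo "round-trip OK"
```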
More info about these and other options here: https://code.fb.com/core-data/zstandard/
I've simply added this to syncoid's `COMPRESS_ARGS` for my needs:
```perl
'zstd-adapt-long' => {
    rawcmd      => '/usr/bin/zstd',
    args        => '--adapt --long -T0',
    decomrawcmd => '/usr/bin/zstd',
    decomargs   => '-dc',
},
```
I haven't opened a PR, because it might be better to allow the individual options to be toggled. But that looks like it could be quite a major change.
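For reference, with an entry like that added to syncoid's `COMPRESS_ARGS`, it would be selected the same way as the built-in compression methods, via `--compress`. The dataset names here are placeholders:

```shell
syncoid --compress=zstd-adapt-long tank/source remotehost:tank/backup
```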
I note that the manpage says of `--adapt`:

> … can remain stuck at low speed when combined with multiple worker threads (>=2).
Yeah, the docs do say that, but in my experience it does a pretty good job of maxing out the compression. And at worst, it sits at a low level that still provides some amount of compression.
You can use the following command to test the adaptive compression level of zstd (the value in the parentheses at the beginning of the line). You can adjust the rate limit (`-L 10m`) in the `pv` command to change how much buffering is going on, or test with a more compressible stream than `/dev/urandom`:

```shell
cat /dev/urandom | zstd -c --adapt -v -T0 | pv -L 10m -q > /dev/null
```
`--adapt` can also take min and max compression levels if we want to specify a window of levels other than the default, e.g. `--adapt=min=1,max=22`.
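A minimal sketch of the bounded form, without the `pv` throttling (so the observed level reflects local disk/CPU speed rather than a simulated slow link). Note that levels above 19 additionally require `--ultra`, so this example caps at 19:

```shell
# Compress a sample with --adapt constrained to levels 1..19.
# With -v, zstd prints the currently selected level during compression.
head -c 1048576 /dev/urandom > /tmp/bounded.bin
zstd -c -v -T0 --adapt=min=1,max=19 /tmp/bounded.bin > /tmp/bounded.zst

# Verify the archive is intact.
zstd -t /tmp/bounded.zst && echo "archive OK"
```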