IIRC there was a different approach using socat/netcat or similar; check the docs / issue tracker.
Compaction copies valid entries from old source segment files to new target segment files, skipping over the logically deleted entries. I had a look at the code and `complete_xfer` is...
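For illustration, a minimal sketch of that copy-over scheme, with made-up `Entry`/segment types (not borg's actual compaction or `complete_xfer` code):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    key: bytes
    data: bytes
    deleted: bool  # logically deleted (e.g. superseded or pruned)

def compact(source_segments, target):
    """Copy valid entries from old source segments into a new target
    segment, skipping logically deleted entries. A sketch with
    hypothetical types, not borg's real compaction code."""
    for segment in source_segments:
        for entry in segment:
            if entry.deleted:
                continue  # logically deleted: do not carry it over
            target.append(entry)  # transfer valid entry to target segment

# usage: two source "segments", one of them holding a deleted entry
seg1 = [Entry(b"k1", b"d1", False), Entry(b"k2", b"d2", True)]
seg2 = [Entry(b"k3", b"d3", False)]
new_segment = []
compact([seg1, seg2], new_segment)
print([e.key for e in new_segment])  # [b'k1', b'k3']
```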
For cases where `borg compact` could take rather long (e.g. after deleting lots of archives in one `borg delete`/`prune` command), it might be a good idea to slowly, step by step, lower...
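A hedged sketch of what that stepping could look like, assuming the quantity being lowered is `borg compact`'s `--threshold` (the minimum percentage of saved space a segment must offer to get compacted); each run then only does a bounded amount of segment rewriting:

```python
import subprocess

# Assumption: lowering the --threshold step by step spreads the
# compaction work over several shorter runs instead of one long one.
for threshold in (50, 40, 30, 20, 10):
    subprocess.run(
        ["borg", "compact", "--threshold", str(threshold), "/path/to/repo"],
        check=True,
    )
```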
Did a quick test running a `borg create` against a `ssh:` remote repo; it now behaves as expected:
- no "broken pipe"
- successfully writes a checkpoint
There are some other places with subprocesses:
- `--content-from-command`
- `--paths-from-command`
- `import-tar` / `export-tar` (for the (de)compression filter process)

Guess we do not want these killed by ctrl-c either....
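A minimal sketch of one way to shield such a filter subprocess from a terminal-delivered SIGINT, assuming we start it in a new session (`start_filter` and the gzip round trip are just for illustration, not borg's actual code):

```python
import gzip
import subprocess

def start_filter(cmd):
    """Start a filter subprocess in its own session so a ctrl-c
    (SIGINT sent to the foreground process group) does not hit the
    child directly; the parent can then drain the pipes and decide
    itself when to terminate the child."""
    return subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        start_new_session=True,  # detach from the terminal's process group
    )

proc = start_filter(["gzip", "-d"])
out, _ = proc.communicate(gzip.compress(b"hello"))
print(out)  # b'hello'
```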
- we already store the full commandline
- reading stuff from files can result in a lot of data (could have 1 line per file * millions of files).
Did you try `borg config`?
About a default `additional_free_space`: how would you compute the default value to fit most future scenarios?
Right, only local `borg config`. But I think there's also some temporary free space needed for the repo index, hints, etc. - and the size of these depends on the size...
I had a quick look and yes, I think an option `--additional-free-space=X` similar to `--storage-quota=Y` would make sense.
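A rough sketch of how such an option could be wired up, with a hypothetical `parse_size` helper (borg's actual quota/size parsing may differ):

```python
import argparse

def parse_size(text):
    # Hypothetical size parser, similar in spirit to how
    # --storage-quota values are written (e.g. "2G").
    units = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
    suffix = text[-1].upper()
    if suffix in units:
        return int(float(text[:-1]) * units[suffix])
    return int(text)

parser = argparse.ArgumentParser()
parser.add_argument(
    "--additional-free-space",
    type=parse_size,
    default=0,
    help="keep at least this much space free in the repository, e.g. 2G",
)
args = parser.parse_args(["--additional-free-space=2G"])
print(args.additional_free_space)  # 2147483648
```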