gilbertchen
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/roadmap-for-the-cli/2709/1
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/possibility-of-a-hybrid-chunker/6991/2
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/how-to-add-webdav-storage-with-http-only-in-web-edition/7227/26
Prior to version 1.2 you could set the compression level (using the standard zlib numbers 0-9 or -1) when initializing the storage. However, after version 1.2 I decided to switch...
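As an illustration of that zlib level scale (this is not Duplicacy's own code), Python's `zlib` module uses the same 0-9 / -1 convention, where 0 stores data uncompressed, 9 compresses hardest, and -1 requests the library default:

```python
import zlib

data = b"example payload " * 1024

# Level 0 stores the data uncompressed (stored blocks add a little
# framing, so the output is slightly larger than the input); higher
# levels trade CPU time for a smaller result; -1 is the zlib default.
for level in (0, 1, 6, 9, -1):
    compressed = zlib.compress(data, level)
    print(level, len(compressed))

assert len(zlib.compress(data, 0)) > len(data)
assert len(zlib.compress(data, 9)) <= len(zlib.compress(data, 1))
```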
Currently Duplicacy doesn't support OneDrive for Business. It is already on my radar, but it may take a while for me to get to it... Thank you for reporting...
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/feature-request-periodically-write-the-list-of-verified-chunks/8014/6
The memory usage is roughly proportional to the number of files, not the total size of the directory. How many files are there, and how large is the physical memory?
413K files is not a lot. Is it possible to track the memory usage while the backup is running?
You can run `top` while the backup is running.
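If you prefer to log the numbers rather than watch `top` interactively, a minimal sketch like the following polls a process's resident set size. It assumes Linux (it reads `/proc/<pid>/status`); the PID would come from something like `pgrep duplicacy`:

```python
import os
import re
import sys
import time

def rss_kb(pid: int) -> int:
    """Return a process's resident set size in kB (Linux /proc only)."""
    with open(f"/proc/{pid}/status") as f:
        match = re.search(r"VmRSS:\s+(\d+)\s+kB", f.read())
    return int(match.group(1)) if match else 0

if __name__ == "__main__":
    # Pass the backup process's PID as the first argument; default to
    # this process's own PID just for demonstration.
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    for _ in range(3):
        print(rss_kb(pid), "kB")
        time.sleep(1)
```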
Yes, the current implementation maintains two file lists (the list of local files and the list of files stored in the previous backup) and then compares these two lists...
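A minimal sketch of that comparison (the function and names are illustrative, not Duplicacy's actual code): with both lists sorted by path, a single merge pass classifies each path as added, deleted, or present in both. Note that both lists live in memory at once, which is why usage scales with file count rather than total data size.

```python
def diff_file_lists(local, remote):
    """Merge-compare two sorted lists of file paths.

    Returns (added, deleted, common): paths only in the local list,
    paths only in the remote (previous backup) list, and paths in both.
    """
    added, deleted, common = [], [], []
    i = j = 0
    while i < len(local) and j < len(remote):
        if local[i] == remote[j]:
            common.append(local[i]); i += 1; j += 1
        elif local[i] < remote[j]:
            # Present locally but not in the previous backup: a new file.
            added.append(local[i]); i += 1
        else:
            # Present in the previous backup but gone locally: deleted.
            deleted.append(remote[j]); j += 1
    added.extend(local[i:])
    deleted.extend(remote[j:])
    return added, deleted, common

print(diff_file_lists(["a", "b", "d"], ["b", "c", "d"]))
# → (['a'], ['c'], ['b', 'd'])
```

Files in `common` would then need a further check (timestamps, sizes, or hashes) to decide whether they changed since the last backup.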