User-configurable number of retries for the backup command
I've been trying to upload a roughly 600 GB repository to gdrive for the last week, to no avail. Every time I log back into my system to check the status, the duplicacy backup command has failed with an i/o timeout exception. Yes, my internet doesn't have 100% uptime; it can and does have a few minutes of downtime here and there throughout the day.
I have to say: while the underlying architecture may be solid (at least on paper; I have yet to fully test it), the command line is lacking very basic options/features.
A basic user-configurable retry mechanism with exponential backoff was definitely expected. I saw a couple of issues on duplicacy.com where you say you've set it to 8 retries and so on in the latest version. Why should it be hardcoded?
```
Uploaded chunk 29753 size 3259433, 1.88MB/s 1 day 16:31:57 34.3%
Uploaded chunk 29739 size 8811587, 1.88MB/s 1 day 16:31:50 34.3%
Failed to upload the chunk 5f21daa9b13a03c3faa8cc6ea2169e117b4c355c8671cdcf3613fe31d696bdf6: Post https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id&uploadType=multipart: dial tcp: lookup www.googleapis.com on 127.0.0.53:53: read udp 127.0.0.1:57845->127.0.0.53:53: i/o timeout
Incomplete snapshot saved to /var/duplicacy/repository/.duplicacy/incomplete
```
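For illustration, here is a minimal sketch of the kind of user-configurable exponential-backoff wrapper I have in mind, written in Go since that is what duplicacy is implemented in. The `retryWithBackoff` function and the simulated upload are hypothetical, not duplicacy's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op up to maxRetries times, doubling the
// delay after each failure and adding jitter so parallel uploaders
// don't all retry at the same instant. Hypothetical sketch only.
func retryWithBackoff(maxRetries int, baseDelay time.Duration, op func() error) error {
	var err error
	delay := baseDelay
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		if attempt == maxRetries {
			break
		}
		// Sleep for the current delay plus up to 50% jitter, then double it.
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt+1, err, delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d retries: %w", maxRetries, err)
}

func main() {
	// Simulated flaky upload that fails the first few times,
	// the way a brief connectivity drop would.
	calls := 0
	upload := func() error {
		calls++
		if calls < 3 {
			return errors.New("i/o timeout")
		}
		return nil
	}
	if err := retryWithBackoff(8, time.Second, upload); err != nil {
		fmt.Println(err)
	}
}
```

With something like this, the hardcoded 8 would just become the default value of a flag.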
There is a discussion on improving the retry mechanism in the Google Drive backend: https://github.com/gilbertchen/duplicacy/issues/144
I think that, in general, a global retry option for the backup command would be useful. It would be a catch-all for any type of storage provider.
I do notice a lot of issue requests here for retries for all the different types of storage providers, and I believe those could be useful too for very transient, provider-specific errors, but a global one would help with all providers (even when using a local folder as a backup destination that is actually a mapped drive to another networked computer over an unreliable network).
Something like:
duplicacy backup -retries 10 -retry-wait 10s
Upon an error, it would wait the retry-wait time span and restart the backup, maybe even from the beginning like it does if you manually relaunch it from the UI (of course skipping forward to where it last had a verified chunk upload).
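To make this concrete, here is a rough sketch of how such a restart loop could be driven by `-retries` and `-retry-wait` flags, again in Go. `runBackup` is a hypothetical stand-in for a single backup pass; a real implementation would consult the incomplete snapshot so already-verified chunks are skipped on each restart:

```go
package main

import (
	"errors"
	"flag"
	"fmt"
	"time"
)

var (
	retries   = flag.Int("retries", 10, "number of times to restart the backup after an error")
	retryWait = flag.Duration("retry-wait", 10*time.Second, "how long to wait before restarting")
)

// runBackup stands in for one backup pass. A real implementation
// would resume from the saved incomplete snapshot rather than
// re-uploading chunks that were already verified.
func runBackup() error {
	// ... upload chunks, return nil on success ...
	return errors.New("i/o timeout")
}

func main() {
	flag.Parse()
	for attempt := 0; ; attempt++ {
		err := runBackup()
		if err == nil {
			fmt.Println("backup completed")
			return
		}
		if attempt >= *retries {
			fmt.Printf("backup failed after %d restarts: %v\n", *retries, err)
			return
		}
		fmt.Printf("backup failed (%v); restarting in %v\n", err, *retryWait)
		time.Sleep(*retryWait)
	}
}
```

A fixed wait is the simplest behavior for a global option like this; the exponential backoff discussed above could still apply per-request inside each storage backend.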