Borg takes more than 24 hours for a backup that used to take only 1 or 2 hours
Have you checked borgbackup docs, FAQ, and open Github issues?
Yes
Is this a BUG / ISSUE report or a QUESTION?
Question/Issue
System information. For client/server mode post info for both machines.
Ubuntu server installed on the computer taking backups via CIFS mount from QNAP NAS
Your borg version (borg -V).
borg 1.1.15
Operating system (distribution) and version.
Ubuntu server 20.04.5 LTS
Hardware / network configuration, and filesystems used.
Lenovo M93p/1GB LAN, EXT4
How much data is handled by borg?
About 6TB, but there are duplicates, so the borg repository stores only about 3.5TB.
Full borg commandline that lead to the problem (leave away excludes and passwords)
borg create --list --stats /backup::hw_design-{now:%d-%m-%Y_%H:%M:%S} /media/mred/
Describe the problem you're observing.
Borg had been running perfectly since day one. A backup normally took an hour, sometimes 2 or 3, which was excellent given that it is 6TB of data. After including another folder in the /media/mred directory from the NAS via a CIFS mount, my backups take more than 24 hours. The additional directory holds only 160MB of data, which should be nothing. I don't know how to approach this problem, hence I'm asking for help.
Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.
Include any warning/errors/backtraces from the system logs
Strange effect.
You could try:
- enable more logging so you can maybe see where/why it gets slow: add `--info --list --filter=AME` and check whether files have the status you expect (only really modified files should show up as `M`; see also the FAQ about this and about the files cache)
- use a more recent borg release (e.g. from the github releases page or the borgbackup ppa)
- does it get faster again if you remove the additional directory? How many files are in that additional directory?
- how long does it take if you only back up that additional directory?
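Applied to the command from this report, the logging suggestion above would look roughly like this (a sketch; the repo path and archive name are taken from the original `borg create` invocation):

```shell
# List only Added/Modified/Error files while backing up, so you can
# see which files borg considers changed and where it slows down.
borg create --info --list --filter=AME --stats \
    /backup::hw_design-{now:%d-%m-%Y_%H:%M:%S} \
    /media/mred/
```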
Hi Thomas, Thanks for the reply. Yep it is strange as you said.
- The new folder doesn't have many files at all -> 142 files in 61 folders.
- I haven't tried removing the directory but will certainly do so to test the effect. Also, watching borg as it runs, many (and I mean many) files in folders that haven't been touched for years show M in front of them, and many show "d" in front of them.
- I have read the FAQ and will have to do some testing, but since I'm backing up a very large number of files it will be a nightmare. Thanks for your help.
M means that borg thinks the file is modified. This slows it down significantly as it will read and chunk the file again.
d is just for directory, nothing special here.
Likely you have some problem with the files cache and borg is not running as fast as it usually could (likely not a problem inside borg, but rather something outside).
E.g. that could help:
https://borgbackup.readthedocs.io/en/1.2.2/faq.html#why-is-backup-slow-for-me (please note that I linked to rather recent docs, because they explain some stuff better. you can select your borg version on the lower right of that page to get docs exactly matching your version.)
It also would make it easier to debug if you used a bit more recent borg version, e.g. that very helpful --debug-topic=files_cache option is not yet present in 1.1.15. The ubuntu borgbackup maintainer's ppa has borg 1.2.x or you could use the binary from github releases page.
OK, I think I know what happened. Originally I had my files/directories mounted under /media/mred from a QNAP NAS as one directory with many subdirectories. Because I needed to add a separate directory from the NAS and wanted everything under one directory, I now mount 2 directories (1 additional) from the NAS onto the borg machine under the one folder /media/mred. I believe this no longer matches borg's cache, and there is a TTL, which means it takes 20 backups until everything in that borg cache is replaced. Am I correct? Thanks
it will be one time slow and then much faster again.
Well, it wasn't only one time: this has been happening for a week now. I must say one backup finished in 10 hours, but all the rest took more than 24 hours.
All the untouched files show M in front of them, and I know they are untouched because they have been archived in those folders for a long time.
Hmm, changing all the absolute paths will create cache misses. But once a file has been read/chunked/hashed at the new position, it will have a new files cache entry with the new absolute path.
But there are other influences, maybe you have multiple problems:
- the absolute path must match
- size, ctime, and inode must match (the default; this can be changed with `--files-cache=size,ctime`, for example, if your inode numbers are not stable)
- if something touches your ctimes and you cannot stop that, you can also use mtime (but that is less safe; there is a reason ctime is the default)
Using a new borg with --debug-topic=files_cache would show the problem pretty clearly.
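As a small illustration (this is plain `stat`, not borg itself): the three fields the files cache compares by default can be printed for any file, and a remount that changes inode numbers or ctimes would show up as different values for otherwise untouched files:

```shell
# Print the fields borg's files cache checks by default:
# size, ctime (seconds since epoch), and inode number.
f=$(mktemp)
stat -c 'size=%s ctime=%Z inode=%i' "$f"   # GNU stat, as on Ubuntu
rm -f "$f"
```

Comparing this output for the same file before and after remounting would confirm whether CIFS is handing out unstable inode numbers.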
I haven't touched any settings, ever. All I've done is remount the drives into the directory using CIFS, a pretty straightforward process that took me about 5 minutes all up. I haven't touched the borg settings or command at all.
Using a new borg with --debug-topic=files_cache would show the problem pretty clearly.
Do you mean I should add these options while running the backup?
Thomas,
I have tried running the borg backup with --info --list --filter=AME and also with --debug-topic=files_cache, and could not see any output different from what I saw with my normal command. Also, all files had M in front of them.
Any suggestions? Thank you
For --debug-topic=files_cache you need a newer borg than 1.1.15.
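With a borg 1.2.x binary, the diagnostic run suggested above could look like this (a sketch reusing the paths from the original command):

```shell
# --debug-topic=files_cache logs, for each file, why it hit or
# missed the files cache, e.g. a path, size, ctime or inode change.
borg create --debug-topic=files_cache --list --filter=AME \
    /backup::hw_design-{now:%d-%m-%Y_%H:%M:%S} \
    /media/mred/
```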
Just to let you know: I haven't touched anything, and the last backup took "only" 13 hours instead of 24+. What's the best way to upgrade borg on Ubuntu 20.04? Thanks
The ubuntu borgbackup maintainer's ppa has borg 1.2.x or you could use the binary from github releases page.
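On Ubuntu 20.04 the PPA route could look like the following (the PPA name `ppa:costamagnagianfranco/borgbackup` is my assumption for the maintainer's archive; verify it in the borgbackup installation docs before adding it):

```shell
# Add the (assumed) borgbackup maintainer's PPA and upgrade borg.
sudo add-apt-repository ppa:costamagnagianfranco/borgbackup
sudo apt update
sudo apt install borgbackup
borg -V    # should now report a 1.2.x version
```

Alternatively, the standalone binary from the github releases page needs no packaging at all: download it, `chmod +x`, and put it on your PATH.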
A question: are you calling borg many times with different directories on the same repo? If so, how many calls per cycle? Or are you backing up all directories in a single call?
@djxpace does it work quicker now?
No response, so I guess it was solved.