"Remotely" show/monitor status of running backup
Why?
Especially when running large backups one might want to query the status of a backup while it is running.
One should be able to get the status and remaining time of a running backup without interrupting it.
How?
I've titled this "remotely" because when your backup runs e.g. in a cron job, you can't or don't want to use -p, as you usually redirect its stdout/stderr output into a log file.
That's why I'd propose integrating a way to run a borg command (borg check status??) that shows the status of (all) other backups currently running on the machine. Alternatively, the repo could also be given, so that it is used to get the status of the running backups (maybe by still connecting to the running process locally)…
So the usage should be:
- Backup is running.
- User starts a new terminal, enters the borg command and gets the status of the backup… (either keep updating the status, like tail -f, or just print the result once and exit, like the plain tail)
What?
In a simple way it could look like pv and show:
- what is running (archive name)
- progress bar and/or percentage
- "speed"/data rate (byte/sec)
- amount of time spent
- data processed
- (maybe) data to process or whole backup size
- estimated time until end (probably the most important stat)
- maybe some borg-specifc things like chunks
- currently processed file (like
-p)
Signal the process via SIGINFO/SIGUSR1.
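For illustration, this is roughly the pattern behind such signal-driven status output (a generic shell sketch, not borg's actual code). The important detail is that the handler runs inside the signalled process, so the status line goes to that process's own stderr, not to the terminal that sent the signal:
#!/bin/sh
# Generic sketch: a long-running job that reports a status line on its
# own stderr whenever it receives SIGUSR1.
count=0
trap 'echo "status: processed $count items so far" >&2' USR1
while :; do
    count=$((count + 1))
    sleep 1
done
From another terminal, kill -s USR1 <pid> makes the status line appear in the script's own output, which is exactly why this does not help much for a cron-driven backup whose output goes to a log file.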
Ah okay /bin/kill <pid> -s USR1 seems to show this:
/[…]/data/0/XX52 X.09 MB/X.09 MB
Not only is this very little data/information, it is also shown by the process that is currently running the backup. So when this process is executed via cron and piped to a log file, this does not help much. I'd actually like to see this output where I run the info command, in the other shell.
Well, in fact:
kill -s SIGUSR1 $(pgrep ^borg)
... does nothing (on Ubuntu 16.04.2 LTS and Borg 1.0.7)
You need the appropriate permissions to signal the borg process. E.g. a regular user cannot signal root's processes.
Hmm,
sudo kill -s SIGUSR1 $(pgrep ^borg)
does the same. Nothing.
And BTW, I am running borg as a normal user (however, I am backing up my home directory only). Should I run it always as root?
- Is the $(pgrep ^borg) part correct? (A quick way to check is shown right after this list.)
- Are you looking at the output of the borg process? Sending a signal does not print to the terminal where you run "kill".
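One quick way to see whether the pattern matches anything is to let pgrep list the PIDs together with the process names (purely a diagnostic example):
pgrep -l '^borg'
If that prints nothing, the pattern matched no process and the surrounding kill command was given no PID at all.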
Should I run it always as root?
No
Hi,
well, I have two terminals open. In one, I am running
borg create --compression zlib,4 ssh://[email protected]/media/backup::mybackup ~ --exclude '*.vdi' --exclude '~/Downloads'
On the other I run:
sudo kill -s SIGUSR1 $(pgrep ^borg)
There is no output in the first (or second) terminal. Anyway,
sudo ps -A | grep borg
or
ps -A | grep borg
gives the same:
3093 pts/19 00:01:23 borg
And of course:
sudo kill -s SIGUSR1 3093
or
kill -s SIGUSR1 3093
gives nothing.
I don't know. Perhaps AppArmor/SELinux or something like that (check the system logs), or the process is stuck in an uninterruptible syscall.
--exclude '~/Downloads'
This won't actually work, because Borg does not expand variables and such (~). Remove the quotes in this case.
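For example, with the quotes removed the shell expands the tilde before borg ever sees the option (illustrative, based on the command above):
--exclude ~/Downloads
which the shell turns into something like --exclude /home/<user>/Downloads.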
Syslog does not show anything... anyway, thanks for the hint regarding exclude.
How does "signalling the process" achieve the aim of the issue? Backups are run through cron jobs in normal operations, detached from a terminal. Having that process issue something to stdout is hardly helpful.
When I want to know what Borg is doing I issue the command:
watch -n0,1 "lsof -c borg -l -X|grep -E 'r REG|w REG|r DIR'"
This shows me what file/dir Borg is reading from or writing to, and that's it. When Borg is compressing, looking up some index or doing something else, there is no way to know with lsof. Also, I have no idea in what sequence Borg runs through a dir, and without that sort of information one cannot know how much of the dir is left.
So, if there were a way to "ask" the borg process what it is doing, that would be awesome.
This would be really useful when creating large archives that take a while. I hope this issue gets some attention.
How does "signalling the process" achieve the aim of the issue? Backups are run through cron jobs in normal operations, detached from a terminal. Having that process issue something to stdout is hardly helpful.
You don't log the output of cronjobs?
If one writes a log file to disk, one can use less +F borg.log to follow it in realtime.
...which still does not solve this issue, as you hopefully won't run borg with --progress and save the (then huge) output to disk.
BTW, the way I imagine using this is like progress, which can "remotely" display the status of cp, mv, etc. Or maybe you could just integrate borg support there… :wink:
One workaround for the huge output would be to write the log file to /tmp.
What I do is use NetworkManager's dispatcher scripts to run my backup when I first connect to the network. This means I can use journalctl to view the output. This is all outside of borgbackup, though; an internal solution would be good.
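For reference, such a dispatcher script can look roughly like this (a minimal sketch: the file name, repo path and logger tag are placeholders, and it runs on every connection rather than only the first):
#!/bin/sh
# e.g. /etc/NetworkManager/dispatcher.d/50-borg-backup (example name/path)
# NetworkManager calls dispatcher scripts with the interface as $1 and the action as $2.
action="$2"
if [ "$action" = "up" ]; then
    # Send borg's output to the journal/syslog so it can be read later with:
    #   journalctl -t borg-backup
    borg create /path/to/repo::backup-"$(date +%Y-%m-%d)" /home 2>&1 | logger -t borg-backup
fi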