
additional_free_space and running out of repo space

Open · frispete opened this issue 4 years ago • 3 comments

Have you checked borgbackup docs, FAQ, and open Github issues?

Yes, it's part of the repo initialization. Unfortunately, I forgot to provide a sane unit. :disappointed: I should have done:

borg config /backup/borg additional_free_space 10G

while I did:

borg config /backup/borg additional_free_space 10
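
For reference, the setting can be read back with borg config (giving the key without a value prints it), which would have caught the mistake; as far as I can tell, a bare number is interpreted as bytes, so my repo ended up with a 10-byte reserve:

borg config /backup/borg additional_free_space
# prints: 10   (bytes, not gigabytes)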

I noticed the warning about xfs, I'm not using quotas, and nothing but borg is touching this device.

Is this a BUG / ISSUE report or a QUESTION?

I would still call it a bug, but it could also be colored as PEBKAC. :roll_eyes:

System information. For client/server mode post info for both machines.

Your borg version (borg -V).

Borg server: Platform: Linux server 5.12.13-lp152.2-preempt #1 SMP PREEMPT Fri Jun 25 18:31:36 UTC 2021 (e3b385c) x86_64
Borg server: Linux:   
Borg server: Borg: 1.1.17  Python: CPython 3.6.12 msgpack: 0.5.6+borg1
Borg server: PID: 77220  CWD: /home/user
Borg server: sys.argv: ['/usr/bin/borg', 'serve', '--umask=077']
Borg server: SSH_ORIGINAL_COMMAND: None
Platform: Linux server 5.12.13-lp152.2-preempt #1 SMP PREEMPT Fri Jun 25 18:31:36 UTC 2021 (e3b385c) x86_64
Linux:   
Borg: 1.1.17  Python: CPython 3.6.12 msgpack: 0.5.6+borg1
PID: 77210  CWD: /root
sys.argv: ['/usr/bin/borg', 'list']
SSH_ORIGINAL_COMMAND: None

Operating system (distribution) and version.

NAME="openSUSE Leap"
VERSION="15.2"

I'm maintaining the borg package for this distro, hence using the current build.

Hardware / network configuration, and filesystems used.

$ df -hT /backup
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sde       xfs   9.1T  9.1T  104K 100% /backup

How much data is handled by borg?

A lot.

Full borg command line that led to the problem (leave out excludes and passwords)

borg list

Describe the problem you're observing.

I've set up borg to run fully automated. Well, almost. Noticing that free space on the backup medium was getting tight, I adjusted the prune parameters for a couple of repos. Unfortunately, I caught a strong flu a week ago and failed to monitor its operation. This was further compounded by my son's heavy Blender activity...

Consequently, the backup media ran out of space.

My issue now is that any borg operation results in the following traceback. Unfortunately, there's nothing on this drive but the borg backup, so the only way to mitigate the situation would be to remove something from the backup itself, but that isn't possible, at least not with borg, due to:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/borg/remote.py", line 247, in serve
    res = f(**args)
  File "/usr/lib64/python3.6/site-packages/borg/remote.py", line 375, in open
    self.repository.__enter__()  # clean exit handled by serve() method
  File "/usr/lib64/python3.6/site-packages/borg/repository.py", line 190, in __enter__
    self.open(self.path, bool(self.exclusive), lock_wait=self.lock_wait, lock=self.do_lock)
  File "/usr/lib64/python3.6/site-packages/borg/repository.py", line 421, in open
    self.lock = Lock(os.path.join(path, 'lock'), exclusive, timeout=lock_wait, kill_stale_locks=hostname_is_unique()).acquire()
  File "/usr/lib64/python3.6/site-packages/borg/locking.py", line 359, in acquire
    with self._lock:
  File "/usr/lib64/python3.6/site-packages/borg/locking.py", line 114, in __enter__
    return self.acquire()
  File "/usr/lib64/python3.6/site-packages/borg/locking.py", line 138, in acquire
    raise LockFailed(self.path, str(err)) from None
borg.locking.LockFailed: Failed to create/acquire the lock /backup/borg/lock.exclusive ([Errno 28] No space left on device: '/backup/borg/lock.exclusive').

Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.

Yes, the reason is obvious.

Now that the child has fallen into the well, is borg able to recover from such a pathological situation somehow?

While at it, shouldn't borg cope with the additional_free_space configuration more gracefully? (e.g., during backup, check whether enough free space is available and refuse to create another backup otherwise; a sketch of such a check follows.)
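
A minimal sketch of such a pre-flight check, done outside borg as a wrapper script (the 10 GiB threshold, the repo path, and the source path are assumptions; it relies on GNU df for --output):

#!/bin/sh
REPO=/backup/borg
MIN_FREE=$((10 * 1024 * 1024 * 1024))               # required reserve in bytes (10 GiB, adjust)
avail=$(df --output=avail -B1 "$REPO" | tail -n 1)  # free bytes on the repo filesystem
if [ "$avail" -lt "$MIN_FREE" ]; then
    echo "refusing backup: only $avail bytes free on $REPO" >&2
    exit 1
fi
borg create "$REPO::{hostname}-{now}" /home         # example source path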

At the moment all operations result in the traceback mentioned above.

frispete avatar Nov 27 '21 17:11 frispete

You need to make space on the target filesystem. If that is not possible, you could temporarily copy the repo to a bigger medium and then reduce its size.
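
A rough sketch of that recovery path, assuming a larger disk is mounted at /mnt/big (a hypothetical path; the retention values are examples only):

rsync -a /backup/borg/ /mnt/big/borg/                    # copy the full repo to the bigger medium
borg prune --keep-daily 7 --keep-weekly 4 /mnt/big/borg  # prune there; freed segments shrink the repo
rsync -a --delete /mnt/big/borg/ /backup/borg/           # copy the now smaller repo back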

This is not a bug because it is documented that running out of space must be avoided and there is even a means to do that, if used correctly.

What we could maybe do to avoid such failures is to add a sanity check to the additional_free_space setting code, like rejecting everything < 1M or so.
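
For illustration, a userland version of that check could look like this hypothetical wrapper around borg config (it uses GNU numfmt to parse the size):

set_free_space() {
    repo=$1; value=$2
    # numfmt understands IEC suffixes (K, M, G, ...); a bare number passes through as bytes
    bytes=$(numfmt --from=iec "$value") || return 1
    if [ "$bytes" -lt $((1024 * 1024)) ]; then
        echo "refusing '$value' (< 1M): did you forget the unit?" >&2
        return 1
    fi
    borg config "$repo" additional_free_space "$value"
}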

ThomasWaldmann avatar Nov 27 '21 18:11 ThomasWaldmann

Thanks @ThomasWaldmann, and yes, such a sanity check would be useful.

Can I reduce additional_free_space to zero and have a chance to do something useful with it? I need to get a prune operation working successfully. How much free space would that require?

frispete avatar Nov 27 '21 19:11 frispete

I'm still not convinced that a traceback is the right behavior for this state.

frispete avatar Nov 27 '21 19:11 frispete

The sanity check (for the case that the user provides a much too low free space value, e.g. by forgetting the unit) was recently implemented.

ThomasWaldmann avatar May 11 '25 08:05 ThomasWaldmann