
Can only run one `bupstash put` per user

Open unai-ndz opened this issue 4 years ago • 3 comments

`bupstash put` fails with `bupstash put: database is locked` if another instance is already running as the same user, even when the data and repository are different.

Exporting a different `HOME` or cache path makes bupstash use a different `.cache/bupstash/bupstash.sendlog` and works around the problem.
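The workaround above can be sketched roughly as follows. This is a hedged illustration, not an official recipe: the temp directory, repository path, and data path are placeholders, and the actual `bupstash put` invocation is left commented out.

```shell
# Sketch of the reported workaround: give a second concurrent `bupstash put`
# its own HOME so it resolves a private bupstash.sendlog.
# All paths here are illustrative.
alt_home=$(mktemp -d)
mkdir -p "$alt_home/.cache/bupstash"
sendlog_path="$alt_home/.cache/bupstash/bupstash.sendlog"

# Hypothetical second backup run, using the private sendlog:
# HOME="$alt_home" bupstash put -r /path/to/other-repo /other/data
```

Each process then locks its own sendlog database instead of contending on the shared one.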

Is this limitation intentional, or could one `.sendlog` database be created per process without issue?

P.S.: Thanks for the work on bupstash; it works much better than any other backup software I have tried. I'm finally able to back up data that was impossible before because of high resource consumption or really slow backup performance.

unai-ndz · Sep 25 '21 15:09

Currently you can manually pass `--send-log` to fine-tune this without changing `$HOME`.
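For example, one could derive a per-repository send-log path and pass it explicitly. The `put_cmd` wrapper below is hypothetical (not part of bupstash); it only builds the command line, and the naming scheme is an assumption:

```shell
# Hypothetical helper: build a `bupstash put` command line with a
# per-repository send log, so parallel runs don't contend on one lock.
put_cmd() {
  repo="$1"; shift
  name=$(basename "$repo")
  # Each repository gets its own sendlog file under the default cache dir.
  echo bupstash put --send-log "$HOME/.cache/bupstash/$name.sendlog" -r "$repo" "$@"
}

# Construct (but don't run) the command for one repo:
cmd=$(put_cmd /srv/repo-a /home/user/data)
```

Running two such commands, one per repository, should avoid the `database is locked` error since each holds its own send-log database.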

The main reason we only have one default sendlog per user is to prevent an uncontrolled buildup of files. We could consider having one per repository, but will need to think about how they get cleaned up.

The concurrency limitation is mainly to simplify the implementation as we only remember the previous send (again to put a limit on resource consumption).

andrewchambers · Sep 25 '21 23:09

I will improve the error message to explain this.

andrewchambers · Jan 22 '22 23:01

> The main reason we only have one default sendlog per user is to prevent an uncontrolled buildup of files. We could consider having one per repository, but will need to think about how they get cleaned up.

Would it make sense to have a fixed number of N "slots" per user, such that one can run up to N instances in parallel without specifying `--send-log`? The idea is to limit the number of files (i.e. less need to worry about cleaning them up) while covering the typical use cases where 1..4 instances run in parallel. Anyone attempting to exceed that limit has likely scripted their process to some extent already, so it would be quite acceptable to integrate the `--send-log` parameter for even more instances running in parallel.
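The slot scheme could be approximated today in a wrapper script. This is a sketch of the idea, not bupstash behavior: it assumes `flock(1)` from util-linux is available, and the slot-file naming is made up. Note that probing with an immediately released lock is racy; a real wrapper would hold the slot's lock for the duration of the put.

```shell
# Sketch of the proposed N-slot scheme: probe N sendlog "slots" and take
# the first one whose lock file is currently free. Names are illustrative.
N=4
slot_dir="${XDG_CACHE_HOME:-$HOME/.cache}/bupstash"
mkdir -p "$slot_dir"

sendlog=""
for i in $(seq 1 "$N"); do
  lock="$slot_dir/slot-$i.lock"
  # flock -n: fail immediately instead of blocking if the slot is taken.
  if flock -n "$lock" true 2>/dev/null; then
    sendlog="$slot_dir/slot-$i.sendlog"
    break
  fi
done

# Real use would hold the lock across the whole put, e.g.:
# flock -n "$lock" bupstash put --send-log "$sendlog" -r /srv/repo /data
```

With at most N slot files per user, the cleanup concern stays bounded while allowing a few parallel runs.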

Btw: for my use case (backups are already scripted) the `--send-log` parameter works nicely enough, so this would only be a minor additional enhancement :)

m7a · Aug 14 '22 10:08