Slow progress on crawl with many snapshots containing few large files
I have a volume with many snapshots and just a few large files. The files are very similar in the snapshots. When beesd crawls the snapshots, most of the time all but one of the threads are waiting for the same extent:
tid 54153: crawl_8509: waiting for extent bytenr 0x4629620e000
tid 54154: crawl_8509: waiting for extent bytenr 0x4629620e000
tid 54155: crawl_8508: Extending matching range: BeesRangePair: 4K src[0x158797c000..0x158797d000] dst[0x136728c000..0x136728d000]
src = 11 /run/bees/mnt/b6b05726-d5a0-4b22-b22a-82625f0e791e/.snapshots/33/snapshot/system.img.7
dst = 25 /run/bees/mnt/b6b05726-d5a0-4b22-b22a-82625f0e791e/.snapshots/29/snapshot/system.img.7
tid 54156: crawl_8508: waiting for extent bytenr 0x4629620e000
tid 54157: crawl_8509: waiting for extent bytenr 0x4629620e000
tid 54158: crawl_8509: waiting for extent bytenr 0x4629620e000
tid 54159: crawl_8509: waiting for extent bytenr 0x4629620e000
tid 54160: crawl_8508: waiting for extent bytenr 0x4629620e000
I'd hazard a guess that they are all waiting for the extent currently in use by tid 54155, which is probably protected by a lock.
How can I avoid this behavior? I guess it will go away once everything has been crawled, since then only one snapshot at a time will be crawled for changes. But the current progress is very slow, and every time I stop bees it feels like a lot of that progress is lost. Would it be faster to assign only one or two threads to bees, at least until the first crawl is done? Or to switch to -m2 so that the threads don't all access the same extents?
For a short-term solution, switch to -m1, and also change src/bees.h:
const size_t BEES_MAX_CRAWL_BATCH = 1;
This won't eliminate lock contention, but it will minimize it.
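The header change needs a rebuild of bees; the scan mode is a command-line option. A minimal sketch of both steps, assuming a source checkout and that your service or wrapper lets you append daemon options (the UUID below is the one from the paths in your log, and the invocation is only an example to adapt to however you launch bees):

# rebuild and reinstall after editing src/bees.h
make
sudo make install

# run with scan mode 1
beesd -m1 b6b05726-d5a0-4b22-b22a-82625f0e791e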
For a long-term solution, I'm implementing a more sophisticated job scheduler that understands lock dependencies and redistributes tasks among worker threads to avoid blocking. When a lock dependency is found, the scheduler will abort and requeue the blocked task for the worker thread that holds the lock, so that the worker thread that would have blocked can move on to some other task in the queue instead.
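To illustrate the idea (this is only a sketch, not the actual bees scheduler; all names below are invented for the example): each lockable resource keeps its own queue of parked tasks, a worker that cannot take the lock parks its task there and goes on to other work, and the lock holder hands the parked tasks back to the scheduler when it unlocks.

#include <deque>
#include <functional>
#include <mutex>
#include <utility>

using Task = std::function<void()>;

struct Exclusion {
    std::mutex       m_mutex;
    bool             m_held = false;
    std::deque<Task> m_parked;   // tasks waiting for this resource

    // Try to acquire the resource. If it is already held, park the task
    // instead of blocking and tell the caller to pick up other work.
    bool try_lock_or_park(Task task) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_held) {
            m_parked.push_back(std::move(task));
            return false;
        }
        m_held = true;
        return true;
    }

    // Release the resource and hand any parked tasks back to the scheduler.
    void unlock(const std::function<void(Task)> &requeue) {
        std::deque<Task> parked;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_held = false;
            parked.swap(m_parked);
        }
        for (auto &task : parked) {
            requeue(std::move(task));
        }
    }
};

// Worker side: either do the work now, or leave the task behind for
// whoever currently holds the lock, and go run something else.
void process_extent(Exclusion &extent_lock, Task work,
                    const std::function<void(Task)> &requeue)
{
    if (!extent_lock.try_lock_or_park(work)) {
        return;                  // worker thread is free for another task
    }
    work();
    extent_lock.unlock(requeue);
}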
When you stop bees with SIGTERM, it should be recording its position in the filesystem and restarting from precisely the same spot. You can check this by looking at beescrawl.dat. Make sure systemd doesn't time out and kill bees with SIGKILL before it finishes, or bees will revert to its previously completed checkpoint and repeat up to 15 minutes of work.
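If you start bees from a systemd unit, a drop-in can give it enough time to write that checkpoint before systemd escalates to SIGKILL. A sketch, assuming the beesd@.service unit name and using 900 seconds to cover the worst-case 15 minutes mentioned above:

# /etc/systemd/system/beesd@.service.d/stop-timeout.conf
[Service]
TimeoutStopSec=900

Run systemctl daemon-reload afterwards so the drop-in takes effect.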