
Bug: Apparent infinite loop on deduplication

Open har7an opened this issue 2 months ago • 6 comments

Hello,

I've been using bees for a while now to dedupe my filesystems. Recently I started switching from release 0.10 to 0.11 on a few select machines of mine. Yesterday I noticed an issue for the first time: bees was constantly running in the background at 100% CPU utilization (i.e. a single core at full load).

At first I suspected my CLI arguments might have been faulty:

--verbose=3 --scan-mode=4 --throttle-factor=0.9 --loadavg-target=10.0 --thread-min=1

My previous usage (with version 0.10) was:

--verbose=3 --scan-mode=1

So I stripped away everything after --scan-mode=4, but the problem remained. Next I enabled debug logging (--verbose=7) and repeatedly found the following messages in the output:

2025-10-12 09:52:25 1.31<6> extent_255_0_128K: Crawl started BeesCrawlState 255:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.28<6> extent_253_512K_2M: Crawl started BeesCrawlState 253:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.25<6> extent_251_8M_32M: Crawl started BeesCrawlState 251:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.30<6> extent_254_128K_512K: Crawl started BeesCrawlState 254:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.27<6> extent_252_2M_8M: Crawl started BeesCrawlState 252:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.24<6> extent_250_32M_16E: Crawl started BeesCrawlState 250:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: extsz  datasz point gen_min gen_max this cycle start tm_left   next cycle ETA
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: ----- ------- ----- ------- ------- ---------------- ------- ----------------
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS:   max 68.096G  idle       0   70250 2025-10-11 16:37       -                -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS:   32M  6.268G  idle   72265   72266 2025-10-12 09:52       -                -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS:    8M  3.489G  idle   72265   72266 2025-10-12 09:52       -                -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS:    2M 799.28M  idle   72265   72266 2025-10-12 09:52       -                -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS:  512K  2.112G  idle   72265   72266 2025-10-12 09:52       -                -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS:  128K 10.254G  idle   72265   72266 2025-10-12 09:52       -                -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: total     91G       gen_now   72266                  updated 2025-10-12 09:52
2025-10-12 09:52:25 1.26<6> extent_254_128K_512K: Crawl finished BeesCrawlState 254:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.30<6> extent_255_0_128K: Crawl finished BeesCrawlState 255:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.28<6> extent_253_512K_2M: Crawl finished BeesCrawlState 253:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.32<6> extent_252_2M_8M: Crawl finished BeesCrawlState 252:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.25<6> extent_251_8M_32M: Crawl finished BeesCrawlState 251:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.24<6> extent_250_32M_16E: Crawl finished BeesCrawlState 250:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.551 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x231000..0x8231000] dst[0x231000..0x8231000]
2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: src = 170 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.628 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x232000..0x8232000] dst[0x232000..0x8232000]
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: src = 171 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.441 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x233000..0x8233000] dst[0x233000..0x8233000]
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: src = 171 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.473 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x234000..0x8234000] dst[0x234000..0x8234000]
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: src = 171 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta

I restarted bees multiple times, but the problem remained. Next I entirely removed the bees working directory, including the previous hash table, and had it rerun on the filesystem overnight, but it ended up in the same weird state.

I tried creating an MWE by creating a blank, file-backed btrfs and copying in only the apparently "faulty" files, but couldn't reproduce the issue there. Right now I'm having it rerun with --scan-mode=1 to see if it's a generic issue, but that may take a few more hours to complete.
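
For reference, this is roughly what the MWE setup looked like (a sketch only; the paths, sizes, and file names here are placeholders, not the exact commands I ran):

truncate -s 8G /tmp/bees-test.img            # sparse backing file
mkfs.btrfs -f /tmp/bees-test.img
mkdir -p /mnt/bees-test
mount -o loop /tmp/bees-test.img /mnt/bees-test
cp /path/to/the/suspect/files/*.commitmeta /mnt/bees-test/
# then point bees at the new filesystem and watch its log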

I have no idea what's happening here. I checked the files and they're both ~190 MB in size, which I assume shouldn't cause multiple hours of runtime; the backing storage is an NVMe SSD, after all. Do you have an idea what may be causing this? Is there something I should try, perhaps?

Thanks in advance!

har7an avatar Oct 12 '25 08:10 har7an

I don't see a loop here:

2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.551 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x231000..0x8231000] dst[0x231000..0x8231000]
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.628 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x232000..0x8232000] dst[0x232000..0x8232000]
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.441 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x233000..0x8233000] dst[0x233000..0x8233000]
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.473 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x234000..0x8234000] dst[0x234000..0x8234000]

It's making progress, but it is going over snapshots. Maybe you are seeing my problem from https://github.com/Zygo/bees/issues/320?

Flatpak data directories in particular have been a problem for me.

kakra avatar Oct 12 '25 11:10 kakra

#320 is a remarkably thorough explanation of how the implementation works, fascinating. You're right: on closer inspection I can see that the transid does increase by 1 between log messages. But the progress reports (as seen above) tell me it's all caught up, right? I still don't understand why it's apparently hung up on the same pair of files for all of these transids.

I've let bees "settle" with --scan-mode=1 now and am switching back to --scan-mode=4. I'll keep the verbosity at 7 and report back once it has stopped finding yet more extents to dedupe. Maybe the issue will resolve itself in another overnight run.

har7an avatar Oct 12 '25 15:10 har7an

The ref tag tells us:

  • the extent length is 189M. This tells us that the file was preallocated, because the only way to get an extent longer than 128M in btrfs is via a kernel bug in preallocation.
  • there are 3 total references to the extent, so it's not spinning on a huge number of references.

Other things we can infer:

  • it's being considered for dedupe, so there's at least one duplicate block somewhere else.
  • it's growing extents up to 128M, so the duplicate copy is fairly large. It could be the extent bees is using to try to dedupe the prealloc portion of this extent.

Prealloc extents are removed upon detection. This is handled before any other deduplication. It will show up in the full debug log when it happens.
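
You can double-check for prealloc from userspace with filefrag (the path here is just a placeholder):

filefrag -v /path/to/suspect/file
# preallocated (fallocated) ranges show up with the "unwritten" flag in the flags column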

Is the file being written in between, or is bees the only active writer touching this file?

Can you post the full debug (-v8) of this loop?
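
Something along these lines should capture it; the unit name and paths are only examples, adjust them to your setup:

# if you run bees through the beesd systemd unit:
journalctl -u beesd@<filesystem-uuid>.service --since "1 hour ago" > bees-debug.log
# or, if you run the bees binary directly, it logs to stderr:
bees --verbose=8 <your-other-options> /path/to/fs-root 2> bees-debug.log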

Zygo avatar Oct 12 '25 18:10 Zygo

Prealloc extents are removed upon detection. This is handled before any other deduplication. It will show up in the full debug log when it happens.

That sounds like you're actively fixing certain known kernel bugs. If that's the case I'm amazed and thank you very much. :-)

Is the file being written in between, or is bees the only active writer touching this file?

Afaict bees is the only writer; anything else should only read these files. Unfortunately, I cannot tell whether they were read during the logging period posted above.

Can you post the full debug (-v8) of this loop?

I'm afraid right now I can't: yesterday it finished, and now bees is politely waiting for more work in the background. It appears I've been too impatient... Sorry for the noise, but thank you for the quick responses!

I'll close the issue because the initial "bug" turned out not to be a bug and the problem has gone away now. If either of you is interested in the full debug logs, I can probably delete the bees hash table and have it run again from scratch. Let me know if that's the case, otherwise: Thank you!
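
(For the record, this is roughly what I'd do for a from-scratch run; I'm assuming the usual beesd layout with a .beeshome directory at the filesystem root, so unit name and paths below are placeholders:)

systemctl stop beesd@<filesystem-uuid>.service
rm /path/to/fs-root/.beeshome/beeshash.dat    # drop the hash table
rm /path/to/fs-root/.beeshome/beescrawl.dat   # drop the crawl state so scanning restarts from the beginning
systemctl start beesd@<filesystem-uuid>.service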

har7an avatar Oct 13 '25 08:10 har7an

I'm afraid the issue reappeared. This time I remembered to restart bees with -v8 and recorded ~4 hours of log data; see below. The problem still persists, so if you need more data I should be able to provide that. I've stopped bees on this machine for the time being because it's a laptop and I'd like to preserve my battery for a little longer...

2025-11-10_bees-bug.log

har7an avatar Nov 11 '25 07:11 har7an

Hi,

I want to chime in that I'm facing the same issue on both latest master and v0.11. Bees was working overnight for a while, and this morning it hit a particular file that it just got stuck on. In the logs from /run/bees/<uuid>, I see this:

Every 2.0s: cat 6f0151d6-3fc7-4e93-80ad-06f5b23a4e12.status imonurlan: Fri Nov 28 13:37:01 2025

TOTAL:
addr_block=49668555 addr_compressed_offset=22303 addr_eof_e=3205 addr_from_fd=13559941 addr_from_root_fd=2930 addr_hole=1 addr_magic=1 addr_ms=49 addr_uncompressed=49646252
adjust_eof_fail=2 adjust_eof_haystack=3 adjust_eof_hit=1 adjust_eof_needle=3 adjust_exact=2927 adjust_exact_correct=2927 adjust_hit=2927 adjust_try=2930
block_bytes=184987257227 block_hash=40623703 block_ms=582547 block_read=45164507 block_zero=3671
chase_hit=2928 chase_no_data=2 chase_try=2930 chase_uncorrected=2928
crawl_discard_high=3186 crawl_discard_low=2491816 crawl_extent=5059 crawl_flop=992788 crawl_skip=1499028 crawl_skip_ms=116642
dedup_bytes=73728 dedup_copy=4096 dedup_hit=13 dedup_try=13
extent_forward=5636 extent_mapped=5059 extent_ms=19061 extent_ok=5059 extent_ref_ok=5636
hash_erase=1 hash_evict=4813 hash_extent_in=131072 hash_extent_out=1522 hash_front=4 hash_front_already=7 hash_insert=36063566 hash_lookup=40575624
open_clear=36 open_file=4164 open_hit=4164 open_ino_ms=6809 open_lookup_ok=4164
pairbackward_hit=4473609 pairbackward_miss=2776 pairbackward_ms=500628 pairbackward_stop=2927 pairbackward_try=4476388 pairbackward_zero=2
pairforward_eof_first=1 pairforward_eof_malign=14 pairforward_hit=44409 pairforward_miss=2757 pairforward_ms=4211 pairforward_stop=2927 pairforward_try=47181
progress_ok=144
readahead_bytes=26683483512 readahead_clear=246 readahead_count=27562 readahead_fail=978 readahead_ms=39898 readahead_skip=12145 readahead_unread_ms=14
replacedst_dedup_hit=2414 replacedst_grown=1175 replacedst_try=2927
replacesrc_dedup_hit=2 replacesrc_try=1
resolve_ms=459 resolve_ok=1211
root_clear=36
scan_already=146 scan_block=36105684 scan_compressed_no_dedup=747 scan_dup_block=14412 scan_dup_hit=2925 scan_extent=5638 scan_forward=5636 scan_found=193165 scan_hash_hit=2925 scan_hash_insert=36063565
scan_hash_miss=1 scan_hash_preinsert=36102015 scan_lookup=36057606 scan_push_front=11 scan_reinsert=1 scan_resolve_hit=2927 scan_rewrite=1 scan_seen_erase=11 scan_seen_hit=329 scan_seen_insert=5151 scan_seen_miss=5309
scan_skip_bytes=741 scan_skip_ops=8 scan_zero=3669
scanf_extent=5637 scanf_extent_ms=919106 scanf_total=5635 scanf_total_ms=928511
tmp_aligned=1 tmp_block=1 tmp_bytes=4096 tmp_copy=1 tmp_create=1 tmp_create_ms=8 tmp_resize=3
RATES:
addr_block=31245.6 addr_compressed_offset=14.031 addr_eof_e=2.017 addr_from_fd=8530.3 addr_from_root_fd=1.844 addr_hole=0.001 addr_magic=0.001 addr_ms=0.031 addr_uncompressed=31231.5
adjust_eof_fail=0.002 adjust_eof_haystack=0.002 adjust_eof_hit=0.001 adjust_eof_needle=0.002 adjust_exact=1.842 adjust_exact_correct=1.842 adjust_hit=1.842 adjust_try=1.844
block_bytes=1.16372e+08 block_hash=25555.6 block_ms=366.47 block_read=28412.1 block_zero=2.31
chase_hit=1.842 chase_no_data=0.002 chase_try=1.844 chase_uncorrected=1.842
crawl_discard_high=2.005 crawl_discard_low=1567.56 crawl_extent=3.183 crawl_flop=624.545 crawl_skip=943.011 crawl_skip_ms=73.378
dedup_bytes=46.381 dedup_copy=2.577 dedup_hit=0.009 dedup_try=0.009
extent_forward=3.546 extent_mapped=3.183 extent_ms=11.991 extent_ok=3.183 extent_ref_ok=3.546
hash_erase=0.001 hash_evict=3.028 hash_extent_in=82.455 hash_extent_out=0.958 hash_front=0.003 hash_front_already=0.005 hash_insert=22686.9 hash_lookup=25525.4
open_clear=0.023 open_file=2.62 open_hit=2.62 open_ino_ms=4.284 open_lookup_ok=2.62
pairbackward_hit=2814.26 pairbackward_miss=1.747 pairbackward_ms=314.936 pairbackward_stop=1.842 pairbackward_try=2816.01 pairbackward_zero=0.002
pairforward_eof_first=0.001 pairforward_eof_malign=0.009 pairforward_hit=27.937 pairforward_miss=1.735 pairforward_ms=2.65 pairforward_stop=1.842 pairforward_try=29.681
progress_ok=0.091
readahead_bytes=1.67861e+07 readahead_clear=0.155 readahead_count=17.339 readahead_fail=0.616 readahead_ms=25.1 readahead_skip=7.641 readahead_unread_ms=0.009
replacedst_dedup_hit=1.519 replacedst_grown=0.74 replacedst_try=1.842
replacesrc_dedup_hit=0.002 replacesrc_try=0.001
resolve_ms=0.289 resolve_ok=0.762
root_clear=0.023
scan_already=0.092 scan_block=22713.4 scan_compressed_no_dedup=0.47 scan_dup_block=9.067 scan_dup_hit=1.841 scan_extent=3.547 scan_forward=3.546 scan_found=121.517 scan_hash_hit=1.841 scan_hash_insert=22686.9
scan_hash_miss=0.001 scan_hash_preinsert=22711.1 scan_lookup=22683.2 scan_push_front=0.007 scan_reinsert=0.001 scan_resolve_hit=1.842 scan_rewrite=0.001 scan_seen_erase=0.007 scan_seen_hit=0.207 scan_seen_insert=3.241
scan_seen_miss=3.34 scan_skip_bytes=0.467 scan_skip_ops=0.006 scan_zero=2.309
scanf_extent=3.547 scanf_extent_ms=578.193 scanf_total=3.545 scanf_total_ms=584.109
tmp_aligned=0.001 tmp_block=0.001 tmp_bytes=2.577 tmp_copy=0.001 tmp_create=0.001 tmp_create_ms=0.006 tmp_resize=0.002
THREADS (work queue 1 of 9 tasks, 1 workers, load: current 0 target 0 average 0):
tid 14538: bees: [1589.62s] waiting for signals
tid 14566: ref_70bc1d00000_167.426M_1: Extending matching range: BeesRangePair: 50.883M src[0x4da7000..0x8089000] dst[0x4da7000..0x8089000]
src = 17 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
dst = 76 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4
tid 14567: progress_report: [1589.62s] idle 3600
tid 14568: status_report: writing status to file '/run/bees//6f0151d6-3fc7-4e93-80ad-06f5b23a4e12.status'
tid 14569: hash_writeback: flush rate limited after extent #3531 of 131072 extents
tid 14570: hash_prefetch: [1581.6s] idle 3600s
tid 14571: crawl_transid: [33.075s] waiting 36.6009s for next transid RateEstimator { count = 3306, raw = 35.3053 / 1292.21, ratio = 35.3053 / 1325.28, rate = 0.0266398, duration(1) = 37.5378, seconds_for(1) = 4.46318 }
tid 14572: crawl_writeback: [688.928s] idle, clean
PROGRESS:
extsz    datasz  point gen_min gen_max this cycle start tm_left   next cycle ETA
  max 1014.28G  898759       0    3264 2025-11-28 13:10   2m 4s 2025-11-28 13:31
  32M  164.682G 887601       0    3264 2025-11-28 13:10  2m 20s 2025-11-28 13:31
   8M  121.757G 080577       0    3264 2025-11-28 13:10  3h 30m 2025-11-28 16:59
   2M   16.373G 080484       0    3264 2025-11-28 13:10  3h 30m 2025-11-28 16:59
 512K    5.792T 000054       0    3264 2025-11-28 13:10       -                -
 128K  110.962G 000145       0    3264 2025-11-28 13:10  12w 3d 2026-02-24 09:41
total    7.187T        gen_now    3293                  updated 2025-11-28 13:28

Note this line:

tid 14566: ref_70bc1d00000_167.426M_1: Extending matching range: BeesRangePair: 50.883M src[0x4da7000..0x8089000] dst[0x4da7000..0x8089000]
src = 17 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
dst = 76 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4

It is stuck on these files with full CPU usage on that thread, and over time the pairbackward_hit and pairbackward_try counters are going up by ~15000/s, but bees is not progressing. The line above keeps changing every few seconds, like this:

        tid 30862: ref_70bc1d00000_167.426M_1: Extending matching range: BeesRangePair: 3.594M src[0x7c72000..0x800a000] dst[0x7c72000..0x800a000]
src = 6 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
dst = 762 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4


        tid 30862: ref_70bc1d00000_167.426M_1: Extending matching range: BeesRangePair: 112.801M src[0xf47000..0x8014000] dst[0xf47000..0x8014000]
src = 6 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
dst = 762 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4

        tid 30862: ref_70bc1d00000_167.426M_1: Extending matching range: BeesRangePair: 71.965M src[0x3820000..0x8017000] dst[0x3820000..0x8017000]
src = 6 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
dst = 762 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4

        tid 30862: ref_70bc1d00000_167.426M_1: Extending matching range: BeesRangePair: 79.813M src[0x304f000..0x801f000] dst[0x304f000..0x801f000]
src = 6 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
dst = 762 backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4
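
(For reference, I'm estimating the counter rate simply by watching the status file; a rough sketch of what I run:)

watch -n 2 'grep -Eo "pairbackward_(hit|try)=[0-9]+" /run/bees/6f0151d6-3fc7-4e93-80ad-06f5b23a4e12.status'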


For what it's worth:

caseymdk@imonurlan:/media/caseymdk/MAIN$ filefrag -v backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
Filesystem type is: 9123683e
File size of backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4 is 257699738 (62915 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   62914: 1869356288..1869419202:  62915:             last,eof
backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4: 1 extent found
caseymdk@imonurlan:/media/caseymdk/MAIN$ filefrag -v backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4
Filesystem type is: 9123683e
File size of backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4 is 175556549 (42861 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   42860: 1891376384..1891419244:  42861:             last,eof
backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4: 1 extent found
caseymdk@imonurlan:/media/caseymdk/MAIN$ sudo compsize backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/20/24/2024cacf-eff7-4c4b-99d8-3744419ad3f7.mp4
Processed 1 file, 1 regular extents (1 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL      100%      245M         245M         245M       
none       100%      245M         245M         245M       
caseymdk@imonurlan:/media/caseymdk/MAIN$ sudo compsize backups/hot/immich/library/upload/a0dd25f7-c09b-46a9-a674-4adadc7686ca/2b/59/2b590bd9-287e-4e42-a2e4-c650a2897b47.mp4
Processed 1 file, 1 regular extents (1 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL      100%      167M         167M         167M       
none       100%      167M         167M         167M       
caseymdk@imonurlan:/media/caseymdk/MAIN$ 

caseymdk avatar Nov 28 '25 19:11 caseymdk