`dirent points to inode that does not point back` - `inconsistency detected - emergency read only`
I'm running bcachefs as a root filesystem on an Archlinux desktop. I just updated to kernel 6.10.0-arch1-2. I can boot fine, but after running for a few minutes I get errors like:
$ touch foo
touch: cannot touch 'foo': Read-only file system
Digging a bit via sudo dmesg | grep bcachefs gives:
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-linux root=/dev/sdc1:/dev/nvme0n1p3 rootfstype=bcachefs rw
[ 0.150223] Kernel command line: BOOT_IMAGE=/vmlinuz-linux root=/dev/sdc1:/dev/nvme0n1p3 rootfstype=bcachefs rw
[ 22.815889] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): mounting version 1.7: mi_btree_bitmap opts=background_compression=zstd:15,metadata_target=ssd.nvme-3,foreground_target=ssd,background_target=hdd,promote_target=ssd.nvme-3,journal_flush_disabled
[ 22.815904] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): recovering from unclean shutdown
[ 24.097353] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): journal read done, replaying entries 1155482-1163092
[ 24.097366] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dropped unflushed entries 1163093-1163093
[ 28.711849] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): alloc_read... done
[ 28.720965] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): stripes_read... done
[ 28.720974] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): snapshots_read... done
[ 28.748925] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): going read-write
[ 28.750883] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): journal_replay... done
[ 34.721586] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): resume_logged_ops... done
[ 34.721592] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): delete_dead_inodes... done
[ 141.558788] WARNING: CPU: 7 PID: 3104 at fs/bcachefs/btree_iter.c:2996 bch2_trans_srcu_unlock+0x120/0x130 [bcachefs]
[ 141.558868] processor_thermal_mbox snd i2c_smbus ptp intel_uncore thunderbolt pcspkr mtd wmi_bmof i2c_mux pps_core soundcore idma64 rfkill intel_vsec int340x_thermal_zone serial_multi_instantiate roles mac_hid mei pmt_class pinctrl_alderlake acpi_tad acpi_pad acpi_thermal_rel nvidia(POE) sg crypto_user dm_mod loop nfnetlink ip_tables x_tables bcachefs libcrc32c crc32c_generic lz4_compress lz4hc_compress xor raid6_pq hid_logitech_hidpp hid_logitech_dj hid_generic usbhid crc32c_intel nvme sha256_ssse3 spi_intel_pci nvme_core spi_intel xhci_pci nvme_auth xhci_pci_renesas video wmi
[ 141.558883] CPU: 7 PID: 3104 Comm: bcachefs Tainted: P OE 6.10.0-arch1-2 #1 ec818e96762f5a8ef3adc527a4740ba5b3ca4df5
[ 141.558885] RIP: 0010:bch2_trans_srcu_unlock+0x120/0x130 [bcachefs]
[ 141.558914] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.558937] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.558966] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.558984] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559002] bch2_trans_begin+0x575/0x790 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559020] ? bch2_trans_begin+0xe3/0x790 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559038] bch2_btree_iter_peek_node_and_restart+0x44/0x50 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559056] bch2_get_btree_in_memory_pos+0x1d6/0x2d0 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559072] bch2_check_backpointers_to_extents+0xe2/0x600 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559088] ? __pfx_thread_with_stdio_fn+0x10/0x10 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559117] ? __bch2_print+0xdb/0xf0 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559143] ? __pfx_thread_with_stdio_fn+0x10/0x10 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559166] bch2_run_recovery_pass+0x35/0xa0 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559195] bch2_run_online_recovery_passes+0x39/0x70 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559221] bch2_fsck_online_thread_fn+0x72/0x150 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 141.559245] thread_with_stdio_fn+0x1a/0x60 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740571] WARNING: CPU: 3 PID: 3104 at fs/bcachefs/btree_iter.c:2996 bch2_trans_srcu_unlock+0x120/0x130 [bcachefs]
[ 152.740644] processor_thermal_mbox snd i2c_smbus ptp intel_uncore thunderbolt pcspkr mtd wmi_bmof i2c_mux pps_core soundcore idma64 rfkill intel_vsec int340x_thermal_zone serial_multi_instantiate roles mac_hid mei pmt_class pinctrl_alderlake acpi_tad acpi_pad acpi_thermal_rel nvidia(POE) sg crypto_user dm_mod loop nfnetlink ip_tables x_tables bcachefs libcrc32c crc32c_generic lz4_compress lz4hc_compress xor raid6_pq hid_logitech_hidpp hid_logitech_dj hid_generic usbhid crc32c_intel nvme sha256_ssse3 spi_intel_pci nvme_core spi_intel xhci_pci nvme_auth xhci_pci_renesas video wmi
[ 152.740659] CPU: 3 PID: 3104 Comm: bcachefs Tainted: P W OE 6.10.0-arch1-2 #1 ec818e96762f5a8ef3adc527a4740ba5b3ca4df5
[ 152.740661] RIP: 0010:bch2_trans_srcu_unlock+0x120/0x130 [bcachefs]
[ 152.740691] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740714] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740742] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740760] ? bch2_trans_srcu_unlock+0x120/0x130 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740778] bch2_trans_begin+0x575/0x790 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740795] ? bch2_trans_begin+0xe3/0x790 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740813] bch2_btree_iter_peek_node_and_restart+0x44/0x50 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740831] bch2_get_btree_in_memory_pos+0x1d6/0x2d0 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740847] bch2_check_backpointers_to_extents+0xe2/0x600 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740864] ? __pfx_thread_with_stdio_fn+0x10/0x10 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740892] ? __bch2_print+0xdb/0xf0 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740918] ? __pfx_thread_with_stdio_fn+0x10/0x10 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740942] bch2_run_recovery_pass+0x35/0xa0 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740970] bch2_run_online_recovery_passes+0x39/0x70 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.740997] bch2_fsck_online_thread_fn+0x72/0x150 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 152.741020] thread_with_stdio_fn+0x1a/0x60 [bcachefs ad0481b6eedd8c3a5c11cdebcb5b861ace49ebbf]
[ 698.233288] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.234301] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): inconsistency detected - emergency read only at journal seq 1165519
[ 698.234350] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.234363] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.264544] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.264560] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.264570] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.289149] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.289162] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.289172] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.381442] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.381455] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.381465] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): dirent points to inode that does not point back:
[ 698.517589] bcachefs (e178eec0-b487-43d7-99e0-decb5eeca04f): unshutdown complete, journal seq 1165519
I tried sudo mount -o fsck,fix_errors,remount,rw / but that doesn't seem to do anything.
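For reference, my understanding is that the fsck and fix_errors options only take effect when the filesystem is mounted fresh, not via remount, so that command probably did nothing. An offline check from a rescue environment would look roughly like this (just a sketch; flag behaviour can differ between bcachefs-tools versions, so check bcachefs fsck --help first):
$ sudo bcachefs fsck -y /dev/sdc1 /dev/nvme0n1p3   # offline fsck against both members, answering yes to repair prompts
$ sudo mount -t bcachefs -o fsck,fix_errors /dev/sdc1:/dev/nvme0n1p3 /mnt   # or request the check at mount time instead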
My bcachefs filesystem lives on /dev/nvme0n1p3 for foreground writes and as a promote target, and on /dev/sdc1 (a hard disk) for background writes:
$ sudo bcachefs show-super /dev/sdc1
Device: (unknown device)
External UUID: e178eec0-b487-43d7-99e0-decb5eeca04f
Internal UUID: 69a01c2f-f34c-4069-875c-77a8e89acc28
Magic number: c68573f6-66ce-90a9-d96a-60cf803df7ef
Device index: 0
Label: (none)
Version: 1.7: mi_btree_bitmap
Version upgrade complete: 1.7: mi_btree_bitmap
Oldest version on disk: 1.7: mi_btree_bitmap
Created: Sat Jul 13 15:51:26 2024
Sequence number: 911
Time of last write: Wed Jul 24 16:13:37 2024
Superblock size: 6.04 KiB/1.00 MiB
Clean: 0
Devices: 2
Sections: members_v1,replicas_v0,disk_groups,clean,journal_seq_blacklist,journal_v2,counters,members_v2,errors,ext,downgrade
Features: zstd,journal_seq_blacklist_v3,reflink,new_siphash,inline_data,new_extent_overwrite,btree_ptr_v2,extents_above_btree_updates,btree_updates_journalled,reflink_inline_data,new_varint,journal_no_flush,alloc_v2,extents_across_btree_nodes
Compat features: alloc_info,alloc_metadata,extents_above_btree_updates_done,bformat_overflow_done
Options:
block_size: 4.00 KiB
btree_node_size: 256 KiB
errors: continue [fix_safe] panic ro
metadata_replicas: 1
data_replicas: 1
metadata_replicas_required: 1
data_replicas_required: 1
encoded_extent_max: 64.0 KiB
metadata_checksum: none [crc32c] crc64 xxhash
data_checksum: none [crc32c] crc64 xxhash
compression: none
background_compression: zstd:15
str_hash: crc32c crc64 [siphash]
metadata_target: ssd.nvme-3
foreground_target: ssd
background_target: hdd
promote_target: ssd.nvme-3
erasure_code: 0
inodes_32bit: 1
shard_inode_numbers: 1
inodes_use_key_cache: 1
gc_reserve_percent: 8
gc_reserve_bytes: 0 B
root_reserve_percent: 0
wide_macs: 0
acl: 1
usrquota: 0
grpquota: 0
prjquota: 0
journal_flush_delay: 1000
journal_flush_disabled: 1
journal_reclaim_delay: 100
journal_transaction_names: 1
version_upgrade: [compatible] incompatible none
nocow: 0
members_v2 (size 592):
Device: 0
Label: hdd-1 (5)
UUID: 760707e7-329c-45a4-a165-35ad9e8659f6
Size: 6.59 TiB
read errors: 0
write errors: 0
checksum errors: 0
seqread iops: 0
seqwrite iops: 0
randread iops: 0
randwrite iops: 0
Bucket size: 512 KiB
First bucket: 0
Buckets: 13827164
Last mount: Wed Jul 24 16:13:31 2024
Last superblock write: 911
State: rw
Data allowed: journal,btree,user
Has data: btree,user
Btree allocated bitmap blocksize: 32.0 MiB
Btree allocated bitmap: 0000011111111111111111111111111111111100111111110010000100000001
Durability: 1
Discard: 1
Freespace initialized: 1
Device: 2
Label: nvme-3 (6)
UUID: e1417382-7fe8-4843-be77-917acfb7f9df
Size: 931 GiB
read errors: 0
write errors: 0
checksum errors: 0
seqread iops: 0
seqwrite iops: 0
randread iops: 0
randwrite iops: 0
Bucket size: 512 KiB
First bucket: 0
Buckets: 1906688
Last mount: Wed Jul 24 16:13:31 2024
Last superblock write: 911
State: rw
Data allowed: journal,btree,user
Has data: journal,btree,user,cached
Btree allocated bitmap blocksize: 32.0 MiB
Btree allocated bitmap: 0000001111111111111111111111111111111111111111111111111111111111
Durability: 1
Discard: 1
Freespace initialized: 1
errors (size 40):
alloc_key_to_missing_lru_entry 721 Fri Jul 19 12:21:07 2024
inode_unreachable 7 Fri Jul 19 12:20:23 2024
Update: I booted from a USB stick. Running fsck multiple times in a row keeps finding problems to fix, weirdly enough.
I'm turning background compression off, just to make the system simpler and reduce the 'surface' of things that can go wrong.
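Roughly how I turned it off at runtime, in case it helps anyone else; the sysfs option directory is keyed by the filesystem's external UUID, and I'm not certain whether writes there persist across reboots, so treat this as a sketch:
$ echo none | sudo tee /sys/fs/bcachefs/e178eec0-b487-43d7-99e0-decb5eeca04f/options/background_compression
$ cat /sys/fs/bcachefs/e178eec0-b487-43d7-99e0-decb5eeca04f/options/background_compression   # verify it took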
Next update: I've booted my normal system again. Running fsck multiple times no longer keeps finding problems; it's stable so far. Perhaps it randomly fixed itself, or turning off background_compression did the trick? I'll keep monitoring.
I still have background_compression off and haven't hit this problem again since.
I'm curious about the 'dirent points to inode that does not point back' subject; it doesn't seem to match the rest of the report.
I just pushed "bcachefs: Convert for_each_btree_node() to lockrestart_do()" to the testing branch to fix the SRCU warning.
The subject line is a quote of what's in dmesg.
Not sure how related this is, but I started consistently receiving the same error when starting Steam. Specifically, it seems to always occur during the Steam client update.
Error message:
bcachefs (93879ca1-9ad1-47b1-866e-623b60c911d9): dirent points to inode that does not point back:
u64s 8 type dirent 673049566:1177845408488155637:4294967288 len 0 ver 0: ldconfig -> 1612331176 type reg
inum: 1612331176 mode=100755
flags= (15300000)
journal_seq=10674964
bi_size=991016
bi_sectors=1936
bi_version=0
bi_atime=12537025645715285
bi_ctime=18373447315757940
bi_mtime=6924161156869999
bi_otime=12537025645715285
bi_uid=1000
bi_gid=1000
bi_nlink=0
bi_generation=0
bi_dev=0
bi_data_checksum=0
bi_compression=0
bi_project=0
bi_background_compression=0
bi_data_replicas=0
bi_promote_target=0
bi_foreground_target=0
bi_background_target=0
bi_erasure_code=0
bi_fields_set=0
bi_dir=1479355554
bi_dir_offset=5768648552471826948
bi_subvol=0
bi_parent_subvol=0
bi_nocow=0
bcachefs (93879ca1-9ad1-47b1-866e-623b60c911d9): inconsistency detected - emergency read only at journal seq 10888398
bcachefs (93879ca1-9ad1-47b1-866e-623b60c911d9): unshutdown complete, journal seq 10888398
Versions: Kernel 6.10.8-linux-cachyos, bcachefs-tools 1.11.0
You guys running online fsck?
I just hit it for the first time last night, after running online fsck.
You guys running online fsck?
Yes
I was also running online fsck, during boot.
yeah, I see the bug - when we find an unlinked inode we're not checking if it's still open (because if it's not an online fsck, why would it be?)
this'll be tricky to fix, because vfs inodes are indexed by subvolume, and we only have the snapshot id in fsck
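For anyone who wants to poke at this, the scenario is "an unlinked but still-open inode while online fsck runs"; something along these lines should exercise it (a sketch with placeholder paths, not a confirmed reproducer):
$ exec 3< /mnt/some-file                        # keep a file descriptor open on an existing file
$ rm /mnt/some-file                             # unlink it, leaving the inode unlinked but still open
$ sudo bcachefs fsck /dev/sdc1 /dev/nvme0n1p3   # fsck runs online because the filesystem is mounted
$ exec 3<&-                                     # drop the fd afterwards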
Btw, is it expected that background compression hits more bugs like these, or was that just a coincidence?
background compression wouldn't have anything to do with this, no
I've also hit this today; I'll try an offline fsck then. Any progress on fixing this?
Was online fsck involved?
Was online fsck involved?
Yep, it was. I ran that yesterday I think?
After upgrading to Linux 6.12.1-arch1-1 I am getting this problem again.
This makes my system pretty much unusable. Turning off compression doesn't seem to help this time.
I wrote a test to run online fsck with open unlinked files and so far I haven't been able to reproduce. Were snapshots involved? Anything else you might be able to add?
No snapshots involved. This time it was independent of fsck, and seemed to happen when I tried to access specific files (mostly stuff in my Steam directory). Even trying to delete them triggered the 'emergency read only' response.
Fsck was not involved. (I tried rebooting and running it, but it mostly exited with success without actually fixing the issue.)
Anything else you might be able to add?
I fixed it for now by rebooting into a rescue system, mounting there, and deleting the files. The rescue system used version 12 instead of 13, so perhaps that's why it worked? (It's running Linux 6.11 instead of 6.12, I think.)
I only saw the problem when I recently upgraded my desktop to Linux 6.12.
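In concrete terms, the rescue-system workaround was roughly the following (device paths are my two members; the file path is just a placeholder for the Steam files that kept triggering the error):
$ sudo mount -t bcachefs /dev/sdc1:/dev/nvme0n1p3 /mnt
$ sudo rm -rf /mnt/path/to/the-offending-files
$ sudo umount /mnt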
This is kinda concerning. I was thinking of setting up a bcachefs root using a failing 250 SSD, a working 240 SSD, and a working 250 HDD, utilizing write-mostly. I might just not do it and do PXE boot instead. This is for a machine without ECC that I intend to have LAN'd to a DDR4 ECC machine over 2.5GbE.
I hit this issue.
For testing performance, I changed the compression and background_compression options and used bcachefs set-file-option to enable/disable compression on files multiple times. I also changed the foreground/background/metadata targets for testing.
I also ran online fsck.
Same issue here (kernel 6.14.0-arch1-1), also hits while I'm trying to start Steam. I think this should be addressed at least with a workaround.
6.15 has the online repair code for this, all that's left is flipping on self healing (when not running fsck).
Could you guys post your logs of when this happens? I need to look for patterns.
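For example, something like this should grab the relevant chunk of the kernel log from the current boot (adjust the context lines as needed):
$ sudo journalctl -k -b | grep -B2 -A40 'dirent points to inode'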
Also, get me logs of the fsck runs
Sure. Userspace or in-kernel?
Anyway, here goes both:
egormanga@Beast:~$ sudo bcachefs fsck -v /dev/mapper/root
Running fsck online
bcachefs (dm-0): check_alloc_info... done
bcachefs (dm-0): check_lrus... done
bcachefs (dm-0): check_btree_backpointers... done
bcachefs (dm-0): check_backpointers_to_extents...bcachefs (dm-0): backpointers_to_extents: 21%, done 1263/5985 nodes, at backpointers:0:610281324544:0
bcachefs (dm-0): backpointers_to_extents: 62%, done 3755/5985 nodes, at backpointers:0:1779630645248:0
bcachefs (dm-0): backpointers_to_extents: 99%, done 5926/5985 nodes, at backpointers:0:2901445943296:0
done
bcachefs (dm-0): check_extents_to_backpointers... done
bcachefs (dm-0): check_alloc_to_lru_refs... done
bcachefs (dm-0): check_snapshot_trees... done
bcachefs (dm-0): check_snapshots... done
bcachefs (dm-0): check_subvols... done
bcachefs (dm-0): check_subvol_children... done
bcachefs (dm-0): delete_dead_snapshots... done
bcachefs (dm-0): check_indirect_extents... done
bcachefs (dm-0): check_root... done
bcachefs (dm-0): check_subvolume_structure... done
bcachefs (dm-0): check_directory_structure... done
egormanga@Beast:~$ sudo bcachefs fsck -kv /dev/mapper/root
Running fsck online
bcachefs (dm-0): check_alloc_info... done
bcachefs (dm-0): check_lrus... done
bcachefs (dm-0): check_btree_backpointers... done
bcachefs (dm-0): check_backpointers_to_extents...bcachefs (dm-0): backpointers_to_extents: 35%, done 2118/5985 nodes, at backpointers:0:997799608320:0
bcachefs (dm-0): backpointers_to_extents: 74%, done 4471/5985 nodes, at backpointers:0:2222493532160:0
done
bcachefs (dm-0): check_extents_to_backpointers... done
bcachefs (dm-0): check_alloc_to_lru_refs... done
bcachefs (dm-0): check_snapshot_trees... done
bcachefs (dm-0): check_snapshots... done
bcachefs (dm-0): check_subvols... done
bcachefs (dm-0): check_subvol_children... done
bcachefs (dm-0): delete_dead_snapshots... done
bcachefs (dm-0): check_indirect_extents... done
bcachefs (dm-0): check_root... done
bcachefs (dm-0): check_subvolume_structure... done
bcachefs (dm-0): check_directory_structure... done
Example of my case, if you mean this:
[ 110.626151] bcachefs (dm-0): dirent points to inode that does not point back:
u64s 9 type dirent 1342507576:2365319241113688128:4294967289 len 0 ver 0: libudev.so.1.7.0 -> 1208207331 type reg
inum: 1208207331:4294967289
mode=100644
flags=(15300000)
journal_seq=18837376
hash_seed=d377b816d9179f00
hash_type=siphash
bi_size=157904
bi_sectors=312
bi_version=0
bi_atime=13038696599760317
bi_ctime=13038696522760276
bi_mtime=18440967421964499545
bi_otime=8907017921576123
bi_uid=1000
bi_gid=1000
bi_nlink=0
bi_generation=0
bi_dev=0
bi_data_checksum=0
bi_compression=0
bi_project=0
bi_background_compression=0
bi_data_replicas=0
bi_promote_target=0
bi_foreground_target=0
bi_background_target=0
bi_erasure_code=0
bi_fields_set=0
bi_dir=919803
bi_dir_offset=4639968065783051684
bi_subvol=0
bi_parent_subvol=0
bi_nocow=0
bi_depth=0
bi_inodes_32bit=0
6.15 has the online repair code for this
Can I try out this specific patch on 6.14?
I'm running into a similar problem: when I try to delete a directory, the filesystem goes read-only. I've created a gist with the output of bcachefs fsck on all three devices, bcachefs show-super on all three devices, the rm command that turns the system read-only, and the full output of dmesg since boot: https://gist.github.com/leroycep/3c82b7739d431a2a64bdf0ec19efb7fc
If this was snapshots & subvolumes related, the fix is now in Linus's tree and in the latest tools - give that a try.
In my case I never used snapshots, and I can't remember subvoluming this fs (btw, is there a way to list subvolumes from userspace?). I will try out the latest tree anyway, thank you.
If you hit it again - feed me more logs
Since running kernel 6.15 in Archlinux I haven't hit this problem for a while, despite turning background compression back on. So I'm closing this issue for now.