bcachefs
`*_target` doesn't seem to work [f37513acffac]
f37513acffac21a08b7d8078c692ce719d84166a
I have a filesystem with 2x20TB HDDs (hdd.hdd1 & hdd.hdd2) and 2x250GB SSDs (ssd.ssd1 & ssd.ssd2) with:
metadata_target: ssd
foreground_target: ssd
background_target: hdd
promote_target: ssd
but it doesn't seem to work: cached data (from promotes, I guess?) stays mostly on the HDDs, metadata is written mostly to the HDDs, and judging by the user data usage on the SSDs during writes, it doesn't work as a foreground target either.
I didn't expect to see cached data on the HDDs at all, for example, because they already store the [background] data, so there's no point in keeping a promoted copy there (AFAIU it's not recompressed for the cache, so we don't even save any CPU).
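For reference, this is roughly how the configured targets can be compared against the labels the member devices actually carry while the filesystem is mounted. The sysfs paths are assumed from the external UUID shown below, and the exact attribute layout may differ between versions, so treat this as a sketch:
$ # per-device labels; dev-N corresponds to the device index in show-super
$ for d in /sys/fs/bcachefs/360fc60c-8c44-4f3e-9cc4-fbaeee9e7c3b/dev-*; do echo "$d: $(cat "$d"/label)"; done
$ # target options as the running filesystem sees them
$ grep . /sys/fs/bcachefs/360fc60c-8c44-4f3e-9cc4-fbaeee9e7c3b/options/*_target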
$ bcachefs fs usage -h /mnt/bcachefs/
Filesystem: 360fc60c-8c44-4f3e-9cc4-fbaeee9e7c3b
Size: 34.5 TiB
Used: 17.5 TiB
Online reserved: 728 KiB
Data type Required/total Devices
reserved: 1/0 [] 13.9 MiB
btree: 1/3 [sde1 sdc1 sdd1] 1.50 MiB
btree: 1/3 [sda1 sde1 sdd1] 57.3 GiB
btree: 1/3 [sda1 sde1 sdc1] 126 GiB
user: 1/1 [sde1] 4.61 TiB
user: 1/1 [sda1] 12.2 TiB
user: 1/1 [sdc1] 2.66 GiB
user: 1/2 [sda1 sde1] 476 GiB
cached: 1/1 [sda1] 222 GiB
cached: 1/1 [sdc1] 11.2 GiB
cached: 1/1 [sde1] 298 GiB
cached: 1/1 [sdd1] 5.00 MiB
hdd.hdd1 (device 0): sda1 rw
data buckets fragmented
free: 0 B 11142926
sb: 3.00 MiB 7 508 KiB
journal: 4.00 GiB 8192
btree: 61.0 GiB 135762 5.33 GiB
user: 12.5 TiB 26136844 3.02 GiB
cached: 222 GiB 458121
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 1
erasure coded: 0 B 0
capacity: 18.1 TiB 37881853
hdd.hdd2 (device 1): sde1 rw
data buckets fragmented
free: 0 B 13457844
sb: 3.00 MiB 4 1020 KiB
journal: 8.00 GiB 8192
btree: 61.0 GiB 76895 14.1 GiB
user: 4.84 TiB 5093213 12.0 GiB
cached: 298 GiB 304777
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 1
erasure coded: 0 B 0
capacity: 18.1 TiB 18940926
sdd.ssd1 (device 2): sdc1 rw
data buckets fragmented
free: 0 B 353919
sb: 3.00 MiB 7 508 KiB
journal: 1.82 GiB 3726
btree: 41.8 GiB 90737 2.46 GiB
user: 2.66 GiB 5463 6.92 MiB
cached: 11.2 GiB 23096
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
erasure coded: 0 B 0
capacity: 233 GiB 476948
sdd.ssd1 (device 3): sdd1 ro
data buckets fragmented
free: 0 B 428180
sb: 3.00 MiB 7 508 KiB
journal: 1.82 GiB 3726
btree: 19.1 GiB 45025 2.87 GiB
user: 0 B 0
cached: 5.00 MiB 10
parity: 0 B 0
stripe: 0 B 0
need_gc_gens: 0 B 0
need_discard: 0 B 0
erasure coded: 0 B 0
capacity: 233 GiB 476948
$ bcachefs show-super /dev/sdd1
External UUID: 360fc60c-8c44-4f3e-9cc4-fbaeee9e7c3b
Internal UUID: bc05affd-9fd1-4eb5-b497-3f7956ac57d2
Device index: 3
Label:
Version: 1.1: snapshot_skiplists
Version upgrade complete: 1.1: snapshot_skiplists
Oldest version on disk: 0.29: snapshot_trees
Created: Fri Jun 16 22:38:16 2023
Sequence number: 255
Superblock size: 6040
Clean: 0
Devices: 4
Sections: members,replicas_v0,quota,disk_groups,clean,journal_seq_blacklist,journal_v2,counters
Features: zstd,journal_seq_blacklist_v3,new_siphash,inline_data,new_extent_overwrite,btree_ptr_v2,extents_above_btree_updates,btree_updates_journalled,new_varint,journal_no_flush,alloc_v2,extents_across_btree_nodes
Compat features: alloc_info,alloc_metadata,extents_above_btree_updates_done,bformat_overflow_done
Options:
block_size: 4.00 KiB
btree_node_size: 256 KiB
errors: continue [ro] panic
metadata_replicas: 3
data_replicas: 1
metadata_replicas_required: 1
data_replicas_required: 1
encoded_extent_max: 64.0 KiB
metadata_checksum: none crc32c [crc64] xxhash
data_checksum: none [crc32c] crc64 xxhash
compression: none
background_compression: zstd:15
str_hash: crc32c crc64 [siphash]
metadata_target: ssd
foreground_target: ssd
background_target: hdd
promote_target: ssd
erasure_code: 0
inodes_32bit: 1
shard_inode_numbers: 1
inodes_use_key_cache: 1
gc_reserve_percent: 5
gc_reserve_bytes: 0 B
root_reserve_percent: 0
wide_macs: 0
acl: 1
usrquota: 1
grpquota: 1
prjquota: 1
journal_flush_delay: 1000
journal_flush_disabled: 0
journal_reclaim_delay: 100
journal_transaction_names: 1
version_upgrade: [compatible] incompatible none
nocow: 0
members (size 232):
Device: 0
UUID: 62f3139c-4515-4e6a-9aa3-24f598263ece
Size: 18.1 TiB
Bucket size: 512 KiB
First bucket: 0
Buckets: 37881853
Last mount: Thu Jul 13 11:24:38 2023
State: rw
Label: hdd1 (1)
Data allowed: journal,btree,user
Has data: journal,btree,user,cached
Discard: 0
Freespace initialized: 1
Device: 1
UUID: 4c1c7eff-f1e9-44b8-bcac-186fb4aa2367
Size: 18.1 TiB
Bucket size: 1.00 MiB
First bucket: 0
Buckets: 18940926
Last mount: Thu Jul 13 11:24:38 2023
State: rw
Label: hdd2 (2)
Data allowed: journal,btree,user
Has data: journal,btree,user,cached
Discard: 0
Freespace initialized: 1
Device: 2
UUID: 145ea1b5-b6c6-4d13-bdee-1d482d29758f
Size: 233 GiB
Bucket size: 512 KiB
First bucket: 0
Buckets: 476948
Last mount: Thu Jul 13 11:24:38 2023
State: rw
Label: ssd1 (7)
Data allowed: journal,btree,user
Has data: journal,btree,user,cached
Discard: 0
Freespace initialized: 1
Device: 3
UUID: 933d8443-8d62-403a-9ef2-e0a3b3fe8c42
Size: 233 GiB
Bucket size: 512 KiB
First bucket: 0
Buckets: 476948
Last mount: Thu Jul 13 11:24:38 2023
State: ro
Label: ssd1 (7)
Data allowed: journal,btree,user
Has data: journal,btree,user,cached
Discard: 0
Freespace initialized: 1
sdd.ssd1 (device 2):
sdd.ssd2 (device 3):
:facepalm:
OTOH, after renaming to ssd via sysfs, the cache on the HDDs keeps growing, so let's keep this open.
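(The rename was along these lines, writing a new group path to the per-device label attribute in sysfs; dev-2 and dev-3 are the SSDs' device indices from show-super above, and the exact accepted syntax here is my best guess:)
$ echo ssd.ssd1 > /sys/fs/bcachefs/360fc60c-8c44-4f3e-9cc4-fbaeee9e7c3b/dev-2/label
$ echo ssd.ssd2 > /sys/fs/bcachefs/360fc60c-8c44-4f3e-9cc4-fbaeee9e7c3b/dev-3/label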
Uuuurrh, you see that both SSDs are named ssd.ssd1?