ASSERT at cmd/zdb/zdb.c:468:iterate_through_spacemap_logs()
System information
Type | Version/Name
--- | ---
Proxmox | 8.4.1
Kernel Version | 6.14.0-2-pve
Architecture | x86_64
OpenZFS Version | zfs-2.2.7-pve2
Describe the problem you're observing
While analyzing permanent errors on my pool with `zdb -bcsv rpool`, zdb crashed with an assertion failure while checking the spacemap logs for leaks.
Include any warning/errors/backtraces from the system logs
root@proxmox:~# zdb -bcsv rpool
Traversing all blocks to verify checksums and verify nothing leaked ...
loading concrete vdev 0, metaslab 115 of 116 ...
335G completed ( 40MB/s) estimated time remaining: 1hr 15min 42sec
409G completed ( 39MB/s) estimated time remaining: 0hr 46min 23sec zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, d137> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=15000679L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, d168> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=15014149L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
409G completed ( 39MB/s) estimated time remaining: 0hr 46min 21sec zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, 10efb> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=14653780L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, 11253> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=14239443L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, 11271> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=14248650L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, 113e5> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=14293746L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
409G completed ( 39MB/s) estimated time remaining: 0hr 46min 06sec zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, 29dfe> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=14767719L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
zdb_blkptr_cb: Got error 52 reading <7497, 1, 0, 29dd7> DVA[0]=<0:7c73841600:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=14757245L/14239443P fill=1 cksum=0167682fedd7309e:7743504f50fb3a0f:ac9f436f673e7c75:f9181f9006d0a3e4 -- skipping
416G completed ( 39MB/s) estimated time remaining: 0hr 43min 11sec zdb_blkptr_cb: Got error 52 reading <7505, 1, 0, 1aa40> DVA[0]=<0:8618878c00:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=15014292L/14073046P fill=1 cksum=056b638d94ce1115:6a2b90d313d08395:d2c4ea665e733d2b:6a15f6ed7343cd12 -- skipping
420G completed ( 39MB/s) estimated time remaining: 0hr 41min 31sec zdb_blkptr_cb: Got error 52 reading <7505, 1, 0, 1c8ec0> DVA[0]=<0:8618878c00:200> [L0 zvol object] sha256 lz4 unencrypted LE contiguous dedup single size=4000L/200P birth=15014238L/14073046P fill=1 cksum=056b638d94ce1115:6a2b90d313d08395:d2c4ea665e733d2b:6a15f6ed7343cd12 -- skipping
526G completed ( 45MB/s) estimated time remaining: 106806676hr 07min 08sec
Error counts:
errno count
52 10
leaked space: vdev 0, offset 0x620bca400, size 1024
leaked space: vdev 0, offset 0x9e2994a00, size 1024
leaked space: vdev 0, offset 0x9e2995600, size 1024
leaked space: vdev 0, offset 0x9e29f2a00, size 11264
leaked space: vdev 0, offset 0x1c12592a00, size 6144
leaked space: vdev 0, offset 0x1c936d0400, size 2048
leaked space: vdev 0, offset 0x1cbe1bf600, size 3072
leaked space: vdev 0, offset 0x1cbe1c6e00, size 2048
leaked space: vdev 0, offset 0x1cbe2f4800, size 1024
leaked space: vdev 0, offset 0x1cbe2f5400, size 1024
leaked space: vdev 0, offset 0x41f4035a00, size 512
leaked space: vdev 0, offset 0x4332517000, size 512
leaked space: vdev 0, offset 0x4442e5a200, size 1024
leaked space: vdev 0, offset 0x4442f0f600, size 1024
leaked space: vdev 0, offset 0x4565781400, size 512
leaked space: vdev 0, offset 0x4565783400, size 512
leaked space: vdev 0, offset 0x4565890600, size 2048
leaked space: vdev 0, offset 0x4565891400, size 512
leaked space: vdev 0, offset 0x7c296a3800, size 45056
leaked space: vdev 0, offset 0x7c296c5800, size 45056
leaked space: vdev 0, offset 0x7c296f9a00, size 20480
leaked space: vdev 0, offset 0x7d5df03a00, size 47104
leaked space: vdev 0, offset 0x7d62df2600, size 49152
leaked space: vdev 0, offset 0x7d71fcba00, size 1024
leaked space: vdev 0, offset 0x7d71fcfe00, size 1024
leaked space: vdev 0, offset 0x7d720ee600, size 49152
leaked space: vdev 0, offset 0x7d721ffe00, size 25600
leaked space: vdev 0, offset 0x7d72377800, size 54272
leaked space: vdev 0, offset 0x7d7238d400, size 1024
leaked space: vdev 0, offset 0x7dc0b25e00, size 512
leaked space: vdev 0, offset 0x7dc1fa5600, size 46592
leaked space: vdev 0, offset 0x7dc46c4800, size 46592
leaked space: vdev 0, offset 0x7dc470f200, size 24576
leaked space: vdev 0, offset 0x7dc6b97400, size 512
leaked space: vdev 0, offset 0x7dc89c4a00, size 176640
leaked space: vdev 0, offset 0x8000094000, size 32768
leaked space: vdev 0, offset 0x800040c000, size 16384
leaked space: vdev 0, offset 0x80065d7000, size 20480
leaked space: vdev 0, offset 0x8019079000, size 8192
leaked space: vdev 0, offset 0x806babb000, size 32768
leaked space: vdev 0, offset 0x8183460000, size 53248
leaked space: vdev 0, offset 0x81854a6000, size 4096
leaked space: vdev 0, offset 0x8411110400, size 6144
leaked space: vdev 0, offset 0x84200c5200, size 3072
leaked space: vdev 0, offset 0x84200d6600, size 2048
leaked space: vdev 0, offset 0x84200e8200, size 1024
leaked space: vdev 0, offset 0x84200e8e00, size 1024
leaked space: vdev 0, offset 0x8450a51600, size 1024
leaked space: vdev 0, offset 0x8450a55c00, size 1024
leaked space: vdev 0, offset 0xae60900800, size 3072
leaked space: vdev 0, offset 0xae60907800, size 1024
leaked space: vdev 0, offset 0xae631eea00, size 1024
leaked space: vdev 0, offset 0xaf2f2d3a00, size 6656
leaked space: vdev 0, offset 0xaf2f2dd000, size 512
leaked space: vdev 0, offset 0xaf2f2df000, size 512
leaked space: vdev 0, offset 0xaf31884600, size 2048
leaked space: vdev 0, offset 0xaf31ac2800, size 512
leaked space: vdev 0, offset 0xb0b2d9fc00, size 2048
leaked space: vdev 0, offset 0xb0b2dc5e00, size 2560
leaked space: vdev 0, offset 0xb0b30f0a00, size 2048
leaked space: vdev 0, offset 0xb0b3171400, size 2048
leaked space: vdev 0, offset 0xb0b3285400, size 1536
leaked space: vdev 0, offset 0xb0b3342600, size 2048
leaked space: vdev 0, offset 0xb0b33a1000, size 2048
leaked space: vdev 0, offset 0xb0b33f9800, size 2048
leaked space: vdev 0, offset 0xb0b3508c00, size 8192
leaked space: vdev 0, offset 0xb0b35bc200, size 2048
leaked space: vdev 0, offset 0xb0b37d2c00, size 4096
leaked space: vdev 0, offset 0xb0b37d4400, size 4096
leaked space: vdev 0, offset 0xb0b37d6400, size 4096
leaked space: vdev 0, offset 0xb0b3839200, size 1536
leaked space: vdev 0, offset 0xb0b385b000, size 2048
leaked space: vdev 0, offset 0xb0b3899a00, size 2048
leaked space: vdev 0, offset 0xb0b38b5a00, size 2048
leaked space: vdev 0, offset 0xb0b38f4e00, size 1536
leaked space: vdev 0, offset 0xb0b3b88800, size 2048
leaked space: vdev 0, offset 0xb0b3e8b000, size 1536
leaked space: vdev 0, offset 0xb0b4469600, size 4096
leaked space: vdev 0, offset 0xb0b4539800, size 1536
leaked space: vdev 0, offset 0xb0b461ce00, size 1024
leaked space: vdev 0, offset 0xb0b461da00, size 1024
leaked space: vdev 0, offset 0xb0b472d400, size 2560
leaked space: vdev 0, offset 0xb0b472fa00, size 11264
leaked space: vdev 0, offset 0xb0b48b1600, size 1536
leaked space: vdev 0, offset 0xb0b4db3400, size 1536
leaked space: vdev 0, offset 0xb0b5447200, size 1536
leaked space: vdev 0, offset 0xb0b5448400, size 20480
leaked space: vdev 0, offset 0xb0b5464a00, size 3584
leaked space: vdev 0, offset 0xb0b5472600, size 7680
leaked space: vdev 0, offset 0xb0b5527000, size 1536
leaked space: vdev 0, offset 0xb0b5e54600, size 7168
leaked space: vdev 0, offset 0xb0b6297a00, size 3072
leaked space: vdev 0, offset 0xb0b6299200, size 4096
leaked space: vdev 0, offset 0xb0b62d1000, size 1536
leaked space: vdev 0, offset 0xb0b67afc00, size 2560
leaked space: vdev 0, offset 0xb0b67b9c00, size 8704
leaked space: vdev 0, offset 0xb0b6f0bc00, size 1536
leaked space: vdev 0, offset 0xb0b6f6cc00, size 3584
leaked space: vdev 0, offset 0xb0b741b800, size 2560
leaked space: vdev 0, offset 0xb0b7559800, size 7168
leaked space: vdev 0, offset 0xb0b75dde00, size 6144
leaked space: vdev 0, offset 0xb0b9423400, size 24064
leaked space: vdev 0, offset 0xb0b960de00, size 1024
leaked space: vdev 0, offset 0xbc7c74aa00, size 512
leaked space: vdev 0, offset 0xc217740600, size 47104
leaked space: vdev 0, offset 0xc217771e00, size 45056
leaked space: vdev 0, offset 0xc217788e00, size 45056
leaked space: vdev 0, offset 0xc2177f7000, size 20480
leaked space: vdev 0, offset 0xc283de1600, size 512
leaked space: vdev 0, offset 0xc284c17400, size 46592
leaked space: vdev 0, offset 0xc286c38600, size 46592
leaked space: vdev 0, offset 0xc286cba600, size 20992
leaked space: vdev 0, offset 0xc28f722a00, size 176640
leaked space: vdev 0, offset 0xc3a0eff800, size 1024
leaked space: vdev 0, offset 0xc3a0f03000, size 1024
leaked space: vdev 0, offset 0xc3a0f7a200, size 44032
leaked space: vdev 0, offset 0xc3a1059600, size 25600
leaked space: vdev 0, offset 0xc3a1160e00, size 41984
leaked space: vdev 0, offset 0xc3a1178600, size 9216
leaked space: vdev 0, offset 0xc3a117ae00, size 1024
leaked space: vdev 0, offset 0xc3a6eee400, size 49152
ASSERT at cmd/zdb/zdb.c:468:iterate_through_spacemap_logs()
space_map_iterate(sm, space_map_length(sm), iterate_through_spacemap_logs_cb, &uic) == 0 (0x34 == 0)
PID: 703025 COMM: zdb
TID: 703025 NAME: zdb
Call trace:
/lib/x86_64-linux-gnu/libzpool.so.5(libspl_assertf+0x157) [0x7aa254761777]
zdb(+0x11920) [0x62ee283af920]
zdb(+0x1aaaa) [0x62ee283b8aaa]
zdb(+0x2150d) [0x62ee283bf50d]
zdb(+0xb1d4) [0x62ee283a91d4]
/lib/x86_64-linux-gnu/libc.so.6(+0x2724a) [0x7aa253e0b24a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7aa253e0b305]
zdb(+0xc7e1) [0x62ee283aa7e1]
zdb(+0x13e03)[0x62ee283b1e03]
/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7aa253e20050]
/lib/x86_64-linux-gnu/libc.so.6(+0x8aeec)[0x7aa253e6eeec]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x12)[0x7aa253e1ffb2]
/lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7aa253e0a472]
/lib/x86_64-linux-gnu/libzpool.so.5(+0x57ad7)[0x7aa2544bcad7]
zdb(+0x11920)[0x62ee283af920]
zdb(+0x1aaaa)[0x62ee283b8aaa]
zdb(+0x2150d)[0x62ee283bf50d]
zdb(+0xb1d4)[0x62ee283a91d4]
/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7aa253e0b24a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7aa253e0b305]
zdb(+0xc7e1)[0x62ee283aa7e1]
Aborted
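For what it's worth, error 52 (0x34) is how ZFS reports checksum failures (ECKSUM is defined as EBADE, which is 52 on Linux), so the assertion fires because space_map_iterate() itself hit a checksum error while reading a log spacemap. To still get a complete leak report, my plan is to re-run with assertions relaxed; a sketch, assuming this build carries the standard assertion-handling flags from zdb(8):

```sh
# -AAA: do not abort on assertion failures and also enable panic
# recovery, so zdb can keep going past the bad log spacemap.
# zdb is read-only, so this does not modify the pool.
zdb -AAA -bcsv rpool
```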
This means you have a checksum failure. This is odd given that there should be at least two copies of the metadata and I would have expected the code to read the alternate copy. If both copies are corrupt, which this suggests, then this pool is severely damaged and needs to be rebuilt.
I was initially affected by this bug: https://github.com/openzfs/zfs/issues/15646
I migrated my RAID1 pool to LUKS, one disk after another. Unfortunately, I did not realize that the first migrated disk was consistently accumulating checksum errors: I did see some, but after scrubbing the pool once and confirming all issues were fixed (for the moment), I carried on with converting the other disk to LUKS, not expecting the errors to creep back. By the time the second disk finished resilvering, the first disk was showing a lot of checksum errors again, so at the end of the day both disks were showing errors, and the second disk was actually resilvered with the first disk's checksum errors in place.
The error count was steadily increasing, and at some point the pool was reporting 1.6M errors, with no problems in smartctl or in dmesg, so it was obvious something was amiss between LUKS and ZFS, at which point I found issue https://github.com/openzfs/zfs/issues/15646.
So I rebooted from the stable 6.8.12 kernel into the opt-in 6.14 kernel in Proxmox (which actually ships the same ZFS 2.2.7, but I assumed the stable one was older). I also enabled the zfs.zfs_vdev_disk_classic=0 module option. At this point the number of errors dropped to only a couple hundred. I restored all the affected datasets from backups and scrubbed several times, but the metadata error remains.
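For reference, this is roughly how I made the module option persistent; a sketch, assuming the usual Debian/Proxmox modprobe.d and initramfs setup:

```sh
# Opt out of the classic vdev disk submission path
# (the workaround discussed in openzfs/zfs#15646).
echo "options zfs zfs_vdev_disk_classic=0" > /etc/modprobe.d/zfs-vdev-disk.conf

# Rebuild the initramfs so the option is applied at boot.
update-initramfs -u -k all

# After a reboot, verify the live value.
cat /sys/module/zfs/parameters/zfs_vdev_disk_classic
```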
Additionally, I have an odd problem with one of the datasets I restored from backup; the permanent error just keeps coming back:
rpool/data/vm-109-disk-0:<0x1>
The backup was taken days before the migration.
Moreover, despite repeated scrubs completing without issues, the checksum errors slowly but surely increase, by about 10 errors a day.
Additionally, yesterday I took snapshots of the pool to create a backup and, somewhat unsurprisingly, VM 109 showed up with errors as well:
rpool/data/vm-109-disk-0@fullbackup:<0x1>
Most confusingly, however, even if I restore the VM 109 dataset from scratch, the errors reappear for that very dataset. I have tried 3 or 4 times, to no avail. This has not been the case with any other dataset I restored; they all seem fine. It makes me wonder whether the damaged metadata relates specifically to the data this dataset uses.
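To get a better handle on where the new errors keep coming from, I am planning to watch the pool's error events as they arrive; a rough sketch using the standard zpool events facility:

```sh
# Follow events as they happen; checksum failures arrive as
# ereport.fs.zfs.checksum events naming the affected vdev.
zpool events -f rpool

# Full details (objset, object, offset, etc.) for everything
# recorded so far.
zpool events -v rpool
```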
> This is odd given that there should be at least two copies of the metadata and I would have expected the code to read the alternate copy. If both copies are corrupt, which this suggests, then this pool is severely damaged and needs to be rebuilt.
Which brings me to what you say here: until I actually figure out why the checksum errors keep creeping in, I cannot be sure whether the metadata is actually corrupt, especially considering there should be two copies, as you say?
Please do note that I filed this issue to report the zdb crash and was going to report the LUKS checksum problem in a separate issue today, but I am happy to continue the discussion here.
I did some extra investigation; zdb reported checksum issues in two metadata objects, 7497 and 7505:
root@proxmox:~# zdb -dddd rpool 7497
Dataset mos [META], ID 0, cr_txg 4, 7.82G, 444 objects, rootbp DVA[0]=<0:586edb600:200> DVA[1]=<0:245bc50000:200> DVA[2]=<0:5625039200:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique triple size=1000L/200P birth=15078113L/15078113P fill=444 cksum=0000000970b3606c:000003d78f367b58:0000cc16dbf695ff:001cc9b1821cf3f1
Object lvl iblk dblk dsize dnsize lsize %full type
7497 1 128K 512 0 512 512 0.00 DSL dataset
320 bonus DSL dataset
dnode flags:
dnode maxblkid: 0
dir_obj = 32960
prev_snap_obj = 48
prev_snap_txg = 1
next_snap_obj = 32963
snapnames_zapobj = 0
num_children = 1
userrefs_obj = 7107
creation_time = Tue May 20 12:57:26 2025
creation_txg = 15075367
deadlist_obj = 32965
used_bytes = 1.81G
compressed_bytes = 1.79G
uncompressed_bytes = 5.58G
unique = 6.22M
fsid_guid = 34829607274247986
guid = 11380184239603715412
flags = 4
next_clones_obj = 0
props_obj = 0
bp = DVA[0]=<0:af2c3bfe00:200> DVA[1]=<0:84e1ccf000:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=15075366L/15075366P fill=2 cksum=0000000d521dbf6d:00000530510fd635:00010a377b008705:0024931073ef45e8
root@proxmox:~# zdb -dddd rpool 7505
Dataset mos [META], ID 0, cr_txg 4, 7.82G, 443 objects, rootbp DVA[0]=<0:5878dca00:200> DVA[1]=<0:245db47000:200> DVA[2]=<0:562c176e00:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique triple size=1000L/200P birth=15078116L/15078116P fill=443 cksum=000000097bf85c0a:000003db8b0c966f:0000ccc9e15c69ab:001cde62656966e4
Object lvl iblk dblk dsize dnsize lsize %full type
7505 1 128K 512 0 512 512 0.00 DSL dataset
320 bonus DSL dataset
dnode flags:
dnode maxblkid: 0
dir_obj = 25543
prev_snap_obj = 48
prev_snap_txg = 1
next_snap_obj = 25546
snapnames_zapobj = 0
num_children = 1
userrefs_obj = 7111
creation_time = Tue May 20 12:57:26 2025
creation_txg = 15075367
deadlist_obj = 25548
used_bytes = 6.94G
compressed_bytes = 6.91G
uncompressed_bytes = 10.5G
unique = 0
fsid_guid = 43098864069098889
guid = 10846560753967096535
flags = 4
next_clones_obj = 0
props_obj = 0
bp = DVA[0]=<0:b600117400:200> DVA[1]=<0:1a006fb600:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=15069264L/15069264P fill=2 cksum=0000000d1a975983:00000507481d66e9:0000fe2feb00eb71:00226ad89d1b275f
With LLM help I deduced that these are DSL dataset objects, i.e. they hold information on datasets or snapshots, which could maybe explain how that VM 109 dataset keeps getting affected? Other than that and a slowly increasing checksum error count, the system appears completely stable.
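To double-check whether the failing blocks themselves are unreadable (rather than just their bookkeeping), one thing I intend to try is reading a reported DVA directly; a sketch using zdb -R, with the vdev:offset:size triple taken from the first error line above:

```sh
# Read the 512-byte physical block behind DVA[0]=<0:7c73841600:200>
# from the zdb_blkptr_cb errors above.
zdb -R rpool 0:7c73841600:200

# The ':d' flag additionally attempts decompression (the block is lz4).
zdb -R rpool 0:7c73841600:200:d
```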
Oh, one more thing I learned from the aforementioned issue that might be of importance: the LUKS sector size is 512 and ZFS ashift=9. Discard is enabled on both.
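For completeness, these values can be double-checked as follows; a sketch assuming LUKS2 headers (cryptsetup only reports the sector size for LUKS2) and my partition layout:

```sh
# LUKS sector size, printed in the header dump on LUKS2.
cryptsetup luksDump /dev/nvme0n1p3 | grep -i sector

# Physical/logical sector sizes of the underlying NVMe drives.
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/nvme0n1 /dev/nvme1n1

# Pool ashift (9 means 512-byte minimum allocation size).
zpool get ashift rpool
```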
One day later, the checksum error count is at 268K:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
nvme1n1p3crypt ONLINE 0 0 268K
nvme0n1p3crypt ONLINE 0 0 268K