zpool create should clear the mdadm superblock
System information
Type | Version/Name
---|---
Distribution Name | Xubuntu
Distribution Version | 21.04
Kernel Version | 5.11.0-37-generic
Architecture | x86_64
OpenZFS Version | zfs-2.0.2-1ubuntu5.2
Describe the problem you're observing
I had an mdadm RAID1 on two SSDs; those SSDs are now a special vdev in my pool. I did not use a partition table for mdadm and gave ZFS the whole disks.

After a reboot, ZFS cannot import the pool because mdadm has reassembled the old array from the leftover superblocks. After stopping the mdadm array I was able to import the pool, but a subsequent scrub reports errors on one of those disks. Repairing works.
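For reference, this is roughly the recovery sequence (md127 is an example device name; check /proc/mdstat for the actual one):

```sh
# See which array mdadm auto-assembled from the stale superblocks.
cat /proc/mdstat

# Stop the array so ZFS can open the member disks again.
mdadm --stop /dev/md127

# Import the pool and scrub it to repair any blocks mdadm touched
# while the array was active.
zpool import dpool
zpool scrub dpool
```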
Describe how to reproduce the problem
Create an md array on two disks, stop the array, and use those disks as a mirrored special vdev (a reproduction sketch follows). After a reboot you will see that mdadm has reassembled the array.
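A minimal reproduction sketch; all device names are examples:

```sh
# Create an md RAID1 directly on the whole disks (no partition table),
# then stop it, leaving the md superblocks in place.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --stop /dev/md0

# Hand the same whole disks to ZFS as a mirrored special vdev.
zpool create dpool mirror /dev/sdd /dev/sde
zpool add dpool special mirror /dev/sdb /dev/sdc

# After a reboot, mdadm reassembles /dev/md0 from the leftover
# superblocks and the pool no longer imports cleanly.
```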
Please change zpool create/add so that they clear the md superblock when given a whole disk.
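Until zpool create/add does this itself, the superblock has to be cleared by hand before the disk is reused; a sketch of the manual step (/dev/sdb is an example device):

```sh
# List any leftover metadata signatures on the disk; wipefs without
# options only reports, it does not erase anything.
wipefs /dev/sdb
mdadm --examine /dev/sdb

# Remove the md superblock so auto-assembly can never find it again.
# The array must be stopped first.
mdadm --zero-superblock /dev/sdb
```

Alternatively, auto-assembly can be disabled system-wide with an `AUTO -all` line in /etc/mdadm/mdadm.conf, though that affects every array on the host.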
zpool status (while scrubbing and repairing):

```
  pool: dpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub in progress since Thu Oct  7 12:08:54 2021
        5.23T scanned at 2.23G/s, 1.04T issued at 456M/s, 5.23T total
        352K repaired, 19.96% done, 02:40:29 to go
config:

        NAME                                               STATE     READ WRITE CKSUM
        dpool                                              ONLINE       0     0     0
          mirror-0                                         ONLINE       0     0     0
            ata-ST12000VN0008-2JH101_ZL002FES              ONLINE       0     0     0
            ata-TOSHIBA_MG07ACA12TE_81G0A01JF95G           ONLINE       0     0     0
          mirror-1                                         ONLINE       0     0     0
            ata-TOSHIBA_MG07ACA12TE_81H0A00KF95G           ONLINE       0     0     0
            ata-TOSHIBA_MG07ACA12TE_71U0A2QCF95G           ONLINE       0     0     0
        special
          mirror-2                                         ONLINE       0     0     0
            ata-Samsung_SSD_860_EVO_500GB_S3Z2NB0K778062V  ONLINE       0     0     0
            ata-Samsung_SSD_860_PRO_512GB_S42YNX0N802183M  ONLINE       3     0     6  (repairing)

errors: No known data errors
```
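Once the scrub finishes, the error counters on the repaired disk can be reset as the status message suggests; a minimal follow-up using the pool and device names from the output above:

```sh
# Confirm the scrub completed and the repairs were applied.
zpool status dpool

# Reset the READ/CKSUM counters on the repaired special-vdev disk.
zpool clear dpool ata-Samsung_SSD_860_PRO_512GB_S42YNX0N802183M
```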
Confirmed on Proxmox (Debian-based) 6.4-14, x86_64, ZFS 2.0.7-pve1, kernel 5.4.174-2-pve.

Furthermore, the problem can surface only after considerable time and many reboots if mdadm was not installed when the pool was created but was added later.
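For pools that may already be affected, a quick check for stale md metadata on the members; the by-id glob is an example, point it at your actual vdev paths:

```sh
# mdadm --examine exits 0 only when it finds an md superblock,
# so any hit here is a disk that mdadm may reassemble on boot.
for dev in /dev/disk/by-id/ata-*; do
    if mdadm --examine "$dev" >/dev/null 2>&1; then
        echo "stale md metadata found on $dev"
    fi
done
```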
Related: https://github.com/openzfs/zfs/issues/634