No way to force a rollback (using -T option etc.) on a pool which is OK.
System information
| Type | Version/Name |
|---|---|
| Distribution Name | Ubuntu |
| Distribution Version | 18.04.1 LTS |
| Linux Kernel | 4.15.0 |
| Architecture | x86_64 |
| ZFS Version | 0.7.9 (and others) |
| SPL Version | 0.7.9 (and others) |
Describe the problem you're observing
Three days ago, I accidentally ran "zfs destroy ..." on a large filesystem on one of my pools (thereby destroying the only important filesystem in that pool). I quickly exported the pool to prevent further writes to it, so probably very little (if any) data has been overwritten. I have since booted from a live USB (Ubuntu) and compiled several different versions of ZFS, but since there is nothing wrong with the pool per se, it will not let me import it at a previous txg number (via the -T option, even though I also specify -F).
Example:
root@ubuntu:~/source/zfs# cmd/zpool/zpool import -T 245332 -F -o readonly=on S_pool
cannot import 'S_pool': one or more devices is currently unavailable
(Obviously the pool imports with the following command:
cmd/zpool/zpool import -o readonly=on S_pool
, but since the data were destroyed prior to the current txg, this is alas worthless to me...)
I also tried the patch described in #2452 (and elsewhere), but still got the same results. I think this issue might be different, since in my case the pool is OK, just not at the txg that I need...
Describe how to reproduce the problem
Create a pool, create two ZFS sub-filesystems, destroy one of them, destroy or export the pool, import it again and find a txg prior to the "zfs destroy". Export the pool again, and now try to import at that previous txg... No luck :-(
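For reference, here is a minimal sketch of those steps as shell commands; the file-backed vdev and the pool/dataset names are illustrative assumptions, not the actual setup:

```sh
# Minimal reproduction sketch (file-backed vdev; names are placeholders).
truncate -s 1G /tmp/vdev0
zpool create testpool /tmp/vdev0
zfs create testpool/keep
zfs create testpool/oops

# The "accident": destroy one filesystem, then stop writing to the pool.
zfs destroy testpool/oops
zpool export testpool

# Look up a txg that predates the destroy (uberblocks carry txg + timestamp).
zdb -lu -e -p /tmp testpool | grep -E 'txg|timestamp'

# Try to import at that earlier txg -- this is the step that fails with
# "one or more devices is currently unavailable".
zpool import -d /tmp -o readonly=on -F -T <older-txg> testpool
```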
I've been struggling a lot, but I still think the data (or certainly the bulk of them) should be possible to retrieve, if only I were able to import the pool using that previous txg as my current one. I would greatly appreciate any suggestion / update that enables this option.
Best regards, Jon Ivar
I now also tried with the current master branch:
root@ubuntu:~/source/zfs# cat /sys/module/zfs/version
0.7.0-1512_g802715b74
Results were the same as before:
root@ubuntu:~/source/zfs# bin/zpool import -T 245332 -F -o readonly=on S_pool
cannot import 'S_pool': one or more devices is currently unavailable
(And as before, if I omit -T 245332 -F it imports just fine.)
I have now tried 0.7.5, 0.7.6, 0.7.9, and the current master (0.7.0-1512_g802715b74), but no luck...
Doh! Another mistake... - I guess all is now irrevocably lost?
By accident (and probably through all of my unsuccessful attempts), it seems I have (perhaps on a couple of occasions) happened to have the pool mounted without the readonly option set, and now the oldest uberblock I can find (through zdb -ul -e S_pool) appears to be from a couple of minutes after I had already destroyed the filesystem from the pool... 👎
I assume that means "all hope is lost" (at least for a simple layman as myself)? - I guess that even if I could make the -T option work, I would never be able to roll back or access anything that was destroyed prior to the date of the first uberblock I can find through zdb -ul -e S_pool?
If anyone "Senior" could confirm this assumption (that I definitely need an older uberblock, and that zdb -ul -e S_pool would reveal all "available" uberblocks), I would appreciate such a feedback as well. (Since given such a confirmation, I would move on / start to spend my time trying to reassemble whatever I can from scratch, rather than futilely wasting time on trying to salvage data from this pool.)
Anyway; keep up the good work - I love your great effort! (Although I obviously wish the -T option had worked, and I think my experience illustrates a case where it would have been very useful!)
@jonryk oh no! Yes, I'm sorry to say that unless you made a copy of the uberblocks somewhere recovering this pool would be very challenging. The uberblocks contain all the possible root block pointers for the pool and without them there's no reasonable way to rollback.
As for -T I'm going to leave this issue open so we can investigate the behavior you observed. This option should work as a possible last resort and we should add some basic test coverage to verify that.
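A rough sketch of what such a test could exercise (not an actual ZFS Test Suite case; the file-backed vdev, pool name, and paths are made-up placeholders):

```sh
# Sketch only: create a pool, note a txg while the data still exists,
# destroy the dataset, then verify that rewinding with -T brings it back.
set -e
truncate -s 1G /tmp/tvdev
zpool create ttank /tmp/tvdev
zfs create ttank/data
cp /etc/hostname /ttank/data/file

# Record the highest committed txg while the dataset is intact.
zpool export ttank
GOOD_TXG=$(zdb -lu -e -p /tmp ttank | awk '$1 == "txg" { if ($3+0 > m+0) m = $3 } END { print m }')
zpool import -d /tmp ttank

# Simulate the accident, then stop touching the pool.
zfs destroy ttank/data
zpool export ttank

# The behavior under test: a -T rewind to the earlier txg should succeed
# and the destroyed dataset should be visible again.
zpool import -d /tmp -o readonly=on -F -T "$GOOD_TXG" ttank
zfs list ttank/data
```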
@behlendorf - Thanks a lot for your reply! Although you confirmed my suspicion that it is now too late to save my data, I appreciate that you'll keep this issue open and try to make this work for others who might end up in a similar situation. Thanks for your great work!
By the way / out of curiosity: is there a command (or a fairly straightforward way) to export/import an uberblock, in order to quickly make/restore a copy of the uberblocks, as you mentioned? (That would perhaps be a useful safeguard for a case where a mistake such as mine is made?)
@jonryk while not widely known, you can use the -x dumpdir option with zdb(8) to request that it make a copy of every block read when importing the pool. This will include the uberblocks as well as any other critical pool data. It is similar in intent to the e2image(8) utility you may be familiar with, and is mainly intended for analysis purposes.
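For example, something along these lines (a sketch only; the dump directory is an arbitrary example path and the exact flags may vary between versions):

```sh
# Ask zdb to copy every block it reads from the exported pool into a
# directory, uberblocks and other critical metadata included.
mkdir -p /mnt/scratch/zdb-dump
zdb -e -x /mnt/scratch/zdb-dump S_pool
```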
I too could really use this functionality - not working for me atm, see here:
add mdbzfs (explore and undelete files from offline pool) - needed feature for brown paper bag "rm" moments #9313
I'm running into this issue as well... My pool is fine; I just want to use -T to roll back and try to save a file which was deleted by accident by Proxmox when restoring a VM. After I realized what Proxmox did, I immediately stopped everything and exported the pool, but I'm unable to import it with -T for an earlier txg... I keep getting the "one or more devices is currently unavailable" error.
If I do "zpool import" it imports just fine!
I'm going to try the hack from #2452: edit the DKMS zfs module and re-install it on a clean Debian 9, to see if I can roll back.
But it was a scary surprise to realize I can't roll back, especially considering #2452 is from 2014!
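For anyone attempting the same thing, the DKMS rebuild is roughly the following (the version string, source path, and patch file name are assumptions for illustration only):

```sh
# Rough sketch of rebuilding a patched zfs DKMS module; adjust the version
# and paths to whatever package is actually installed.
cd /usr/src/zfs-0.7.12                   # assumed source location/version
patch -p1 < /tmp/2452-rollback.patch     # hypothetical local copy of the #2452 change
dkms build zfs/0.7.12 --force
dkms install zfs/0.7.12 --force
modprobe -r zfs && modprobe zfs          # requires no pools to be imported
```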
Having the same problem. Accidentally deleted all of my data by reverting to a very early snapshot. Did you manage to make -T work?
Edit: It works, but only for a few transactions back (in my case ~100), depending on whether the uberblock is still there or not. So I had to restore from an older backup instead.
I'm facing a similar situation, having accidentally overwritten recursive datasets using send/receive with an erroneous receive name. I immediately exported the pool and was then able to identify a txg preceding this unfortunate action (a pool scrub) from which I identified a remaining uberblock, but was unable to rewind to it using the zpool import -T option, as follows:
# zdb -hhe backpool
...
2020-06-14.00:24:08 zpool scrub backpool
history command: 'zpool scrub backpool'
history zone: 'linux'
history who: 0
history time: 1592087048
history hostname: 'bckupsys'
unrecognized record:
history internal str: 'errors=0'
internal_name: 'scan done'
history txg: 2620580
history time: 1592115323
history hostname: 'bckupsys'
...
# zdb -l -u -e backpool
...
Uberblock[4]
magic = 0000000000bab10c
version = 5000
txg = 2620580
guid_sum = 11392655302192565697
timestamp = 1592115323 UTC = Sun Jun 14 08:15:23 2020
mmp_magic = 00000000a11cea11
mmp_delay = 0
labels = 0 1 2 3
# zpool import -N -o readonly=on -T 2620580 backpool
cannot import 'backpool': one or more devices is currently unavailable
...
@sotiris-bos
It works, but only for a few transactions back
Could you please detail the command that you used to make it work, and did you use any special trick to achieve it? I think this would be of great help to many!
Thanks a lot.
FYI, this worked for me in a simple test. I'm not sure what's different about other configurations that makes it sometimes not work.
$ sudo zdb -lu /dev/sdc1 | grep "txg"
...
txg = 18128
...
$ sudo zpool import -N -o readonly=on -T 18128 test
Note that in general there's no guarantee that txgs more than 4 back will work, because some of the blocks may have been overwritten. But I'm not sure whether that would cause the "one or more devices is currently unavailable" error, or if that's due to a bug.
@tacticz
I believe the "one or more devices is currently unavailable" error occurs because the uberblock is invalid/overwritten/unavailable.
Your zpool import command is correct, at least that is what I used as far as I remember, but just to be sure you may want to add "-d /dev/disk/by-id".
Basically, you need to find the last good uberblock/txg (if there is one before your problem occurred). Maybe try zpool history -il to help you with the timestamps, but mainly you need to try all available txgs to hopefully find a usable one.
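Something like the following loop is one way to brute-force that; it stays read-only, so it should be safe to try (the pool name and device directory are placeholders for your setup):

```sh
# Try every txg that still has an uberblock in the labels, newest first,
# until one of them imports read-only. "backpool" and the -d path are
# placeholders; adjust to the real pool and device directory.
for txg in $(zdb -ul -e backpool | awk '$1 == "txg" { print $3 }' | sort -nur); do
    echo "Trying txg ${txg}..."
    if zpool import -N -o readonly=on -d /dev/disk/by-id -T "${txg}" backpool; then
        echo "Imported at txg ${txg}"
        break
    fi
done
```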
I wish you the best!
Thank you @harens and @sotiris-bos for replying!
Unfortunately for me, trying all identified txgs (starting from the oldest "interesting" one and progressing to the last listed one) didn't succeed. The system keeps outputting the dreaded cannot import 'backpool': one or more devices is currently unavailable message, whatever value is given to the -T option (even the latest one).
As suggested I tried adding the -d /dev/disk/by-id option, and even tried adding the -FX options, to no avail :-(
I think I'll have to give up all hope of recovering the lost data, since I can't hold off using this backup pool much longer. Thanks anyway, guys!
First and foremost, a big thanks to Jeff Bonwick, @behlendorf , @ahrens , and all of the other OpenZFS developers for the countless hours of effort on developing the greatest file system / volume manager in open source history!
Unfortunately, this bug bit us over the past weekend while upgrading Debian 10 to 11 on an active/passive ZFS HA system based on the ClusterLabs Pacemaker OCF ZFS resource heartbeat agent (https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/ZFS), Linux multipath device mapper, and SAS SSDs in a SuperMicro SBB enclosure. The system had been functioning just fine for about 3 years on Debian 10 with numerous successful HA failovers (e.g. zpool exports and imports) between hosts for maintenance. Sadly, somehow the zpool got borked in the O/S upgrade process and we encountered the dreaded "cannot import 'zpool': I/O error" and "cannot import 'zpool': one or more devices is currently unavailable" errors, despite all physical SAS disks being online and available.
Furthermore, I can confirm all of the zpool data was intact as we were thankfully able to recover everything using the excellent (yet expensive) UFS Explorer Pro software from SysDev Laboratories (https://www.ufsexplorer.com/ufs-explorer-professional-recovery.php).
Just wanted to provide as much info as I could collect in the limited time available for this zpool, which could not be imported via any documented method I could find. If there is any additional info (from zdb?) I can provide that would help resolve this issue, please let me know ASAP.
root@svr-lf-nas1:/tmp# lsb_release -d
Description:    Debian GNU/Linux 11 (bullseye)
root@svr-lf-nas1:/tmp# uname -a
Linux svr-lf-nas1 5.10.0-10-amd64 #1 SMP Debian 5.10.84-1 (2021-12-08) x86_64 GNU/Linux
root@svr-lf-nas1:/tmp# modinfo zfs
filename:       /lib/modules/5.10.0-10-amd64/updates/dkms/zfs.ko
version:        2.1.2-1~bpo11+1
license:        CDDL
author:         OpenZFS
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     CBA22A38FC21EF8C7379B5B
depends:        spl,znvpair,icp,zlua,zzstd,zunicode,zcommon,zavl
retpoline:      Y
name:           zfs
vermagic:       5.10.0-10-amd64 SMP mod_unload modversions
root@svr-lf-nas1:/tmp# modinfo spl
filename:       /lib/modules/5.10.0-10-amd64/updates/dkms/spl.ko
version:        2.1.2-1~bpo11+1
license:        GPL
author:         OpenZFS
description:    Solaris Porting Layer
srcversion:     AB59F195FAB464801E51920
depends:
retpoline:      Y
name:           spl
vermagic:       5.10.0-10-amd64 SMP mod_unload modversions
root@svr-lf-nas1:/tmp# lsscsi
[0:0:0:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdb
[0:0:1:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdc
[0:0:2:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdd
[0:0:3:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sde
[0:0:4:0]    disk    SEAGATE  ST200FM0053      0006  /dev/sdf
[0:0:5:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdg
[0:0:6:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdh
[0:0:7:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdi
[0:0:8:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdj
[0:0:9:0]    disk    SEAGATE  ST200FM0053      0007  /dev/sdk
[0:0:10:0]   disk    SEAGATE  ST200FM0053      0007  /dev/sdl
[0:0:11:0]   disk    SEAGATE  ST200FM0053      0007  /dev/sdm
[0:0:12:0]   disk    HITACHI  HUC109060CSS600  A5B0  /dev/sdn
[0:0:13:0]   disk    HITACHI  HUC109060CSS600  A5B0  /dev/sdo
[0:0:14:0]   enclosu SMCDRS2U SAS3x40          0701  -
[1:0:0:0]    disk    ATA      SATADOM-SL 3ME3  25    /dev/sda
[2:0:0:0]    disk    ATA      SATADOM-SL 3ME3  25    /dev/sdp
root@svr-lf-nas1:/tmp# multipath -ll
35000c5003013070b dm-3 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:11:0 sdm 8:192 active ready running
35000c5003013049f dm-6 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:2:0 sdd 8:48 active ready running
35000c50030130627 dm-13 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:9:0 sdk 8:160 active ready running
35000c5003011de37 dm-1 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:1:0 sdc 8:32 active ready running
35000c5003013047f dm-7 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:3:0 sde 8:64 active ready running
35000cca0707e9978 dm-4 HITACHI,HUC109060CSS600
size=559G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:12:0 sdn 8:208 active ready running
35000c50030130607 dm-9 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:5:0 sdg 8:96 active ready running
35000c5003011dfdf dm-10 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:6:0 sdh 8:112 active ready running
35000c500301304af dm-11 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:7:0 sdi 8:128 active ready running
35000c500301306b7 dm-12 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:8:0 sdj 8:144 active ready running
35000c5003013046f dm-8 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:4:0 sdf 8:80 active ready running
35000cca0707f0824 dm-5 HITACHI,HUC109060CSS600
size=559G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:13:0 sdo 8:224 active ready running
35000c50030116a17 dm-2 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:10:0 sdl 8:176 active ready running
35000c500301302b3 dm-0 SEAGATE,ST200FM0053
size=186G features='0' hwhandler='0' wp=rw
-+- policy='service-time 0' prio=1 status=active
  - 0:0:0:0 sdb 8:16 active ready running
root@svr-lf-nas1:/tmp# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 Jan 17 03:07 35000c50030116a17 -> ../dm-2
lrwxrwxrwx 1 root root       7 Jan 17 03:07 35000c5003011de37 -> ../dm-1
lrwxrwxrwx 1 root root       8 Jan 17 03:07 35000c5003011dfdf -> ../dm-10
lrwxrwxrwx 1 root root       7 Jan 17 03:07 35000c500301302b3 -> ../dm-0
lrwxrwxrwx 1 root root       7 Jan 17 03:08 35000c5003013046f -> ../dm-8
lrwxrwxrwx 1 root root       7 Jan 17 03:08 35000c5003013047f -> ../dm-7
lrwxrwxrwx 1 root root       7 Jan 17 03:08 35000c5003013049f -> ../dm-6
lrwxrwxrwx 1 root root       8 Jan 17 03:07 35000c500301304af -> ../dm-11
lrwxrwxrwx 1 root root       7 Jan 17 03:08 35000c50030130607 -> ../dm-9
lrwxrwxrwx 1 root root       8 Jan 17 03:07 35000c50030130627 -> ../dm-13
lrwxrwxrwx 1 root root       8 Jan 17 03:07 35000c500301306b7 -> ../dm-12
lrwxrwxrwx 1 root root       7 Jan 17 03:07 35000c5003013070b -> ../dm-3
lrwxrwxrwx 1 root root       7 Jan 17 03:08 35000cca0707e9978 -> ../dm-4
lrwxrwxrwx 1 root root       7 Jan 17 03:08 35000cca0707f0824 -> ../dm-5
crw------- 1 root root 10, 236 Jan 17 02:41 control
root@svr-lf-nas1:/tmp# zpool import -d /dev/mapper
   pool: zpool1
     id: 12640424694560194929
  state: ONLINE
 status: Some supported features are not enabled on the pool.
         (Note that they may be intentionally disabled if the 'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool upgrade'.
 config:
zpool1 ONLINE
raidz1-0 ONLINE
35000c50030116a17 ONLINE
35000c5003011de37 ONLINE
35000c5003011dfdf ONLINE
35000c500301302b3 ONLINE
35000c5003013046f ONLINE
35000c5003013047f ONLINE
35000c5003013049f ONLINE
35000c500301304af ONLINE
35000c50030130607 ONLINE
35000c5003013070b ONLINE
35000c500301306b7 ONLINE
spares
35000c50030130627
root@svr-lf-nas1:/tmp# zpool import -d /dev/mapper zpool1
cannot import 'zpool1': I/O error
        Destroy and re-create the pool from a backup source.
root@svr-lf-nas1:/tmp# zpool import -fF -o readonly=on -d /dev/mapper zpool1
cannot import 'zpool1': one or more devices is currently unavailable
root@svr-lf-nas1:/tmp# zpool import -fFX -o readonly=on -d /dev/mapper zpool1
cannot import 'zpool1': one or more devices is currently unavailable
root@svr-lf-nas1:/tmp# zpool import -T 11174160 -o readonly=on -d /dev/mapper zpool1
cannot import 'zpool1': one or more devices is currently unavailable
I tried every uberblock txg; however, all attempts resulted in the same "one or more devices is currently unavailable" error.
root@svr-lf-nas1:/tmp# zdb -eul -p /dev/mapper/ zpool1
LABEL 0
version: 5000
name: 'zpool1'
state: 1
txg: 11174160
pool_guid: 12640424694560194929
errata: 0
hostid: 216716682
hostname: 'svr-lf-nas1'
top_guid: 8901512655823202969
guid: 16089382942853204291
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 8901512655823202969
nparity: 1
metaslab_array: 141
metaslab_shift: 34
ashift: 12
asize: 2200491786240
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 10336981949221417869
path: '/dev/mapper/35000c50030116a17'
devid: 'dm-uuid-mpath-35000c50030116a17'
phys_path: '/dev/disk/by-uuid/1986617129129516959'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot10'
whole_disk: 0
DTL: 288
create_txg: 4
expansion_time: 1642293322
children[1]:
type: 'disk'
id: 1
guid: 13911944123844279492
path: '/dev/mapper/35000c5003011de37'
devid: 'dm-uuid-mpath-35000c5003011de37'
phys_path: '/dev/disk/by-uuid/1986617129129516959'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot01'
whole_disk: 0
DTL: 287
create_txg: 4
expansion_time: 1642293323
children[2]:
type: 'disk'
id: 2
guid: 5890744052956618075
path: '/dev/mapper/35000c5003011dfdf'
devid: 'dm-uuid-mpath-35000c5003011dfdf'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot06'
whole_disk: 0
DTL: 286
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 12448756201857049597
path: '/dev/mapper/35000c500301302b3'
devid: 'dm-uuid-mpath-35000c500301302b3'
phys_path: '/dev/disk/by-uuid/1986617129129516959'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot00'
whole_disk: 0
DTL: 285
create_txg: 4
expansion_time: 1642293321
children[4]:
type: 'disk'
id: 4
guid: 14638406971355128302
path: '/dev/mapper/35000c5003013046f'
devid: 'dm-uuid-mpath-35000c5003013046f'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot04'
whole_disk: 0
DTL: 284
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 9081463650821448567
path: '/dev/mapper/35000c5003013047f'
devid: 'dm-uuid-mpath-35000c5003013047f'
phys_path: '/dev/disk/by-uuid/1986617129129516959'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot03'
whole_disk: 0
DTL: 283
create_txg: 4
expansion_time: 1642293324
children[6]:
type: 'disk'
id: 6
guid: 3227123972620444418
path: '/dev/mapper/35000c5003013049f'
devid: 'dm-uuid-mpath-35000c5003013049f'
phys_path: '/dev/disk/by-uuid/1986617129129516959'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot02'
whole_disk: 0
DTL: 282
create_txg: 4
expansion_time: 1642293323
children[7]:
type: 'disk'
id: 7
guid: 15548328153275976958
path: '/dev/mapper/35000c500301304af'
devid: 'dm-uuid-mpath-35000c500301304af'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot07'
whole_disk: 0
DTL: 281
create_txg: 4
children[8]:
type: 'disk'
id: 8
guid: 7079938041889737615
path: '/dev/mapper/35000c50030130607'
devid: 'dm-uuid-mpath-35000c50030130607'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot05'
whole_disk: 0
DTL: 280
create_txg: 4
children[9]:
type: 'disk'
id: 9
guid: 7576123842782236648
path: '/dev/mapper/35000c5003013070b'
devid: 'dm-uuid-mpath-35000c5003013070b'
phys_path: '/dev/disk/by-uuid/1986617129129516959'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot11'
whole_disk: 0
DTL: 279
create_txg: 4
expansion_time: 1642293322
children[10]:
type: 'disk'
id: 10
guid: 16089382942853204291
path: '/dev/mapper/35000c500301306b7'
devid: 'dm-uuid-mpath-35000c500301306b7'
vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:14:0/Slot08'
whole_disk: 0
DTL: 278
create_txg: 4
expansion_time: 1642293322
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
labels = 0 1 2 3
Uberblock[0]
magic = 0000000000bab10c
version = 5000
txg = 11174144
guid_sum = 8243922737894078418
timestamp = 1642293366 UTC = Sat Jan 15 18:36:06 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[1]
magic = 0000000000bab10c
version = 5000
txg = 11174081
guid_sum = 8243922737894078418
timestamp = 1642293156 UTC = Sat Jan 15 18:32:36 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[2]
magic = 0000000000bab10c
version = 5000
txg = 11174146
guid_sum = 8243922737894078418
timestamp = 1642293376 UTC = Sat Jan 15 18:36:16 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[3]
magic = 0000000000bab10c
version = 5000
txg = 11174115
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[4]
magic = 0000000000bab10c
version = 5000
txg = 11174148
guid_sum = 8243922737894078418
timestamp = 1642293382 UTC = Sat Jan 15 18:36:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[5]
magic = 0000000000bab10c
version = 5000
txg = 11174117
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[6]
magic = 0000000000bab10c
version = 5000
txg = 11174118
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[7]
magic = 0000000000bab10c
version = 5000
txg = 11174151
guid_sum = 8243922737894078418
timestamp = 1642293382 UTC = Sat Jan 15 18:36:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[8]
magic = 0000000000bab10c
version = 5000
txg = 11174088
guid_sum = 8243922737894078418
timestamp = 1642293192 UTC = Sat Jan 15 18:33:12 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[9]
magic = 0000000000bab10c
version = 5000
txg = 11174121
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[10]
magic = 0000000000bab10c
version = 5000
txg = 11174122
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[11]
magic = 0000000000bab10c
version = 5000
txg = 11174059
guid_sum = 8243922737894078418
timestamp = 1642293043 UTC = Sat Jan 15 18:30:43 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[12]
magic = 0000000000bab10c
version = 5000
txg = 11174124
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[13]
magic = 0000000000bab10c
version = 5000
txg = 11174125
guid_sum = 8243922737894078418
timestamp = 1642293322 UTC = Sat Jan 15 18:35:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[14]
magic = 0000000000bab10c
version = 5000
txg = 11174094
guid_sum = 8243922737894078418
timestamp = 1642293222 UTC = Sat Jan 15 18:33:42 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[15]
magic = 0000000000bab10c
version = 5000
txg = 11174063
guid_sum = 8243922737894078418
timestamp = 1642293064 UTC = Sat Jan 15 18:31:04 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[16]
magic = 0000000000bab10c
version = 5000
txg = 11174160
guid_sum = 8243922737894078418
timestamp = 1642293382 UTC = Sat Jan 15 18:36:22 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[17]
magic = 0000000000bab10c
version = 5000
txg = 11174129
guid_sum = 8243922737894078418
timestamp = 1642293323 UTC = Sat Jan 15 18:35:23 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[18]
magic = 0000000000bab10c
version = 5000
txg = 11174098
guid_sum = 8243922737894078418
timestamp = 1642293243 UTC = Sat Jan 15 18:34:03 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[19]
magic = 0000000000bab10c
version = 5000
txg = 11174131
guid_sum = 8243922737894078418
timestamp = 1642293324 UTC = Sat Jan 15 18:35:24 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[20]
magic = 0000000000bab10c
version = 5000
txg = 11174132
guid_sum = 8243922737894078418
timestamp = 1642293324 UTC = Sat Jan 15 18:35:24 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[21]
magic = 0000000000bab10c
version = 5000
txg = 11174069
guid_sum = 8243922737894078418
timestamp = 1642293094 UTC = Sat Jan 15 18:31:34 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[22]
magic = 0000000000bab10c
version = 5000
txg = 11174134
guid_sum = 8243922737894078418
timestamp = 1642293324 UTC = Sat Jan 15 18:35:24 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[23]
magic = 0000000000bab10c
version = 5000
txg = 11174135
guid_sum = 8243922737894078418
timestamp = 1642293325 UTC = Sat Jan 15 18:35:25 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[24]
magic = 0000000000bab10c
version = 5000
txg = 11174104
guid_sum = 8243922737894078418
timestamp = 1642293274 UTC = Sat Jan 15 18:34:34 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[25]
magic = 0000000000bab10c
version = 5000
txg = 11174073
guid_sum = 8243922737894078418
timestamp = 1642293115 UTC = Sat Jan 15 18:31:55 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[26]
magic = 0000000000bab10c
version = 5000
txg = 11174138
guid_sum = 8243922737894078418
timestamp = 1642293335 UTC = Sat Jan 15 18:35:35 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[27]
magic = 0000000000bab10c
version = 5000
txg = 11174075
guid_sum = 8243922737894078418
timestamp = 1642293125 UTC = Sat Jan 15 18:32:05 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[28]
magic = 0000000000bab10c
version = 5000
txg = 11174140
guid_sum = 8243922737894078418
timestamp = 1642293345 UTC = Sat Jan 15 18:35:45 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[29]
magic = 0000000000bab10c
version = 5000
txg = 11174077
guid_sum = 8243922737894078418
timestamp = 1642293135 UTC = Sat Jan 15 18:32:15 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[30]
magic = 0000000000bab10c
version = 5000
txg = 11174142
guid_sum = 8243922737894078418
timestamp = 1642293356 UTC = Sat Jan 15 18:35:56 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
Uberblock[31]
magic = 0000000000bab10c
version = 5000
txg = 11174079
guid_sum = 8243922737894078418
timestamp = 1642293145 UTC = Sat Jan 15 18:32:25 2022
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
labels = 0 1 2 3
More zdb output, showing "zdb_blkptr_cb: Got error 52". I'm not sure what this means, nor whether it is the root cause of the zpool import failure?
root@svr-lf-nas1:/tmp# zdb -eLs -bb -cc -p /dev/mapper/ zpool1
Traversing all blocks to verify checksums ...
582M completed ( 286MB/s) estimated time remaining: 1hr 02min 44sec zdb_blkptr_cb: Got error 52 reading <54, 0, -1, 0> DVA[0]=<0:e80014a000:2000> DVA[1]=<0:16400052000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique double size=1000L/1000P birth=11161875L/11161875P fill=11 cksum=f34a666bb:2a58eff7c79a:3ed59cc4710dfc:41cc54f21dabbeec -- skipping
180G completed ( 89MB/s) estimated time remaining: 2hr 46min 42sec zdb_blkptr_cb: Got error 52 reading <771, 386, 0, 5dd7> DVA[0]=<0:e800152000:a000> [L0 ZFS plain file] fletcher4 lz4 unencrypted LE contiguous unique single size=20000L/9000P birth=11161827L/11161827P fill=1 cksum=77e3c213c90:9c984e91f95706:8ef12dbcb1af09ff:979ff2dc07b78c2b -- skipping
180G completed ( 89MB/s) estimated time remaining: 2hr 46min 45sec zdb_blkptr_cb: Got error 52 reading <771, 386, 0, d00a> DVA[0]=<0:19000010000:10000> [L0 ZFS plain file] fletcher4 lz4 unencrypted LE contiguous unique single size=20000L/e000P birth=11161722L/11161722P fill=1 cksum=10fe07514e3b:1883ec22b570283:b9e35818a9acc52f:d1ddc5859afcfa4 -- skipping
182G completed ( 89MB/s) estimated time remaining: 2hr 46min 44sec zdb_blkptr_cb: Got error 52 reading <771, 386, 0, 2d618> DVA[0]=<0:e80015c000:1a000> [L0 ZFS plain file] fletcher4 lz4 unencrypted LE contiguous unique single size=20000L/16000P birth=11161827L/11161827P fill=1 cksum=233afecb8491:602fa8ebdd042d0:73ff674b0434d7df:ca507a8047fbe8ff -- skipping
1.03T completed ( 104MB/s) estimated time remaining: 0hr 00min 00sec
Error counts:
errno count
52 4
bp count: 11143764
ganged count: 0
bp logical: 1459731628032 avg: 130990
bp physical: 989592802304 avg: 88802 compression: 1.48
bp allocated: 1132903071744 avg: 101662 compression: 1.29
bp deduped: 0 ref>1: 0 deduplication: 1.00
Normal class: 1131459035136 used: 51.86%
Embedded log class 1444413440 used: 8.41%
additional, non-pointer bps of type 0: 124
Dittoed blocks on same vdev: 54120
Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
     -      -       -       -       -       -       -  unallocated
     2    32K      8K     48K     24K    4.00    0.00  object directory
     2     1K      1K     48K     24K    1.00    0.00  object array
     2    32K      8K     48K     24K    4.00    0.00  packed nvlist
     -      -       -       -       -       -       -  packed nvlist size
 3.13K   401M   53.0M    208M   66.5K    7.57    0.02  bpobj
     -      -       -       -       -       -       -  bpobj header
     -      -       -       -       -       -       -  SPA space map header
 7.31K   117M    106M    467M   63.9K    1.10    0.04  SPA space map
     2    40K     40K     48K     24K    1.00    0.00  ZIL intent log
   224  12.4M    896K   3.83M   17.5K   14.12    0.00  DMU dnode
    18    72K     72K    296K   16.4K    1.00    0.00  DMU objset
     -      -       -       -       -       -       -  DSL directory
     -      -       -       -       -       -       -  DSL directory child map
     4     3K      2K     48K     12K    1.50    0.00  DSL dataset snap map
     8    66K     16K     96K     12K    4.12    0.00  DSL props
     -      -       -       -       -       -       -  DSL dataset
     -      -       -       -       -       -       -  ZFS znode
     -      -       -       -       -       -       -  ZFS V0 ACL
 10.6M  1.33T    921G   1.03T   99.3K    1.47   99.94  ZFS plain file
    20    10K      2K     64K   3.20K    5.00    0.00  ZFS directory
     2     1K      1K     32K     16K    1.00    0.00  ZFS master node
     -      -       -       -       -       -       -  ZFS delete queue
     -      -       -       -       -       -       -  zvol object
     -      -       -       -       -       -       -  zvol prop
     -      -       -       -       -       -       -  other uint8[]
     -      -       -       -       -       -       -  other uint64[]
     -      -       -       -       -       -       -  other ZAP
     -      -       -       -       -       -       -  persistent error log
    18  2.25M    200K    816K   45.3K   11.52    0.00  SPA history
     -      -       -       -       -       -       -  SPA history offsets
     -      -       -       -       -       -       -  Pool properties
     -      -       -       -       -       -       -  DSL permissions
     -      -       -       -       -       -       -  ZFS ACL
     -      -       -       -       -       -       -  ZFS SYSACL
     -      -       -       -       -       -       -  FUID table
     -      -       -       -       -       -       -  FUID table size
     -      -       -       -       -       -       -  DSL dataset next clones
     -      -       -       -       -       -       -  scan work queue
     -      -       -       -       -       -       -  ZFS user/group/project used
     -      -       -       -       -       -       -  ZFS user/group/project quota
     -      -       -       -       -       -       -  snapshot refcount tags
     -      -       -       -       -       -       -  DDT ZAP algorithm
     -      -       -       -       -       -       -  DDT statistics
     -      -       -       -       -       -       -  System attributes
     -      -       -       -       -       -       -  SA master node
     2     3K      3K     32K     16K    1.00    0.00  SA attr registration
     4    64K     16K     64K     16K    4.00    0.00  SA attr layouts
     -      -       -       -       -       -       -  scan translations
     -      -       -       -       -       -       -  deduplicated block
    19  17.5K     14K    336K   17.7K    1.25    0.00  DSL deadlist map
     -      -       -       -       -       -       -  DSL deadlist map hdr
     -      -       -       -       -       -       -  DSL dir clones
     2   256K      8K     48K     24K   32.00    0.00  bpobj subobj
    26   528K    128K    768K   29.5K    4.12    0.00  deferred free
     -      -       -       -       -       -       -  dedup ditto
    35    51K     22K    432K   12.3K    2.32    0.00  other
 10.6M  1.33T    922G   1.03T   99.3K    1.48  100.00  Total
Block Size Histogram
  block   psize                 lsize                 asize
   size   Count   Size   Cum.   Count   Size   Cum.   Count   Size   Cum.
    512:     19  9.50K  9.50K      19  9.50K  9.50K       0      0      0
     1K:     22  23.5K    33K      22  23.5K    33K       0      0      0
     2K:      0      0    33K       0      0    33K       0      0      0
     4K:  91.4K   366M   366M      19    76K   109K       0      0      0
     8K:   323K  3.07G  3.42G       0      0   109K   90.1K   721M   721M
    16K:   895K  20.2G  23.6G   7.49K   120M   120M    645K  12.6G  13.3G
    32K:  1.92M  87.5G   111G       1    36K   120M   1.75M  74.3G  87.6G
    64K:  3.53M   311G   422G       0      0   120M   3.94M   365G   453G
   128K:  3.90M   499G   922G   10.6M  1.33T  1.33T   4.22M   602G  1.03T
   256K:      0      0   922G       0      0  1.33T       0      0  1.03T
   512K:      0      0   922G       0      0  1.33T       0      0  1.03T
     1M:      0      0   922G       0      0  1.33T       0      0  1.03T
     2M:      0      0   922G       0      0  1.33T       0      0  1.03T
     4M:      0      0   922G       0      0  1.33T       0      0  1.03T
     8M:      0      0   922G       0      0  1.33T       0      0  1.03T
    16M:      0      0   922G       0      0  1.33T       0      0  1.03T
                                   capacity   operations   bandwidth  ---- errors ----
description                      used  avail  read  write  read write  read write cksum
zpool1                          1.03T   993G 20.1K      0  190M     0     0     0     0
  raidz1                        1.03T   993G 20.1K      0  190M     0     0     0     0
    /dev/mapper/35000c50030116a17            1.82K      0 17.3M     0     0     0     6
    /dev/mapper/35000c5003011de37            1.83K      0 17.3M     0     0     0    14
    /dev/mapper/35000c5003011dfdf            1.82K      0 17.3M     0     0     0    14
    /dev/mapper/35000c500301302b3            1.83K      0 17.3M     0     0     0     6
    /dev/mapper/35000c5003013046f            1.83K      0 17.3M     0     0     0     6
    /dev/mapper/35000c5003013047f            1.83K      0 17.3M     0     0     0     6
    /dev/mapper/35000c5003013049f            1.83K      0 17.3M     0     0     0     6
    /dev/mapper/35000c500301304af            1.82K      0 17.3M     0     0     0     6
    /dev/mapper/35000c50030130607            1.82K      0 17.3M     0     0     0     4
    /dev/mapper/35000c5003013070b            1.82K      0 17.3M     0     0     0    14
    /dev/mapper/35000c500301306b7            1.83K      0 17.3M     0     0     0    14
Is there any hope of zpool import -T <TXG> poolName not throwing "one or more devices is currently unavailable" on a healthy, regularly importable pool?