LXD recover doesn't use storage volume defaults logic
The CLI for recovering a ZFS pool prompts me for the pool name and dataset, and then still requires me to specify `zfs.pool_name=$dataset` again in the KEY=VALUE section.
```
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: y
Name of the storage pool: default
Name of the storage backend (btrfs, dir, lvm, zfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): zroot/data/lxd_pools/default
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]:
The recovery process will be scanning the following storage pools:
- NEW: "default" (backend="zfs", source="zroot/data/lxd_pools/default")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]:
Scanning for unknown volumes...
Error: Failed validation request: Failed mounting pool "default": Cannot mount pool as "zfs.pool_name" is not specified
```
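As described above, supplying the key by hand at the KEY=VALUE prompt avoids the error, repeating the dataset from the earlier prompt:

```
Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=zroot/data/lxd_pools/default
```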
@tomponline what do you think about this one? Definitely feels odd that we'd need to set the key during recovery when it normally gets set automatically during create.
@stgraber I'm not opposed to it, as it would improve the UX.
It would require us to extract the defaults generation inside each pool driver's Create() function into a separate function that can be passed the user-provided config at recovery time and generate the same defaults.
This assumes that every pool driver's Create() defaults generation depends only on user-provided config and doesn't use any other environmental factors; we would need to review and check.
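A minimal sketch of that shape, with hypothetical names (`fillConfig` and the trimmed-down `zfs` struct are illustrative, not the actual driver code): both Create() and the recovery path would call the same helper, so they derive identical defaults from the user-provided config.

```go
package drivers

import "fmt"

// zfs is a stand-in for the real ZFS pool driver type; only the fields
// needed for this sketch are shown.
type zfs struct {
	name   string
	config map[string]string
}

// fillConfig (hypothetical) holds the defaults generation extracted out of
// Create(). Because it depends only on the user-provided config, Create()
// and lxd recover can both call it and end up with the same settings,
// e.g. deriving zfs.pool_name from source.
func (d *zfs) fillConfig() error {
	if d.config["source"] == "" {
		return fmt.Errorf("pool %q has no source", d.name)
	}

	if d.config["zfs.pool_name"] == "" {
		// Assume the source dataset doubles as the pool name when the
		// user didn't set one explicitly, mirroring create-time behaviour.
		d.config["zfs.pool_name"] = d.config["source"]
	}

	return nil
}
```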
This did crop up before internally, and I think the position we took then was that we would be conservative about any "guessing" that LXD did at recovery time.
But I think it's worth exploring whether it can be made more intelligent and consistent with the create-time behaviour.
Additionally, I think the current way to reimport an existing pool (I know it's technically called disaster recovery, but when I reinstall an unclustered host in my homelab, I don't care enough to transfer the LXD database, and/or it's inside a rat's nest of snapd directories somewhere) is kinda weird. Assuming I have the old `lxd init` preseed or a way to regenerate it from config management, I'll be:
- running `lxd init` to configure almost everything except storage
- running `lxd recover` to import the old storage and instances
- potentially running `lxd init` again from config management to see that the preseed is still valid, or what the changes are
This whole procedure looks like it could be replaced with an `lxd init --reuse-storage` option that does all of this in one go, if the preseed is valid (see the sketch below).
Not sure if that would obsolete, build on or be unrelated to the refactors @tomponline mentioned.
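To make the idea concrete, the hypothetical one-shot flow might look like this (`--reuse-storage` does not exist today; `--preseed` does):

```
# hypothetical: configure from preseed and re-import existing storage in one step
lxd init --reuse-storage --preseed < lxd-init.yaml
```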