Prevent custom block volume sharing
Closes #12254.
Changes mostly based on lxc/incus#467 and lxc/incus#529.
This creates the security.shared config key for block volumes and adds checks for it during device creation and instance start.
This also adds checks for the path definition when adding custom volume devices to profiles (filesystem volumes require a path, while block volumes must not have one) and refactors the validateConfig function to make the disk validation process more readable.
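For illustration, the combined checks described above could look roughly like this sketch (a hypothetical standalone helper, not LXD's actual validateConfig; the function name and parameters are assumptions):

```go
package main

import (
	"errors"
	"fmt"
)

// validateVolumeDevice sketches the validation described above:
// filesystem volumes require a path, block volumes must not have one,
// and a block volume attached to more than one instance needs
// security.shared=true.
func validateVolumeDevice(contentType string, path string, shared bool, usedByCount int) error {
	switch contentType {
	case "filesystem":
		if path == "" {
			return errors.New("filesystem volumes require a path")
		}
	case "block":
		if path != "" {
			return errors.New("block volumes cannot have a path")
		}
		if usedByCount > 1 && !shared {
			return errors.New("block volume is attached to another instance; set security.shared=true to allow sharing")
		}
	}
	return nil
}

func main() {
	// A filesystem volume without a path fails validation.
	fmt.Println(validateVolumeDevice("filesystem", "", false, 1))
	// A block volume used by two instances without security.shared fails.
	fmt.Println(validateVolumeDevice("block", "", false, 2))
	// The same volume passes once security.shared is set.
	fmt.Println(validateVolumeDevice("block", "", true, 2))
}
```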
Heads up @ru-fu - the "Documentation" label was applied to this issue.
@tomponline This version also performs, when validating a profile, the checks that were previously exclusive to instance validation.
Additionally, do you think we could use the UsedBy attribute on api.StorageVolume instead of calling VolumeUsedByInstanceDevices (as is done here) to check whether the volume is being used by an instance? That would avoid an unnecessary call to the database.
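For illustration, the alternative could look roughly like this (countInstanceUsers is a hypothetical helper; the "/1.0/instances/" prefix is the URL shape the LXD API uses for instance references in UsedBy lists):

```go
package main

import (
	"fmt"
	"strings"
)

// countInstanceUsers counts how many entries in a volume's UsedBy list
// refer to instances, so no separate database query is needed.
func countInstanceUsers(usedBy []string) int {
	n := 0
	for _, u := range usedBy {
		if strings.HasPrefix(u, "/1.0/instances/") {
			n++
		}
	}
	return n
}

func main() {
	usedBy := []string{
		"/1.0/instances/c1",
		"/1.0/profiles/default", // profile references are not counted
		"/1.0/instances/c2",
	}
	fmt.Println(countInstanceUsers(usedBy)) // prints 2
}
```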
@tomponline just in case I wasn't clear, this is ready for another review when you have the time.
@tomponline I made the requested changes. Also, when you are able, please address these topics.
What I am wondering is: what happens with volumes which are currently attached to multiple instances as soon as you update to "this" version of LXD? Will there be an error in validation at some point if you don't allow `security.shared` for this volume?
@roosterfish Any volumes that were being shared before the update won't be affected until one of the instances is started or the device is updated, at which point you could just set `security.shared` to true and then proceed. Also, this is ready again in case you want to take another look.
This is a problem because if you restart your system after snap has updated to this version, you'll end up with potentially unstartable VMs, which will then require manual intervention and debugging to figure out what has changed.
That isn't good UX in my view.
We should have a DB patch that adds security.shared=true to appropriate existing disk devices so they continue to operate as they did previously (allowing shared usage).
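Such a patch could look roughly like this sketch (the device struct and patchSharedVolumes are hypothetical, operating on in-memory maps rather than LXD's actual database schema):

```go
package main

import "fmt"

// device is a simplified disk device entry referencing a custom volume.
type device struct {
	Pool   string
	Volume string
}

// patchSharedVolumes sketches the proposed upgrade patch: any volume
// referenced by more than one disk device gets security.shared=true,
// so existing setups keep working as they did before the update.
func patchSharedVolumes(devices []device, volConfig map[string]map[string]string) {
	counts := map[string]int{}
	for _, d := range devices {
		counts[d.Pool+"/"+d.Volume]++
	}
	for key, n := range counts {
		if n > 1 {
			volConfig[key]["security.shared"] = "true"
		}
	}
}

func main() {
	volConfig := map[string]map[string]string{
		"default/shared-vol": {},
		"default/solo-vol":   {},
	}
	devices := []device{
		{"default", "shared-vol"},
		{"default", "shared-vol"}, // attached to two instances
		{"default", "solo-vol"},
	}
	patchSharedVolumes(devices, volConfig)
	fmt.Println(volConfig["default/shared-vol"]["security.shared"]) // prints "true"
	fmt.Println(volConfig["default/solo-vol"]["security.shared"])   // prints an empty line
}
```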
What do you think?
@tomponline I agree that this isn't good UX, but I also think it is risky to set it for the user, since the purpose of setting `security.shared` is stating that the user is aware of the risk of data loss on those shared volumes.
Maybe a good approach would be having the DB patch but also showing a warning on update, alerting the user to the risk of data loss on those volumes. What do you think about this?
Please could you update the PR with a description of the behaviour change in validation, ready for review.
Will existing VMs only be unstartable if they have a volume that is attached to multiple instances?
@tomponline existing VMs won't start if they have a volume that is attached to more than one instance or to a profile.
@tomponline I removed some commits, this should be ready for review
@tomponline Requested changes made and pushed
@hamistao did you address @ru-fu's suggestions yet on this? Thanks
@ru-fu Thanks for the suggestions, the changes were made. @tomponline The suggested changes were made as well. How should we proceed regarding VMs with shared block volumes not starting after the update?