Restore running VMs after upgrade in single node cluster
Problem: When you upgrade a single-node cluster, the upgrade shuts down all VMs, and there have been requests to bring the VMs back to their previous state after the upgrade.
Solution: Add a new setting, restoreVM, to UpgradeConfig. When enabled, this setting triggers Harvester to store the namespace/name of each running VM in a secret before the upgrade, and to start those VMs after the upgrade based on the information stored in the secret. Note that this setting only takes effect when the cluster is a single node. Paused VMs are left in the stopped state after the upgrade.
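The mechanics would look roughly like the following sketch, written against controller-runtime and the KubeVirt API types. The function names, secret key, and list encoding are illustrative assumptions, not the actual implementation:

```go
package upgrade

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// storeRunningVMs records the namespace/name of every running VM in a
// secret before the upgrade shuts them down. Paused VMs do not match the
// Running status, so they are skipped and end up stopped after the upgrade.
func storeRunningVMs(ctx context.Context, c client.Client, key client.ObjectKey) error {
	vmList := &kubevirtv1.VirtualMachineList{}
	if err := c.List(ctx, vmList); err != nil {
		return err
	}
	var running []string
	for _, vm := range vmList.Items {
		if vm.Status.PrintableStatus == kubevirtv1.VirtualMachineStatusRunning {
			running = append(running, vm.Namespace+"/"+vm.Name)
		}
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: key.Name, Namespace: key.Namespace},
		StringData: map[string]string{"runningVMs": strings.Join(running, ",")},
	}
	return c.Create(ctx, secret)
}

// restoreRunningVMs reads the list back after the upgrade and starts each
// recorded VM by setting spec.running to true.
func restoreRunningVMs(ctx context.Context, c client.Client, key client.ObjectKey) error {
	secret := &corev1.Secret{}
	if err := c.Get(ctx, key, secret); err != nil {
		return err
	}
	for _, entry := range strings.Split(string(secret.Data["runningVMs"]), ",") {
		parts := strings.SplitN(entry, "/", 2)
		if len(parts) != 2 {
			continue
		}
		vm := &kubevirtv1.VirtualMachine{}
		if err := c.Get(ctx, client.ObjectKey{Namespace: parts[0], Name: parts[1]}, vm); err != nil {
			return err
		}
		running := true
		vm.Spec.Running = &running // equivalent to `virtctl start`
		if err := c.Update(ctx, vm); err != nil {
			return err
		}
	}
	return nil
}
```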
Related Issue: https://github.com/harvester/harvester/issues/4005
Test plan:
- Build an ISO based on this PR/branch.
- Use iPXE to create a single-node cluster.
- Create 4 VMs: 2 running, 1 stopped, and 1 paused.
- Apply the following YAML to set restoreVM: true in the upgrade-config Setting:
```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: upgrade-config
value: '{"imagePreloadOption":{"strategy":{"type":"sequential"}}, "restoreVM": true}'
```
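For reference, the JSON string in value maps onto a structure along these lines; the Go type names below are inferred from the JSON shown above and are assumptions, not Harvester's actual definitions:

```go
package upgrade

import "encoding/json"

// UpgradeConfig mirrors the JSON stored in the upgrade-config Setting value.
type UpgradeConfig struct {
	ImagePreloadOption ImagePreloadOption `json:"imagePreloadOption,omitempty"`
	RestoreVM          bool               `json:"restoreVM,omitempty"`
}

type ImagePreloadOption struct {
	Strategy Strategy `json:"strategy,omitempty"`
}

type Strategy struct {
	Type string `json:"type,omitempty"`
}

// parseUpgradeConfig decodes the setting's value string.
func parseUpgradeConfig(value string) (*UpgradeConfig, error) {
	config := &UpgradeConfig{}
	if err := json.Unmarshal([]byte(value), config); err != nil {
		return nil, err
	}
	return config, nil
}
```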
- Use the following YAML to upgrade Harvester; note that the ISO referenced by isoURL must be the same one built in step 1:
```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Version
metadata:
  name: master
  namespace: harvester-system
spec:
  isoURL: http://192.168.100.1:8000/harvester-master-amd64.iso
---
apiVersion: harvesterhci.io/v1beta1
kind: Upgrade
metadata:
  annotations:
    harvesterhci.io/skip-version-check: "true"
    harvesterhci.io/skipWebhook: "true"
  generateName: hvst-upgrade-
  namespace: harvester-system
spec:
  version: "master"
  logEnabled: false
```
- Verify that all VMs that were running before the upgrade are still in the Running state after the upgrade, and that the VM that was paused before the upgrade is stopped after the upgrade.