Evacuate storage router per vpool
As part of Fargo we added the option to move volumes away from one node/storagedriver to another node.
If you want to update or remove a node with a storagedriver, moving all volumes by hand can be quite a task, hence let's introduce a maintenance mode.
- API (a rough Python sketch of the flow follows this list)
    - vdisksMoveAway()
        - on a storagedriver; moves its vDisks away to other storage drivers
        - parameters
            - dict of storage drivers to move the volumes to (default: all storagedrivers serving the vpool)
            - how to distribute the volumes (currently one option: round-robin)
        - returns a task ID
    - vdisksMoveBack()
        - on a storagedriver; moves the given vDisks back to the storage driver
        - parameters
            - dict of vDisks
        - returns a task ID
    - setMaintenance()
        - moves all vDisks away, moves all DTL targets away, moves all MDS instances away
        - sets a flag so no new vDisks can be created on the storagedriver
        - once everything has been moved, sets the status of the storage driver to "maintenance" (while the moves are in progress the status is "going into maintenance")
        - parameters
            - state: boolean
    - checkState()
        - checks whether a storagedriver is in maintenance mode
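Since the API is only specified at the signature level here, below is a minimal, self-contained Python sketch of the intended flow. All names (`StorageDriver`, `vdisks_move_away`, the status strings) are illustrative assumptions, not the actual framework code, and the real calls would be asynchronous tasks returning a task ID rather than synchronous functions.

```python
# Illustrative sketch only: class and function names are assumptions,
# not the real framework API. The real calls would be async tasks
# returning a task ID; this models the flow synchronously.
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class StorageDriver:
    name: str
    status: str = 'ok'          # 'ok' | 'going into maintenance' | 'maintenance'
    accepts_new_vdisks: bool = True
    vdisks: list = field(default_factory=list)

def vdisks_move_away(source, targets, strategy='roundrobin'):
    """Move all vDisks off `source`, distributing them over `targets`
    (the default would be all storagedrivers serving the vpool)."""
    if strategy != 'roundrobin':
        raise ValueError('roundrobin is currently the only strategy')
    ring = cycle(targets)
    moved = {}                          # vDisk -> target, kept for move-back
    while source.vdisks:
        vdisk = source.vdisks.pop()
        target = next(ring)
        target.vdisks.append(vdisk)
        moved[vdisk] = target
    return moved

def vdisks_move_back(source, moved):
    """Move the given dict of vDisks back to `source`."""
    for vdisk, target in moved.items():
        target.vdisks.remove(vdisk)
        source.vdisks.append(vdisk)

def set_maintenance(source, targets, state=True):
    """Evacuate the storagedriver and flag it as in maintenance
    (the DTL and MDS moves are elided in this sketch)."""
    if state:
        source.accepts_new_vdisks = False          # block new vDisk creation
        source.status = 'going into maintenance'   # intermediate status
        moved = vdisks_move_away(source, targets)
        source.status = 'maintenance'
        return moved
    source.status = 'ok'
    source.accepts_new_vdisks = True
    return {}

def check_state(driver):
    """Check whether a storagedriver is in maintenance mode."""
    return driver.status == 'maintenance'
```

A quick usage example under the same assumptions:

```python
a = StorageDriver('sd-a', vdisks=['vd-1', 'vd-2', 'vd-3'])
b, c = StorageDriver('sd-b'), StorageDriver('sd-c')
moved = set_maintenance(a, targets=[b, c])
assert check_state(a) and not a.vdisks and not a.accepts_new_vdisks
```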
- GUI
    - on the storage router detail page, add a Maintenance action (call the setMaintenance API for each vpool exposed on the storage router; see the sketch below). Icon: http://fontawesome.io/icon/cogs/ .
    - when in maintenance mode, the status of the storage router should be shown in orange and clearly labelled on the detail page.
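Continuing the sketch above, the GUI action would fan out over every storagedriver (one per exposed vpool) on the storage router; `storagedrivers` and `targets_for` are again assumed names, not framework identifiers.

```python
def set_storagerouter_maintenance(storagedrivers, targets_for, state=True):
    """GUI 'Maintenance' action: toggle maintenance for each storagedriver
    (one per exposed vpool) on the storage router.

    `targets_for` maps a storagedriver name to the other storagedrivers
    serving the same vpool (assumed helper, see the sketch above)."""
    for storagedriver in storagedrivers:
        set_maintenance(storagedriver, targets_for[storagedriver.name], state=state)
```

In the real framework each call would be an asynchronous task, so the GUI would collect the task IDs and only flip the storage router status to orange once all of them have completed.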
Questions:
- maybe add a domain as parameter to limit the selected storage drivers to a certain domain?
- should we introduce something on the voldrv side?