zfs-localpv
utilize zfs replication to migrate PVs between nodes
ZFS replication is a pretty unique tool in the block-storage world of OSS, so why not utilize it for something awesome like this?
An annotation on the PVC could trigger the sequence: snapshot, replicate, delete the source, then modify or recreate the PV object. A sketch of what that could look like follows.
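To make the idea concrete, here is a minimal sketch of the proposed trigger; the annotation key and the whole flow are hypothetical, nothing like this exists in zfs-localpv today:

```sh
# Hypothetical annotation (not an existing zfs-localpv API): ask the driver
# to snapshot the backing dataset, replicate it to node-b via zfs send/recv,
# delete the source, and recreate the PV with the new node affinity.
kubectl annotate pvc my-pvc zfs.openebs.io/migrate-to-node=node-b
```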
@artw do you mean asynchronous replication? Why not use Velero and create a schedule to take periodic backups?
Check this to migrate the PV to another node: https://github.com/openebs/zfs-localpv/blob/master/docs/backup-restore.md#restore-on-a-different-node.
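For reference, a minimal sketch of that flow with the Velero CLI; the backup and namespace names are illustrative, and restoring onto a different node additionally requires the node-mapping setup described in the linked doc:

```sh
# Back up the namespace containing the PVC (names illustrative).
velero backup create pv-migration --include-namespaces my-app

# Restore from that backup; with the node mapping from the linked doc
# in place, the volume is recreated on the target node.
velero restore create --from-backup pv-migration
```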
I do, and it works quite well, thanks.
It would be much more effective to move the data between nodes directly, utilizing the openebs-zfs-node DaemonSet.
Thinking about it further, you could even do it regularly to have HA PVs mirrored between all (or some) nodes 🤔
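The underlying ZFS mechanics are just snapshot plus incremental send/receive; a rough sketch, assuming the PV is backed by a dataset named pool/pvc-123 and SSH access between the nodes:

```sh
# Initial full copy of the dataset backing the PV to the target node.
zfs snapshot pool/pvc-123@sync1
zfs send pool/pvc-123@sync1 | ssh node-b zfs recv -F pool/pvc-123

# Subsequent runs only ship the delta since the last common snapshot,
# which is what would make periodic mirroring cheap.
zfs snapshot pool/pvc-123@sync2
zfs send -i @sync1 pool/pvc-123@sync2 | ssh node-b zfs recv -F pool/pvc-123
```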
@artw, sorry, I missed this thread. So basically, we need a data-migration framework from one node to another without using Velero. This can be done easily; as a matter of fact, Velero already uses node DaemonSets to back up and restore the data. We just need an operator that can migrate the data from one node to another. This feature is on our roadmap.
@pawanpraka1 It would be awesome to have something like syncoid integrated. That would make synchronizing and moving PVs between nodes a breeze. We are using KubeVirt, which does not support live migration of local-PV volumes, but with something like syncoid it would only require a quick reboot. We're currently doing this for backups (snapshot replication of the PVs to dedicated backup servers).
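For illustration, this is roughly what syncoid automates on top of plain send/recv (it handles the snapshot bookkeeping and incremental sends itself; dataset and host names here are placeholders):

```sh
# Replicate the dataset backing a PV to another node; --no-sync-snap
# reuses existing snapshots instead of creating a new one per run.
syncoid --no-sync-snap pool/pvc-123 root@node-b:pool/pvc-123
```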
There is a PR about this subject that is not yet linked from here, nor does it itself link back here:
- #336
It could possibly be considered a follow-up to @pawanpraka1's previous comment above.