cloudstack
Live storage migration between Ceph pools on Ubuntu 20 does not work
Hello, team. I am trying to use live migration with volume migration on shared storage (Ceph) with KVM (Ubuntu 20), but it fails with:
2022-12-02 09:58:27,651 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-92:ctx-136c4df4 job-442/job-443 ctx-6b9bcc45) (logid:ca6829a0) Failed to migrated vm VM instance {id: "87", name: "i-2-87-VM", uuid: "0780ecb5-a878-4553-8d94-8538a3fff08c", type="User"} along with its volumes. Can't find strategy to move data. Source Host: ceph-osd01, Destination Host: ceph-osd02, Volume UUIDs: c252edd8-5119-4d70-b3b6-65bca49beeba
I'm trying to migrate a VM from one Ceph pool to another Ceph pool. Both are in the same zone and cluster, on CloudStack 4.17.
Thanks for opening your first issue here! Be sure to follow the issue template!
@happyalexkg is this something you were able to do in older versions? I am not sure if this was ever implemented.
cc @wido @weizhouapache do you know if we ever supported live VM migration with storage from one Ceph pool to another Ceph pool? I think if it's not in the same cluster, live VM migration won't work. @happyalexkg can you see if cold migration (VM off) works?
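For the cold-migration check, one possible sequence using CloudMonkey (`cmk`) would be to stop the VM, migrate the volume with the `migrateVolume` API, and start the VM again. The UUIDs below are placeholders, not values from this issue:

```shell
# Stop the VM first (cold migration requires the VM to be off)
cmk stop virtualmachine id=<vm-uuid>

# Migrate the volume to the destination Ceph primary storage pool
cmk migrate volume volumeid=<volume-uuid> storageid=<destination-pool-uuid>

# Start the VM again once the volume migration completes
cmk start virtualmachine id=<vm-uuid>
```

If this works while the live path fails, that would narrow the problem down to the missing live-migration strategy rather than the storage pools themselves.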
@rohityadavcloud @happyalexkg I have seen this error before when live migrating a VM from one Ceph storage pool to another Ceph storage pool. It works between NFS storage pools.
Technically Ceph can do an RBD migration from one Ceph storage pool to another, but this has to be within the same cluster. I don't think CloudStack supports this though.
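For reference, the native Ceph mechanism mentioned above is the `rbd migration` workflow (available since Nautilus). A minimal sketch, assuming both pools are in the same Ceph cluster and the pool/image names are placeholders, would look like this; note that CloudStack does not orchestrate these steps itself:

```shell
# Prepare the migration: links the source image to a new target image
rbd migration prepare source-pool/vm-disk target-pool/vm-disk

# Copy the data in the background; clients can keep using the image
rbd migration execute target-pool/vm-disk

# Finalize once the copy is complete and remove the source linkage
rbd migration commit target-pool/vm-disk
```

Any client still mapping the old image location must reopen it via the target before commit, which is part of why integrating this into a hypervisor-driven live migration is non-trivial.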