Error Detaching Volume on GlusterFS Primary Storage
ISSUE TYPE
- Bug Report
COMPONENT NAME
Primary Storage (GlusterFS)
CLOUDSTACK VERSION
4.17.1.0
CONFIGURATION
OS / ENVIRONMENT
CentOS Linux 7 (Core)
qemu-img version 2.12.0
glusterfs 6.1
SUMMARY
Can't detach volumes created on GlusterFS primary storage from a VM. Tested on two different CloudStack installations; everything works fine with other primary storage types. The volume can be detached and removed if the VM is stopped, but not while it is running.
STEPS TO REPRODUCE
- Create GlusterFS Primary Storage with tag
- Create Disk Offering that uses the just-created GlusterFS Primary Storage's tag
- Create Volume using just created Disk Offering
- Attach Volume to VM
- Detach Volume from VM
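The steps above can be sketched with CloudMonkey (`cmk`). This is a hypothetical dry run: the `gluster` tag, pool URL, names and UUIDs are placeholders, not values from the report, and the wrapper only prints each call instead of executing it.

```shell
# Dry-run wrapper: prints each CloudMonkey call instead of executing it.
# On a real installation, drop the wrapper and let `cmk` talk to the API.
cmk() { echo "cmk $*"; }

# Placeholder UUIDs -- on a real system these come from the API responses.
ZONE_ID=zone-uuid
OFFERING_ID=offering-uuid
VOLUME_ID=volume-uuid
VM_ID=vm-uuid

cmk create storagepool name=gluster-primary zoneid=$ZONE_ID \
  url=gluster://gluster-host/gv0 tags=gluster
cmk create diskoffering name=gluster-offering displaytext=gluster-offering \
  disksize=10 storagetype=shared tags=gluster
cmk create volume name=test-detach zoneid=$ZONE_ID diskofferingid=$OFFERING_ID
cmk attach volume id=$VOLUME_ID virtualmachineid=$VM_ID
cmk detach volume id=$VOLUME_ID   # this is the step that fails on GlusterFS
```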
EXPECTED RESULTS
- Success detaching Volume
ACTUAL RESULTS
- (test-detach) Failed to detach volume test-detach from VM test-vm-for-gluster ; com.cloud.exception.InternalErrorException: disk: /mnt/b440b605-a876-35f8-be84-2b5696650654/5605035d-e060-42a6-89a1-6f9c949463fc is not attached before
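Since the error claims the disk "is not attached", one way to investigate is to dump the VM's libvirt domain XML on the KVM host (`virsh dumpxml <domain>`) and compare each disk's source against the path in the message. The sketch below shows how such a dump can be inspected; the XML is a hand-made example (not taken from the report), with one file-backed disk and one gluster-backed network disk, which libvirt addresses by volume/image name rather than by a `/mnt/...` mountpoint.

```python
# List each disk's (type, source) from a libvirt domain XML dump, to
# compare against the path CloudStack reports in the detach error.
import xml.etree.ElementTree as ET

# Hand-made example XML -- a real dump comes from `virsh dumpxml <domain>`.
domain_xml = """
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <source file='/mnt/b440b605-a876-35f8-be84-2b5696650654/root.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='network' device='disk'>
      <source protocol='gluster' name='gv0/5605035d-e060-42a6-89a1-6f9c949463fc'>
        <host name='gluster-host' port='24007'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def disk_sources(xml_text):
    """Return a (type, source) pair for every disk in the domain XML."""
    root = ET.fromstring(xml_text)
    out = []
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if disk.get("type") == "file":
            out.append(("file", src.get("file")))
        elif disk.get("type") == "network":
            out.append((src.get("protocol"), src.get("name")))
    return out

for dtype, source in disk_sources(domain_xml):
    print(dtype, source)
```

If the running VM lists the data disk as a network disk while the error message refers to a filesystem path, that mismatch would be a useful detail to include in the report.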
@kalik1, I know of no ACS installation using GlusterFS. Can you add more details if you need help? Alternatively, send a mail to the users mailing list.
@kalik1 is this a genuine error or a false positive, i.e. is the volume indeed not detached, or is the error message misleading? Also, as I don't have a GlusterFS setup, do you have any way of investigating the issue?
@DaanHoogland this is a genuine error. I didn't find a solution even after inspecting the logs. I had to select another primary storage solution (NFS in my case), which works as expected.
I think it could be related to how libvirtd handles GlusterFS volumes, but I'm not an expert in this. I tried to look around in the libvirt/QEMU logs, but I couldn't find anything.
How can I help? What info/logs can I post to investigate further?
Ok, @kalik1, I think the first thing is to look at the libvirtd project and ask around there whether the issue is known. Another thing to try is detaching the volume directly in kvm/qemu/libvirtd and seeing whether that fails in the same way (to eliminate or confirm libvirtd as the culprit). Any logs could be helpful, of course, but only if they contain clear error messages.
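A minimal sketch of that direct libvirt test, assuming `virsh` on the KVM host: the domain name `i-2-10-VM` and target `vdb` are placeholders (find the real ones with `virsh list` and `virsh domblklist`), and the wrapper turns this into a dry run that just prints the commands.

```shell
# Dry-run wrapper: prints the commands instead of executing them.
# On the actual KVM host, remove it and run the real `virsh`.
virsh() { echo "virsh $*"; }

virsh domblklist i-2-10-VM              # map disk source paths to targets (vda, vdb, ...)
virsh detach-disk i-2-10-VM vdb --live  # try detaching the data disk from the running VM
```

If `virsh detach-disk` succeeds while CloudStack's detach fails, that would point at the agent's bookkeeping rather than libvirt itself.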
Good luck and good hunting; I'm happy to help, but I'm afraid that won't be much.