ceph-csi
rbd: check volume details from original volumeID
Check the volume details for the existing volumeID first. If details like the OMAP data, RBD image, or pool do not exist, try to use the clusterIDMapping to look up the correct pieces of information.
fixes: #2929
Signed-off-by: Madhu Rajanna [email protected]
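Below is a minimal Go sketch of the fallback described above, not the actual ceph-csi implementation: all names (`resolveVolume`, `lookupVolumeDetails`, `remapVolumeID`, `errNotFound`) are hypothetical, and plain maps stand in for the journal/OMAP lookup and the clusterIDMapping configuration.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the pool/OMAP/RBD-image "not found" errors that
// the real driver distinguishes.
var errNotFound = errors.New("volume details not found")

// lookupVolumeDetails is a stand-in for decoding the volumeID and fetching the
// pool, journal OMAP entry and RBD image it points to.
func lookupVolumeDetails(volumeID string, backend map[string]string) (string, error) {
	if image, ok := backend[volumeID]; ok {
		return image, nil
	}
	return "", errNotFound
}

// remapVolumeID is a stand-in for rewriting the clusterID/poolID encoded in
// the volumeID via the clusterIDMapping configuration.
func remapVolumeID(volumeID string, clusterIDMapping map[string]string) (string, bool) {
	mapped, ok := clusterIDMapping[volumeID]
	return mapped, ok
}

// resolveVolume tries the original volumeID first and only falls back to the
// clusterIDMapping when the details cannot be found.
func resolveVolume(volumeID string, backend, clusterIDMapping map[string]string) (string, error) {
	details, err := lookupVolumeDetails(volumeID, backend)
	if err == nil {
		return details, nil
	}
	if !errors.Is(err, errNotFound) {
		return "", err // unrelated failure, do not retry
	}
	mappedID, ok := remapVolumeID(volumeID, clusterIDMapping)
	if !ok {
		return "", err // nothing to remap, keep the original error
	}
	return lookupVolumeDetails(mappedID, backend)
}

func main() {
	backend := map[string]string{"volid-on-this-cluster": "csi-vol-1234"}
	mapping := map[string]string{"volid-from-peer-cluster": "volid-on-this-cluster"}

	// The original volumeID is unknown locally; the mapping resolves it.
	image, err := resolveVolume("volid-from-peer-cluster", backend, mapping)
	fmt.Println(image, err) // csi-vol-1234 <nil>
}
```

The point is the ordering: the original volumeID is tried first, and the clusterIDMapping is consulted only when the pool, OMAP data, or RBD image cannot be found.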
Facing some CI issues, and it looks like they are due to the rbd issue https://tracker.ceph.com/issues/54593
/retest all
/retest all
Something is wrong with CI; restarting all tests.
hmmm.. just wondering how it reacts to a migrated volume handle.
@humblec what is the concern here?
/test all
Looking some more into this, ideally we should be good. That said, the static volume ID is untouched and should follow the code path as before; only new volume handles are affected by the change. @Madhu-1 can you please double confirm?
Yes, this is not for static volumes; it is for dynamic volumes, and it is already getting called in the DeleteVolume code: https://github.com/ceph/ceph-csi/blob/c3e35f88493ddd626f6818a48f30900d8166f4e2/internal/rbd/controllerserver.go#L762-L766
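For context, here is a hedged sketch of that distinction: statically provisioned volumes keep their existing path, while only dynamically provisioned volumes go through the volumeID lookup (and its clusterIDMapping fallback) before the backing image is removed. The names below (`deleteVolumeSketch`, `deleteStaticVolume`, `resolveAndDeleteVolume`) are illustrative stand-ins, not the actual DeleteVolume code linked above.

```go
package main

import "fmt"

// Illustrative stubs only; the real driver resolves an rbdVolume from the
// volumeID (falling back to clusterIDMapping) and then deletes the RBD image.
func deleteStaticVolume(volumeID string) error {
	fmt.Println("static volume, unchanged code path:", volumeID)
	return nil
}

func resolveAndDeleteVolume(volumeID string) error {
	fmt.Println("dynamic volume, lookup with clusterIDMapping fallback:", volumeID)
	return nil
}

// deleteVolumeSketch shows where the new lookup applies: only the dynamic path.
func deleteVolumeSketch(volumeID string, isStatic bool) error {
	if isStatic {
		return deleteStaticVolume(volumeID)
	}
	return resolveAndDeleteVolume(volumeID)
}

func main() {
	_ = deleteVolumeSketch("csi-vol-example", false)
	_ = deleteVolumeSketch("static-vol-example", true)
}
```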
/retest ci/centos/mini-e2e-helm/k8s-1.21
/retest ci/centos/mini-e2e-helm/k8s-1.23
/retest ci/centos/mini-e2e/k8s-1.21
/retest ci/centos/mini-e2e/k8s-1.23
```
Mar 14 13:27:18.892: backend images not matching kubernetes resource count,image count 22 kubernetes resource count 21
backend image Info:
[csi-vol-61a6638c-a39a-11ec-b84e-46359091d2ce csi-vol-637f4854-a39a-11ec-b84e-46359091d2ce csi-vol-637f4854-a39a-11ec-b84e-46359091d2ce-temp csi-vol-63bdd485-a39a-11ec-b84e-46359091d2ce-temp csi-vol-63db0526-a39a-11ec-b84e-46359091d2ce csi-vol-63db0526-a39a-11ec-b84e-46359091d2ce-temp csi-vol-63f96934-a39a-11ec-b84e-46359091d2ce csi-vol-63f96934-a39a-11ec-b84e-46359091d2ce-temp csi-vol-643f9696-a39a-11ec-b84e-46359091d2ce csi-vol-643f9696-a39a-11ec-b84e-46359091d2ce-temp csi-vol-6457b69e-a39a-11ec-b84e-46359091d2ce csi-vol-6457b69e-a39a-11ec-b84e-46359091d2ce-temp csi-vol-6475acf7-a39a-11ec-b84e-46359091d2ce csi-vol-6475acf7-a39a-11ec-b84e-46359091d2ce-temp csi-vol-6488cc64-a39a-11ec-b84e-46359091d2ce csi-vol-6488cc64-a39a-11ec-b84e-46359091d2ce-temp csi-vol-64c087be-a39a-11ec-b84e-46359091d2ce csi-vol-64c087be-a39a-11ec-b84e-46359091d2ce-temp csi-vol-681b9d97-a39a-11ec-b84e-46359091d2ce csi-vol-681b9d97-a39a-11ec-b84e-46359091d2ce-temp csi-vol-6876ee6d-a39a-11ec-b84e-46359091d2ce csi-vol-6876ee6d-a39a-11ec-b84e-46359091d2ce-temp
```
Failed due to known errors.
@mergifyio rebase
/test ci/centos/mini-e2e/k8s-1.21
/retest ci/centos/mini-e2e-helm/k8s-1.21
/retest ci/centos/mini-e2e/k8s-1.21
@mergifyio rebase
rebase
✅ Branch has been successfully rebased
This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏
E0322 05:03:30.732970 150121 utils.go:200] ID: 110 Req-ID: 0001-0024-4f124b93-57df-4ad4-a74c-e79193bc0971-0000000000000007-64d7469e-a99d-11ec-98e9-7af2c48ecac9 GRPC error: rpc error: code = Internal desc = error generating volume (0001-0024-4f124b93-57df-4ad4-a74c-e79193bc0971-0000000000000007-64d7469e-a99d-11ec-98e9-7af2c48ecac9): pool not found: pool ID(7) not found in Ceph cluster
Frequently hitting this one; looks like it's due to the rbd issue https://tracker.ceph.com/issues/54593
@Mergifyio rebase
rebase
✅ Branch has been successfully rebased
@Madhu-1 maybe rebase?
@mergifyio rebase
rebase
✅ Branch has been successfully rebased
Common failure:
Apr 4 07:53:17.339: failed to validate clones in different pool: creating PVCs and applications failed, 10 errors were logged
Frequently hitting this one; looks like it's due to the rbd issue https://tracker.ceph.com/issues/54593. Will rebase once we update the base image with Quincy.
OK, the backport for Pacific is pending too: ceph/ceph#45586. Should we track this as an issue?