cloud-provider-openstack
[cinder-csi-plugin, occm] Unable to multiattach PV across pods on different nodes
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug - Unable to multiattach PV across pods on different nodes
What happened:
We are trying to attach a PV to multiple pods scheduled on multiple worker nodes in a Kubernetes cluster; the worker nodes are OpenStack VMs. The storage class is set to "multiattach=True", but when we use a deployment to attach the volume across multiple worker nodes, it fails with the error below.
What you expected to happen: The volume should attach to all the pods across all worker nodes.
How to reproduce it:
- Create the SC (a sketch of example manifests follows this list)
- Create the PVC
- Attach the PVC to a deployment and create it
- Scale up the pods so that they are scheduled on all worker nodes
- Some pods will remain in ContainerCreating state because the volume attachment fails
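For reference, a minimal sketch of the kind of manifests involved; the resource names, volume type, and size here are hypothetical placeholders (not taken from the original report), and it assumes a multiattach-capable Cinder volume type already exists (see the volume-type commands later in this thread):

```shell
# Hypothetical example; names, size, and the Cinder volume type
# "multiattach" are placeholders, not values from the original report.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-multiattach
provisioner: cinder.csi.openstack.org
parameters:
  type: multiattach            # must name a multiattach-capable Cinder volume type
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteMany            # RWX is needed to attach the volume on several nodes
  storageClassName: csi-sc-multiattach
  resources:
    requests:
      storage: 1Gi
EOF
```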
Anything else we need to know?: We are using cinder.csi.openstack.org as the CSIDriver, based on occm.
Error log:
Events:
  Type     Reason              Age                From                     Message
  Normal   Scheduled           29s                default-scheduler        Successfully assigned default/task-pv-deployment-77bcbf5fc6-48lph to workernode-2
  Warning  FailedAttachVolume  12s (x6 over 28s)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-07f4902c-3ee0-4de9-b1be-7dd4bc19da93" : rpc error: code = Internal desc = ControllerPublishVolume Attach Volume failed with error disk 5a7764f0-d30c-44ca-a05d-b939173116b3 is attached to a different instance (06d963d3-d576-41e1-9bf6-162e179e571f)
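That error is raised by Cinder/Nova rather than by Kubernetes: unless the volume was created from a multiattach-capable volume type, Nova refuses a second attachment. One way to check, using the disk ID from the log above (assuming the openstack CLI is configured against the same cloud):

```shell
# Inspect the Cinder volume behind the failing attachment;
# "multiattach" should report True and "type" should be the multiattach-capable type.
openstack volume show 5a7764f0-d30c-44ca-a05d-b939173116b3 -c multiattach -c type
```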
Environment:
- openstack-cloud-controller-manager (or other related binary) version:
- OpenStack version: 5.5.0
- Others:
- Kubernetes version: 1.19.4, 1.23.1
- quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
- quay.io/k8scsi/csi-attacher:v2.1.1
- quay.io/k8scsi/csi-provisioner:v1.4.0
- quay.io/k8scsi/csi-snapshotter:v1.2.2
- quay.io/k8scsi/csi-resizer:v0.4.0
- docker.io/k8scloudprovider/cinder-csi-plugin:v1.18.0
- docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.18.0
@madeinindiadot: The label(s) kind/-, kind/unable, kind/to, kind/multiattach, kind/pv, kind/across, kind/pods, kind/on, kind/different, kind/nodes cannot be applied, because the repository doesn't have them.
Have you tried going through this? https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes
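Per that doc, the Cinder side needs a volume type with the multiattach property before the StorageClass `type` parameter can reference it. Roughly, assuming admin access to Cinder (the volume type name is an arbitrary example):

```shell
# Create a multiattach-capable volume type and mark it as such; the name
# "multiattach" is arbitrary but must match the StorageClass "type" parameter.
openstack volume type create multiattach
openstack volume type set --property multiattach="<is> True" multiattach
```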
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.