vsphere-csi-driver
Post upgrade to CSI 2.4.2 from 2.2.0 - error "failed to get shared datastores in kubernetes cluster"
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: Upgraded the CSI driver from 2.2.0 to 2.4.2 following the document https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-3F277B52-68CC-4125-AD0F-E7293940B4B4.html. After the deployment, creating a PVC fails with the error below:
> Events:
> Type Reason Age From Message
> ---- ------ ---- ---- -------
> Normal WaitForFirstConsumer 10m persistentvolume-controller waiting for first consumer to be created before binding
> Warning ProvisioningFailed 10m csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca failed to provision volume with StorageClass "mongo-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-166441 [VirtualCenterHost: 172.16.32.10, UUID: 422180a2-b068-15cb-501d-22fc1df1a0ad, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-2, VirtualCenterHost: 172.16.32.10]]
> Warning ProvisioningFailed 2m11s (x7 over 10m) csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca failed to provision volume with StorageClass "mongo-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-166439 [VirtualCenterHost: 172.16.32.10, UUID: 4221912b-cf62-5873-be79-1b215ef9dd36, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-2, VirtualCenterHost: 172.16.32.10]]
> Normal Provisioning 53s (x11 over 10m) csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca External provisioner is provisioning volume for claim "default/pvc-demo-2"
> Warning ProvisioningFailed 53s (x3 over 10m) csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca failed to provision volume with StorageClass "mongo-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-166440 [VirtualCenterHost: 172.16.32.10, UUID: 4221b82a-2b93-342f-7e63-4dec57b3784e, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-2, VirtualCenterHost: 172.16.32.10]]
> Normal ExternalProvisioning 28s (x43 over 10m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
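For reference, the test PVC looks roughly like this (a minimal sketch; the name and StorageClass are taken from the events above, the size is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-2
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi   # placeholder size
  storageClassName: mongo-sc
```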
We were able to create PVCs in version 2.2.0, so this is not a permission issue.
This is our cloud config:
[Global]
insecure-flag = "true"
user = <>
password = <>
port =
secret-namespace = "kube-system"
[VirtualCenter "172.16.32.10"]
datacenters = "sadc-npe-icon-dc"
[Labels]
region = k8s-regions
zone = k8s-zones
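The config above is consumed by the driver as a Kubernetes Secret. A minimal sketch of how it is (re)created, assuming the file is saved as csi-vsphere.conf and the default secret name/namespace from the vSphere CSI 2.4.x manifests:

```sh
# assumes the config above is saved locally as csi-vsphere.conf;
# for 2.4.x the driver runs in vmware-system-csi (kube-system on older releases)
kubectl create secret generic vsphere-config-secret \
  --from-file=csi-vsphere.conf \
  --namespace=vmware-system-csi
```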
These are our nodes; the VM IDs mentioned in the error are those of the master nodes.
> kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-master1 Ready controlplane,etcd 89d v1.21.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-8gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-a,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-master1,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-worker1 Ready worker 89d v1.21.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-32gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-a,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-worker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-master1 Ready controlplane,etcd 89d v1.21.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-8gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-master1,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-worker1 Ready worker 89d v1.21.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-32gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-worker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-master1 Ready controlplane,etcd 89d v1.21.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-8gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-master1,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-worker1 Ready worker 89d v1.21.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-32gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-worker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
Other info:
> kubectl describe nodes | grep "ProviderID"
ProviderID: vsphere://4221912b-cf62-5873-be79-1b215ef9dd36
ProviderID: vsphere://4221ec20-7e72-4d83-321a-52abfcd760e0
ProviderID: vsphere://422180a2-b068-15cb-501d-22fc1df1a0ad
ProviderID: vsphere://4221dbbd-be70-6d99-4541-37124fd222ee
ProviderID: vsphere://4221b82a-2b93-342f-7e63-4dec57b3784e
ProviderID: vsphere://422179e1-53ad-cc38-b9ea-9fe01eaf32ba
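To map these UUIDs back to node names, a standard kubectl query works (a hedged example; no driver-specific tooling assumed):

```sh
kubectl get nodes -o custom-columns='NAME:.metadata.name,PROVIDER_ID:.spec.providerID'
```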
>
> kubectl get CSINode
NAME DRIVERS AGE
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-master1 1 89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-worker1 1 89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-master1 1 89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-worker1 1 89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-master1 1 89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-worker1 1 89d
>
> kubectl describe sc mongo-sc
Name: mongo-sc
IsDefaultClass: No
Annotations: <none>
Provisioner: csi.vsphere.vmware.com
Parameters: csi.storage.k8s.io/fstype=ext4,storagepolicyname=k8s
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
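For completeness, the equivalent StorageClass manifest, reconstructed from the describe output above, would look roughly like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-sc
provisioner: csi.vsphere.vmware.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  storagepolicyname: k8s
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```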
What you expected to happen:
I would expect the PVC to be created.
How to reproduce it (as minimally and precisely as possible):
Create a K8s cluster with CSI 2.2.0 and upgrade the CSI to 2.4.2 as per the link above.
Anything else we need to know?:
Environment:
- csi-vsphere version: 2.4.2
- vsphere-cloud-controller-manager version:
- Kubernetes version: 1.21.15
- vSphere version: Version: 7.0.2
- OS (e.g. from /etc/os-release): Centos 6
- Kernel (e.g. uname -a):
- Install tools:
- Others:
@SrinivasMajeti make sure to specify the username in the vSphere config secret with the domain name.
The vCenter Server username must be specified along with the domain name, for example user = "userName@domainName" or user = "domainName\username". If you don't specify the domain name for Active Directory users, the vSphere Container Storage Plug-in will not function properly.
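For example, the [Global] section would carry the domain-qualified account (a sketch with a placeholder user):

```ini
[Global]
insecure-flag = "true"
# placeholder account; the username must include the domain
user = "svc-k8s-csi@example.local"
password = "<password>"
```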
@SrinivasMajeti Are you still hitting into this issue, or can we close the issue?
/assign @adikul30
@gohilankit: GitHub didn't allow me to assign the following users: adikul30.
Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @adikul30
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
Enabled the flags for zone and region to fix the issue. For some reason they were commented out.
@SrinivasMajeti could you please specify where you enabled these flags? (we have the same issue)
@vahidkhosh In the CSI controller Deployment, in the csi-provisioner container section:
```yaml
- name: csi-provisioner
  image: k8s.gcr.io/sig-storage/csi-provisioner:v3.3.0
  args:
    - "--v=4"
    - "--timeout=300s"
    - "--csi-address=$(ADDRESS)"
    - "--kube-api-qps=100"
    - "--kube-api-burst=100"
    - "--leader-election"
    - "--default-fstype=ext4"
    # needed only for topology aware setup
    #- "--feature-gates=Topology=true"
    #- "--strict-topology"
```
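For reference, the fix described above amounts to uncommenting the topology flags, so the tail of the args block ends up roughly like this (a sketch, same sidecar as above):

```yaml
    - "--default-fstype=ext4"
    # topology aware setup enabled
    - "--feature-gates=Topology=true"
    - "--strict-topology"
```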