Karan Sandha
Checking the volume status of the volumes in gd2, the port number for a few of the volumes is set to -1.

### Observed behavior
Created 90+ PVCs and checked the...
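The exact sweep used to spot the bad ports is not shown above; a minimal sketch is below, assuming the gd2 CLI (`glustercli`) is run from inside one of the gluster pods and that `volume status` prints the brick port on its status lines.

```
# Hypothetical check: iterate over the gd2 volumes backing the PVCs and flag
# any whose status output contains a port of -1. Volume-name extraction and
# output parsing are illustrative, not the exact commands from this report.
for vol in $(glustercli volume list | grep -o 'pvc-[0-9a-f-]*' | sort -u); do
    if glustercli volume status "$vol" | grep -qw -- '-1'; then
        echo "volume $vol reports port -1"
    fi
done
```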
Cloning the issue into GD2 as per the developer's comment. Refer to issue https://github.com/gluster/gluster-csi-driver/issues/101.
Performance and other volume options exposed in volume info in gd2

### Observed behavior
```
Volume Name: pvc-ffdff841-075d-11e9-b5d2-525400242e67
Type: Replicate
Volume ID: 6fad1401-7bb3-4d3d-964d-eb60eee0e694
State: Started
Capacity: 50.0 MiB
Transport-type: tcp
...
```
1) Create a GCS cluster.
2) Start parallel PVC creation and wait for all the PVCs (a reproduction sketch follows the output below).
```
0s   pvc-2d9eaa04-0f4c-11e9-8d2d-525400e329db   500Mi   RWX   Delete   Pending   default/gcs-pvc711   glusterfs-csi
0s   pvc-2d9eaa04-0f4c-11e9-8d2d-525400e329db   500Mi...
```
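A minimal way to drive the parallel creation, assuming the `glusterfs-csi` storage class from the output above; the claim names and count are illustrative.

```
# Fire off many PVC creations in parallel against the glusterfs-csi class,
# then watch until every claim is Bound (or stuck Pending).
for i in $(seq 1 100); do
  cat <<EOF | kubectl create -f - &
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gcs-pvc$i
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
  storageClassName: glusterfs-csi
EOF
done
wait
kubectl get pvc -w
```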
1) Created 50 parallel PVC requests.
2) While the PVCs were being processed, rebooted the second node of the 3-node GCS cluster (see the sketch below).
```
[vagrant@kube1 ~]$ kubectl get pods -n...
```
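The reboot can be triggered out of band while the creation loop is still running; a sketch assuming the Vagrant-based setup implied by the `[vagrant@kube1 ~]$` prompts, with `kube2` as the second node.

```
# Restart the second node while PVC provisioning is in flight.
vagrant reload kube2
# Or, from inside the node itself:
# sudo reboot
```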
Steps performed:
1) Created the GCS cluster:
```
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME                                   READY   STATUS    RESTARTS   AGE
csi-attacher-glusterfsplugin-0         2/2     Running   0          2d21h
csi-nodeplugin-glusterfsplugin-45snb   2/2     Running   0          2d21h
...
```
Cold reset of the cluster leads to the etcd pods going into ERROR state.
1) Create a GCS cluster:
```
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME   READY   STATUS   RESTARTS...
```
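A sketch of the cold reset and the post-reset check, assuming the Vagrant-based three-node setup used elsewhere in these reports.

```
# Force off all nodes at once (approximating a cold reset), bring them back,
# then check whether the etcd pods in the gcs namespace recovered or errored.
vagrant halt --force
vagrant up
kubectl get pods -n gcs | grep etcd
```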
PVC going into Pending state after creating and deleting a single PVC in a loop.
```
[vagrant@kube1 ~]$ kubectl describe pvc
Name:          gcs-pvc1
Namespace:     default
StorageClass:  glusterfs-csi
Status:        Pending
Volume:
Labels:
Annotations:
...
```
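The loop itself is not shown above; a minimal sketch follows, assuming a hypothetical `pvc.yaml` that describes the `gcs-pvc1` claim from the describe output.

```
# Create and delete the same claim repeatedly, stopping as soon as it stays
# Pending instead of binding.
for i in $(seq 1 50); do
    kubectl create -f pvc.yaml
    phase=""
    # Poll for up to ~60s for the claim to bind.
    for t in $(seq 1 30); do
        phase=$(kubectl get pvc gcs-pvc1 -o jsonpath='{.status.phase}')
        [ "$phase" = "Bound" ] && break
        sleep 2
    done
    if [ "$phase" != "Bound" ]; then
        echo "iteration $i: gcs-pvc1 stuck in phase '$phase'"
        break
    fi
    kubectl delete pvc gcs-pvc1
done
```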