Gluster volume status output is not consistent across gd2 pods after delete/reboot of a gd2 pod on a GCS setup
### Observed behavior After delete/reboot of any one gd2 pod, log in to any other gd2 pod and check the volume status. The volume status output keeps changing between runs. ```[root@gluster-kube3-0 /]#...
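A minimal repro sketch, assuming the pods run in a `gcs` namespace and keep the names shown in the output above (both the namespace and the pod names are assumptions):
```
# Delete one gd2 pod and let its StatefulSet recreate it
kubectl -n gcs delete pod gluster-kube3-0

# From a different gd2 pod, poll the volume status a few times;
# the reported brick states should stay consistent between runs
for i in 1 2 3; do
    kubectl -n gcs exec gluster-kube1-0 -- glustercli volume status
    sleep 10
done
```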
### Observed behavior ```[root@gluster-kube1-0 /]# glustercli volume info Volume Name: pvc-9e0f4b21-f162-11e8-86db-525400ba128f Type: Replicate Volume ID: b84f4c58-7b5a-4c40-9b59-014de1276781 State: Started Capactiy: 10 PB Transport-type: tcp Options: Number of Bricks: 3 Brick1: gluster-kube1-0.glusterd2.gcs:/var/run/glusterd2/bricks/pvc-9e0f4b21-f162-11e8-86db-525400ba128f/subvol1/brick1/brick...
I hit this issue suddenly and do not have exact steps to reproduce it. glustercli volume stop rep3 Volume stop failed Error: Request failed with HTTP Status code 500 etcdserver: requested lease not...
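Since the error points at an etcd lease problem, a first check could be the health of the etcd cluster backing glusterd2. A hedged sketch, assuming etcdctl v3 is available and the endpoint address is adjusted to the real etcd nodes:
```
# Check that the etcd endpoints backing glusterd2 are healthy
# (the endpoint address below is an assumption; use the real etcd nodes)
ETCDCTL_API=3 etcdctl --endpoints=http://10.70.35.219:2379 endpoint health

# List the active leases to see whether the lease glusterd2 refers to still exists
ETCDCTL_API=3 etcdctl --endpoints=http://10.70.35.219:2379 lease list
```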
-> If we stop the external etcd services, the glusterd2 services stay in a running state but glustercli commands do not work. -> If we bring back the external etcd services...
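A rough way to exercise this, assuming the external etcd runs as a systemd unit named `etcd` and glusterd2 as a unit named `glusterd2` (both unit names are assumptions):
```
# On the external etcd node
systemctl stop etcd

# On a gd2 node: the daemon is still up, but CLI calls are expected to fail
systemctl status glusterd2
glustercli volume info   # errors while etcd is down

# Bring etcd back and confirm the CLI recovers
systemctl start etcd
glustercli volume info
```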
### Observed behavior ``` glustercli volume create rep10 replica 3 dhcp35-219.lab.eng.blr.redhat.com:/bricks/brick0/rep10 dhcp35-33.lab.eng.blr.redhat.com:/bricks/brick0/rep10 dhcp35-194.lab.eng.blr.redhat.com:/bricks/brick0/rep10 --create-brick-dir Error getting brick UUIDs could not find UUIDs of bricks specified ``` ### Expected/desired behavior Volume...
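Since glusterd2 has to map each brick host back to a registered peer, one basic check is whether the FQDNs resolve from the node running glustercli. A small sketch using the hostnames from the command above:
```
# Confirm each brick host name resolves from the node running glustercli
for h in dhcp35-219.lab.eng.blr.redhat.com \
         dhcp35-33.lab.eng.blr.redhat.com \
         dhcp35-194.lab.eng.blr.redhat.com; do
    getent hosts "$h" || echo "unresolved: $h"
done
```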
### Observed behavior ``` glustercli volume create rep2 --replica=3 10.70.35.219:/bricks/brick0/rep2 10.70.35.194:/bricks/brick0/rep2 10.70.35.33:/bricks/brick0/rep2 --create-brick-dir Volume creation failed Response headers: X-Gluster-Cluster-Id: b3cb82b2-a247-40ac-ba83-d79b3846ca1b X-Gluster-Peer-Id: 828fc89c-371c-4f8c-a2d2-dfb472151106 X-Request-Id: 2d6bd361-9515-4cd3-a5bc-cd0ec6601d0b Response body: Transaction step vol-create.ValidateBricks failed...
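A hedged pre-check before retrying, run on each of the three nodes: confirm the brick parent path exists and note which filesystem it sits on (classic Gluster rejects bricks on the root filesystem, and GD2's ValidateBricks step may apply similar checks):
```
# Ensure the brick parent directory exists and inspect its filesystem
mkdir -p /bricks/brick0
stat -f -c 'fs type: %T' /bricks/brick0
df -h /bricks/brick0
```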
If we want to deploy multiple GCS setups on the same hypervisor, we need to edit the Vagrantfile to provide new VM names. I think we can have a better option to...
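One possible shape for such an option, purely as a sketch: if the Vagrantfile read a name prefix from the environment, two setups could coexist without editing the file (`GCS_VM_PREFIX` is an illustrative, non-existing variable; `VAGRANT_DOTFILE_PATH` is a standard Vagrant variable):
```
# Hypothetical usage if the Vagrantfile honoured a prefix from the environment
GCS_VM_PREFIX=gcs-a vagrant up
GCS_VM_PREFIX=gcs-b VAGRANT_DOTFILE_PATH=.vagrant-b vagrant up
```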
Currently, by default, the Vagrant script gives us a 3-node GCS setup (gd2 container cluster). It is not possible to extend the peers with the current GCS setup. But in OCS...
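For reference, adding a peer to a running gd2 cluster normally goes through `glustercli peer add`; the sketch below assumes a hypothetical fourth pod named `gluster-kube4-0` and the default gd2 peer port 24008 (both are assumptions):
```
# From any existing gd2 pod, add the new node by its glusterd2 peer address
glustercli peer add gluster-kube4-0.glusterd2.gcs:24008
```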
**Describe the bug** Wrote a script that creates a PVC and waits for the PVC to get bound, then writes into a file the time taken for the PVC...
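A minimal sketch of such a script, with the PVC name, size, and storage class as placeholder assumptions:
```
#!/bin/bash
# Create a PVC, wait for it to become Bound, and log the elapsed time.
pvc=test-pvc-1
start=$(date +%s)

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${pvc}
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: glusterfs-csi
  resources:
    requests:
      storage: 1Gi
EOF

# Poll until the claim reports phase Bound
until [ "$(kubectl get pvc "${pvc}" -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
done

echo "${pvc} bound in $(( $(date +%s) - start ))s" >> pvc-timings.log
```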
**Describe the bug** When creating 100 PVCs using a script with a 30-second gap, a few PVCs fail to create. **Steps to reproduce** -> Create 100 PVCs using the script, then observe...
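A repro sketch along those lines, with the storage class name as an assumption:
```
#!/bin/bash
# Create 100 PVCs with a 30-second gap, then report any that are not Bound.
for i in $(seq 1 100); do
    kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-scale-${i}
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: glusterfs-csi
  resources:
    requests:
      storage: 1Gi
EOF
    sleep 30
done

# List claims that did not reach Bound
kubectl get pvc --no-headers | awk '$2 != "Bound" {print $1, $2}'
```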