Karan Sandha

Results 22 comments of Karan Sandha

I don't think it's a lack of space; creating 1 GB shouldn't take much space, and I have plenty of space on my machine. I have the setup...

```
[root@gluster-kube1-0 /]# curl -X GET http://10.233.65.5:24007/v1/devices
[{"device":"/dev/vdc","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"75813145-bbe3-4761-929d-0ab7b096ef08"},
 {"device":"/dev/vdd","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"75813145-bbe3-4761-929d-0ab7b096ef08"},
 {"device":"/dev/vde","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"75813145-bbe3-4761-929d-0ab7b096ef08"},
 {"device":"/dev/vdc","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"c6bbc9f0-008d-4dc0-8859-d554fa2f3fa7"},
 {"device":"/dev/vdd","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"c6bbc9f0-008d-4dc0-8859-d554fa2f3fa7"},
 {"device":"/dev/vde","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"c6bbc9f0-008d-4dc0-8859-d554fa2f3fa7"},
 {"device":"/dev/vdc","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"d34f9e43-e21c-4544-b5ea-7fce001b9fbd"},
 {"device":"/dev/vdd","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"d34f9e43-e21c-4544-b5ea-7fce001b9fbd"},
 {"device":"/dev/vde","state":"enabled","free-size":1099373215744,"total-size":1099373215744,"used-size":0,"extent-size":4194304,"device-used":false,"peer-id":"d34f9e43-e21c-4544-b5ea-7fce001b9fbd"}]
```
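For a quick readability check on output like the above, the same endpoint can be summarized per peer on the command line. This is a minimal sketch, assuming `jq` is installed and the endpoint shown above is reachable (the IP and port are specific to this cluster):

```bash
# Sketch: summarize free space per peer from the glusterd2 devices API (not part of glustercli).
ENDPOINT="http://10.233.65.5:24007/v1/devices"

curl -s -X GET "$ENDPOINT" | jq -r '
  group_by(."peer-id")[]
  | "\(.[0]."peer-id"): \(map(."free-size") | add / 1024 / 1024 / 1024 | floor) GiB free across \(length) devices"'
```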

[glusterd2_port.log](https://github.com/gluster/glusterd2/files/2656475/glusterd2_port.log)

```
[root@gluster-kube2-0 /]# glustercli volume status
Volume : pvc-16238f77-fc49-11e8-b05d-52540001ce00
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
| BRICK ID | HOST | PATH | ONLINE | PORT | PID |
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
| 6b10e8f5-1f66-4ba2-9cb0-29026fb5a0b8 | gluster-kube3-0.glusterd2.gcs |...
```
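To pick out bricks that are reported offline in a table like this, the output can be filtered directly. A rough sketch, assuming the default tabular format shown above (column order BRICK ID, HOST, PATH, ONLINE, PORT, PID, with ONLINE printed as true/false); the column layout may differ between releases:

```bash
# Print brick ID and host for any brick whose ONLINE column is not "true".
glustercli volume status pvc-16238f77-fc49-11e8-b05d-52540001ce00 \
  | awk -F'|' 'NF > 5 && $5 !~ /ONLINE/ { v=$5; gsub(/ /, "", v); if (v != "true") print $2, $3 }'
```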

[attacher.log](https://github.com/gluster/glusterd2/files/2675585/attacher.log) [provisioner.log](https://github.com/gluster/glusterd2/files/2675586/provisioner.log) Hitting this issue at 758 PVCs.

[glusterd2_758pvc.log](https://github.com/gluster/glusterd2/files/2675658/glusterd2_758pvc.log)

1) Started 1000 PVCs with a script, in sequential order (a minimal sketch of such a script is shown below).
2) At 758 PVCs creation got stuck and the rest were all in pending state. Attached the logs in the above comments.
3) Started to...
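For reference, this is only a sketch of the kind of sequential loop described in step 1, not the exact script that was run; the PVC name prefix, namespace, storage class, and size are assumptions:

```bash
#!/bin/bash
# Sketch: create PVCs one at a time, waiting for each to bind before creating the next.
# The storage class "glusterfs-csi", namespace "gcs", and 1Gi size are assumptions.
for i in $(seq 1 1000); do
  cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-${i}
  namespace: gcs
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: glusterfs-csi
  resources:
    requests:
      storage: 1Gi
EOF
  # Poll until the claim is Bound, so creation stays strictly sequential.
  until [ "$(kubectl get pvc test-pvc-${i} -n gcs -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
  done
done
```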

@atinmu I will try this scenario again tonight and update you by tomorrow's scrum. Last time we tried to scale, we were able to reach 716 PVCs.

Ran parallel PVC creation on a non-brick-mux environment. Below is the state of the cluster once the PVC count reached 548:

```
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME...
```
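For contrast with the sequential run above, this is a minimal sketch of what a parallel creation pass could look like; the manifest file names and the namespace are assumptions, not the actual test harness:

```bash
# Sketch: submit all claims without waiting for each to bind, then inspect the cluster.
# Assumes pre-generated manifests pvc-0001.yaml ... pvc-1000.yaml and the "gcs" namespace.
for f in pvc-*.yaml; do
  kubectl create -f "$f" &
done
wait   # let all background kubectl invocations return

# Pods that are not in the Running phase once creation settles.
kubectl get pods -n gcs --field-selector=status.phase!=Running
```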

@rishubhjain can you please log in to rhsqa-virt05.lab.eng.blr.redhat.com (root/GCS-karan/deploy)? I see the glusterd2 logs are 78 MB in size, which I won't be able to attach here.