Gluster volume status output not consistent on gd2 pods after delete/reboot of a gd2 pod on a GCS setup
Sorry for the late reply; the old setup went into a bad state. I reproduced the above scenario again and pasted the logs below.

Logs:
```
time="2018-11-02 12:40:56.356381" level=info msg="10.233.64.1 - - [02/Nov/2018:12:40:56 +0000] \"GET /ping...
```
Providing the output one more time:
```
[vagrant@kube1 ~]$ kubectl -n gcs exec -it gluster-kube2-0 /bin/bash
[root@gluster-kube2-0 /]# glustercli volume status --endpoints=http://10.233.9.177:24007
No volumes found
[root@gluster-kube2-0 /]# glustercli volume status --endpoints=http://10.233.9.177:24007
No...
```
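To see whether the inconsistency is per-pod, the same query can be run from inside every gd2 pod and the outputs compared. A minimal sketch, assuming the pods follow the `gluster-kube<N>-0` naming seen above and that `glustercli` without `--endpoints` talks to the local glusterd2 in each pod:

```bash
#!/bin/bash
# Run "glustercli volume status" inside each gd2 pod and print the output
# side by side, so differing answers from different pods are easy to spot.
for pod in gluster-kube1-0 gluster-kube2-0 gluster-kube3-0; do
  echo "=== ${pod} ==="
  kubectl -n gcs exec "${pod}" -- glustercli volume status
done
```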
@atinmu We are currently working on the GCS setup; we haven't seen this issue on it so far.
Create a GCS setup with 16 vCPUs and 32 GB RAM for each kube node. Then try to create 1000 PVCs using a script (a sketch follows below), each PVC 1 GB in size.

Observation:
-> I have tried...
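For reference, a minimal sketch of such a PVC-creation script. The PVC name prefix `test-pvc-` and the storage class name `glusterfs-csi` are assumptions for illustration, not taken from this setup; substitute the storage class actually used by your CSI driver.

```bash
#!/bin/bash
# Create 1000 PVCs of 1Gi each in the gcs namespace.
# NOTE: "glusterfs-csi" is an assumed storage class name; replace it
# with the one configured for your GlusterFS CSI driver.
for i in $(seq 1 1000); do
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-${i}
  namespace: gcs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-csi
EOF
done
```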
Attached the CSI provisioner log above.
Sending the logs through mail, as the log file is too large.