Volume status shows PID as 0 for a few volumes
Observed behavior
Volume status shows PID as 0 for a few volumes.
Expected/desired behavior
The brick process should be running for every brick, and volume status should show the PID of that brick process.
Details on how to reproduce (minimal and precise)
- Create a 3-node GCS setup using Vagrant.
- Create 500 PVCs.
- Check glustercli volume status for the resulting volumes (see the sketch after this list).
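For reference, the PVC-creation step can be scripted as below. This is a minimal sketch only: the StorageClass name glusterfs-csi and the PVC names test-pvc-N are illustrative, not taken from the setup above.

# Minimal sketch: create 500 PVCs against an assumed StorageClass "glusterfs-csi";
# PVC names "test-pvc-N" are likewise illustrative.
for i in $(seq 1 500); do
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-$i
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-csi
EOF
done

Once the PVCs are bound, running glustercli volume status against the resulting volumes is what surfaces the PID 0 entries shown below.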
Information about the environment:
- Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.114.gitd51f60b
- Operating system used: CentOS 7.6
- Glusterd2 compiled from sources, as a package (rpm/deb), or container:
- Using External ETCD: (yes/no, if yes ETCD version): yes
- If container, which container image:
- Using kubernetes, openshift, or direct install:
- If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: kubernetes
glusterd2 statedump: statedump_new_2.txt
One such output:
[root@gluster-kube1-0 /]# glustercli volume status pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e
Volume : pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
|               BRICK ID               |             HOST              |                                          PATH                                           | ONLINE | PORT  |  PID  |
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
| cec910f3-850c-449a-9937-d9d14f3253b5 | gluster-kube3-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e/subvol1/brick1/brick | true   | 33359 | 14717 |
| dc58d3df-7f55-4553-ae55-669a2bcfa7d0 | gluster-kube1-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e/subvol1/brick2/brick | false  |     0 |     0 |
| 01a9fcdc-8cef-40df-8447-5dad9a55f5bf | gluster-kube2-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e/subvol1/brick3/brick | false  |     0 |     0 |
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
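As an additional data point (this check is my assumption, not part of the original report): on a node whose brick reports PID 0 (gluster-kube1-0 here), the brick process, which runs as glusterfsd, can be searched for directly; no output would indicate that no brick process exists for this volume on that node.

[root@gluster-kube1-0 /]# pgrep -af glusterfsd | grep pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e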