Stale socket fds under brick process
Observed behavior
Stale socket fds are seen under the brick process with brick-mux enabled. If I restart the glusterd2 service, the socket count increases by the number of volumes.
Right now I have 100 PVCs on my system and I am seeing 419 fds under the brick process on one node:

    [root@gluster-kube1-0 11720]# ll /proc/$(pgrep glusterfsd)/fd | wc -l
    419
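For reference, a one-liner to count only the socket fds (rather than all fds) of the brick process. This is just an illustrative sketch; it assumes a single brick-mux glusterfsd process per node:

```sh
# Count only socket fds of the (single) brick-mux glusterfsd process.
# Assumes exactly one glusterfsd process per node; adjust if there are more.
ls -l /proc/"$(pgrep glusterfsd)"/fd | grep -c socket
```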
Expected/desired behavior
Stale socket fds should be cleaned up.
Details on how to reproduce (minimal and precise)
- Create a 3-node GCS cluster using vagrant.
- Create 100 PVCs.
- Enable brick-mux.
- Stop and start the volumes.
- Check the brick process socket fds using `ll /proc/$(pgrep glusterfsd)/fd` (a scripted version of this check is sketched after this list).
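A rough sketch of the last two steps as a script: it cycles the volumes passed on the command line and prints the socket fd count after each cycle. This is only illustrative; it assumes the glusterd2 CLI (`glustercli`) with `volume stop` and `volume start` subcommands and a single brick-mux glusterfsd process per node.

```sh
#!/bin/bash
# Illustrative repro helper (sketch, not part of the report):
# cycle the given volumes and watch the socket fd count of the
# single brick-mux glusterfsd process.
# Usage: ./cycle-and-count.sh <volname> [<volname> ...]

brick_pid=$(pgrep glusterfsd)

count_sockets() {
    ls -l "/proc/${brick_pid}/fd" | grep -c socket
}

echo "socket fds before: $(count_sockets)"

for vol in "$@"; do
    glustercli volume stop "$vol"
    glustercli volume start "$vol"
    echo "after cycling ${vol}: $(count_sockets) socket fds"
done
```

If the leak reproduces, the printed count should keep growing after each stop/start cycle instead of returning to its original value.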
Information about the environment:
- Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.94.git601ba61
- Operating system used: CentOS 7.6
- Glusterd2 compiled from sources, as a package (rpm/deb), or container:
- Using External ETCD: (yes/no, if yes ETCD version): yes; version 3.3.8
- If container, which container image:
- Using kubernetes, openshift, or direct install:
- If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: Kubernetes
Attached logs:
- var-run-glusterd2-bricks-pvc-381a5faf-0e6e-11e9-af0b-525400f94cb8-subvol1-brick3-brick.log.gz
- var-run-glusterd2-bricks-pvc-381a5faf-0e6e-11e9-af0b-525400f94cb8-subvol1-brick2-brick.log.gz
- var-run-glusterd2-bricks-pvc-381a5faf-0e6e-11e9-af0b-525400f94cb8-subvol1-brick1-brick.log.gz
- kube3-glusterd2.log.gz
- kube2-glusterd2.log.gz
- kube1-glusterd2.log.gz