Prasad Desala

19 issues reported by Prasad Desala

### Observed behavior

With a single PVC (brick-mux disabled), reboot gluster-node-1; after the reboot, the brick process on gluster-node-1 is not running.

```
[root@gluster-kube1-0 /]# glustercli volume status
Volume :...
```

bug

### Observed behavior

Brick logs are being spammed with a continuous "0-epoll: Failed to dispatch handler" error:

[2019-01-02 09:58:49.149423] E [MSGID: 101191] [event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler

The above events...

### Observed behavior

I tried deleting 250 PVCs sequentially using the script below:

`for i in {1..250}; do kubectl -n gcs delete pvc pvc$i; done`

Below are the observations:

*....

### Observed behavior

Volume status shows the PID as 0 for a few volumes.

### Expected/desired behavior

The brick process should be running on that brick, and volume status should show the PID...

glusterfs memory usage increased from 74 MB to 6.8 GB while creating 200 PVCs.

Before:

```
PID  USER PR NI VIRT    RES   SHR S %CPU %MEM TIME+ COMMAND
1150 root 20 0  3637200 74560...
```
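One way to track this kind of growth over time (a minimal sketch, assuming a Linux /proc filesystem; the helper name `rss_kib` is hypothetical and not part of any Gluster tooling) is to read the process's VmRSS directly rather than eyeballing `top`:

```shell
#!/bin/sh
# Hypothetical helper: print the resident set size (VmRSS, in KiB) of a PID.
# Sampling this for the glusterfs PID before and after PVC creation would
# show the growth reported above (74 MB -> 6.8 GB).
rss_kib() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Example: RSS of the current shell; substitute the glusterfs brick PID.
rss_kib "$$"
```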

Bricks fail to connect to the volume after a gluster node reboot.

### Observed behavior

On a system with 102 PVCs and brick-mux enabled, I rebooted the gluster-kube1-0 pod. After some time...

bug
priority: high
brick-multiplexing-issue

### Observed behavior

sos report is not collecting the glusterd2, brick, or glustershd logs.

### Expected/desired behavior

sos report should collect all the gd2 logs necessary for debugging.

### Details on how...

FW: Logging
priority: medium

### Observed behavior

Stale socket fds are seen under the brick process with brick-mux enabled. If I restart the glusterd2 service, the socket count increases by a count of...
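One way to observe the leak (a sketch, assuming Linux with a readable /proc/&lt;pid&gt;/fd; the helper name `socket_fds` is hypothetical) is to count the socket fds held by the brick process before and after each glusterd2 restart:

```shell
#!/bin/sh
# Hypothetical helper: count open socket fds held by a PID by inspecting its
# /proc fd table. Comparing the count across glusterd2 restarts would show
# the increase described above.
socket_fds() {
    # grep -c prints 0 (and exits non-zero) when no sockets are open
    ls -l "/proc/$1/fd" 2>/dev/null | grep -c 'socket:' || true
}

# Example: socket fds of the current shell; substitute the brick PID.
socket_fds "$$"
```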

bug
priority: medium
brick-multiplexing-issue

### Observed behavior

On a brick-mux enabled setup, the old brick process is still running after volume reset -> stop -> start.

### Expected/desired behavior

The old brick process should not be...
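To verify, one can list the surviving brick daemons after the stop/start cycle (a sketch; `glusterfsd` is the standard Gluster brick process name, which may differ in a containerized GCS deployment):

```shell
#!/bin/sh
# Sketch: after `volume reset -> stop -> start`, any glusterfsd process whose
# start time predates the restart is a leftover. List brick daemons with
# their PIDs; an empty result means no brick process (old or new) is running.
pgrep -a glusterfsd || echo "no glusterfsd processes found"
```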

bug
priority: low
brick-multiplexing-issue

### Observed behavior

volume info shows the volume as started, but volume status shows it as offline.

```
[root@gluster-kube1-0 bricks]# glustercli volume status pvc-46967f93-0e6e-11e9-af0b-525400f94cb8
Volume : pvc-46967f93-0e6e-11e9-af0b-525400f94cb8
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+------+-----+
| BRICK ID...
```

bug
brick-multiplexing-issue