Atin Mukherjee
From the attached log:

atin@dhcp35-96:~/Downloads$ grep -irns "volume started" glusterd2_port.log
770:time="2018-12-07 07:32:19.767744" level=info msg="volume started" reqid=4eeb98bc-351c-41cb-af67-224dbf9fd757 source="[volume-start.go:176:volumes.volumeStartHandler]" volume-name=pvc-38f2c390-f9f2-11e8-9e9c-52540044da85
771:time="2018-12-07 07:32:19.777919" level=info msg="volume started" reqid=24a30046-61dd-4b7d-b977-dd4466ee875b source="[volume-start.go:176:volumes.volumeStartHandler]" volume-name=pvc-395bce12-f9f2-11e8-9e9c-52540044da85
798:time="2018-12-07 07:32:22.441129" level=info msg="volume...
If you still have the setup, can you please check whether pid 17355 is still running on gluster-kube2-0.glusterd2.gcs? Also, can you take a glusterd statedump and share the output?
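As a rough sketch of the two checks requested above: `kill -0` tests process existence without sending a signal, and glusterd (v1) writes a statedump under /var/run/gluster/ when it receives SIGUSR1 (glusterd2 may expose this differently, e.g. via a REST endpoint, so treat the SIGUSR1 part as an assumption here). The pid 17355 is the one from this report; substitute your own.

```shell
# Return success if the given pid exists (signal 0 performs only
# the existence/permission check, it does not deliver a signal).
is_running() {
    kill -0 "$1" 2>/dev/null
}

# pid 17355 is from this report; substitute the pid you are checking.
if is_running 17355; then
    echo "pid 17355 is running"
else
    echo "pid 17355 is not running"
fi

# Assumed glusterd (v1) statedump mechanism: SIGUSR1 makes the
# process dump its state under /var/run/gluster/ by default.
# kill -USR1 "$(pidof glusterd)"
```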
> I am having trouble understanding how a list might help. The portmapper we use currently is itself a kind of in-memory multi-list. portmapper is an in-memory data structure,...
Taking this out of the GCS/1.0 tag, since we're not going to make brick multiplexing the default option in the GCS/1.0 release.
I can't assign this issue to rmadaka, as he doesn't seem to be part of the gluster.org organization on GitHub. Pinged @kshlm about it.
Not a blocker for GCS/1.0 based on the revised MVP.
@aravindavk / @rishubhjain - can one of you please check this?
We haven't seen this in multiple iterations of our recent scale-testing environment.
I believe this is already fixed in the latest GCS master (or GCS 0.6).
I'm having the same issue. @biaji, what do you mean by 'downgrade node to 8'? Do you mean downgrading the Node.js version to 8?