
Memory/leak issue

Open matthewmrichter opened this issue 6 years ago • 9 comments

We're using zetcd (running in a container, from the quay.io/repository/coreos/zetcd tag v0.0.5) as middleware between etcd and Mesos. We're seeing the memory usage of the zetcd process climb gradually but continuously. It quickly overflowed a 4 GB RAM instance, so we moved it to a host with 8 GB, but the zetcd container's memory usage still keeps growing.

I'd be interested in helping solve this. Is there anything I can provide to help expose the memory leak? Is there any garbage-collection tuning or anything like that that can be applied? Are there any Docker container launch parameters to contain its appetite for memory?

matthewmrichter avatar May 24 '18 15:05 matthewmrichter

Hmm, do you see the same behavior with v0.0.4?

ref. https://github.com/coreos/zetcd/compare/v0.0.4...v0.0.5

gyuho avatar May 24 '18 16:05 gyuho

Yep.

On Thu, May 24, 2018, 12:58 PM Gyuho Lee [email protected] wrote:

Hmm, do you see the same behavior with v0.0.4?


matthewmrichter avatar May 24 '18 17:05 matthewmrichter

It would be best if you could provide reproducible steps, and also try heap-profiling zetcd.

gyuho avatar May 24 '18 17:05 gyuho

I'm new to Go; could you provide some guidance on enabling heap profiling?

matthewmrichter avatar May 24 '18 17:05 matthewmrichter

@matthewmrichter Please enable profiling via the zetcd --pprof-addr flag.

And do something like

```
go tool pprof -seconds=30 http://zetcd-endpoint/debug/pprof/heap
```

```
go tool pprof ~/go/src/github.com/coreos/etcd/bin/etcd ./pprof/pprof.localhost\:2379.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
go tool pprof -pdf ~/go/src/github.com/coreos/etcd/bin/etcd ./pprof/pprof.localhost\:2379.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz > ~/a.pdf
```

Where you need to replace the */bin/etcd binary paths with the zetcd binary.

I would first try to reproduce without containerization.

gyuho avatar May 24 '18 18:05 gyuho

Great, I'll put some time into that. Thanks so far

matthewmrichter avatar May 24 '18 18:05 matthewmrichter

Ok, I think the main offender here may actually be Marathon (https://mesosphere.github.io/marathon/), not Mesos. The memory usage really shoots up when Marathon starts.

I converted zetcd to run as a service rather than in a container and took a heap profile shortly after startup. Memory already blasts up to 5 GB right after startup. I'll keep an eye on htop for a little while to see whether it approaches 7+ GB as well, and provide another profile.

a.pdf

Steps to reproduce:

  1. Build etcd cluster or endpoint (separate autoscaling group behind a load balancer)
  2. On mesosmaster server, install zetcd. Configure to point to remote etcd cluster load balancer as etcd endpoint.
  3. On mesosmaster server, install mesos, marathon. (marathon::version : '1.4.8-1.0.660.el7', mesos::version: '1.6.0-2.0.4')
  4. Configure mesos and marathon to run with localhost:2181 (zetcd port) as zookeeper URL

matthewmrichter avatar May 25 '18 18:05 matthewmrichter

I gave it a while, and the process according to htop had gotten up to 6 GB. Here's a second profile taken at this point; it looks mostly the same:

b.pdf

matthewmrichter avatar May 25 '18 20:05 matthewmrichter

Here's a question, based on the bottleneck being in that "ReadPacket" method:

Currently, I have etcd running on server A and marathon/zetcd on server B. Would it make more sense for zetcd and etcd to live on server A together rather than having zetcd reach out to etcd across the LAN?

matthewmrichter avatar May 29 '18 15:05 matthewmrichter