
OOM during node start, zero garbage allocations but still crashes

dmibor opened this issue on Mar 12, 2019 · 4 comments

For bugs, please provide the following:

What's going wrong?

Next chapter of yesterday's OOM story: https://github.com/pilosa/pilosa/issues/1893. A bit of background: I tried to add one more node to the cluster to temporarily increase the query load the cluster can handle. While the new node was joining, one of the old nodes OOMed while transferring shards to it (unfortunately I couldn't capture profiles of that). I tried to restart the shard transfer process several times without success; the old cluster nodes kept OOMing one after another.

Now I have an old cluster node that can't start after the shard transfer crash above - it OOMs every time. What is interesting about this case is that there seems to be no garbage (the inuse_space and alloc_space profiles below are nearly identical), and the memory pressure comes only from containers on the heap and from mmapped files:
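(Aside: to gauge how much of the resident memory is Go heap versus mmapped pages, one generic option is to compare runtime.MemStats against the RSS the OS reports for the process; a minimal sketch, nothing Pilosa-specific:)

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapInuse counts Go heap spans in use; pages from files the process
	// has mmapped itself do not show up here, so a large gap between these
	// numbers and the OS-reported RSS usually points at mmapped data.
	fmt.Printf("HeapInuse: %d MiB\n", m.HeapInuse>>20)
	fmt.Printf("HeapSys:   %d MiB\n", m.HeapSys>>20)
	fmt.Printf("Sys:       %d MiB\n", m.Sys>>20)
}
```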

(this is a 120 GB memory box)

$ go tool pprof /tmp/heap
File: pilosa
Type: inuse_space
Time: Mar 12, 2019 at 11:20am (+08)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top20
Showing nodes accounting for 86915.63MB, 99.76% of 87125.53MB total
Dropped 40 nodes (cum <= 435.63MB)
Showing top 20 nodes out of 31
      flat  flat%   sum%        cum   cum%
70103.85MB 80.46% 80.46% 70103.85MB 80.46%  github.com/pilosa/pilosa/roaring.NewContainer (inline)
16810.74MB 19.29% 99.76% 16810.74MB 19.29%  github.com/pilosa/pilosa/enterprise/b.glob..func1
    1.04MB 0.0012% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*view).openFragments
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Field).Open
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Field).Open.func1
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Field).openViews
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Holder).Open
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Index).Open
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Index).openFields
         0     0% 99.76% 87122.38MB   100%  github.com/pilosa/pilosa.(*Server).Open

$ go tool pprof -alloc_space /tmp/heap
File: pilosa
Type: alloc_space
Time: Mar 12, 2019 at 11:20am (+08)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top40
Showing nodes accounting for 86916.14MB, 99.66% of 87211.05MB total
Dropped 128 nodes (cum <= 436.06MB)
      flat  flat%   sum%        cum   cum%
70103.85MB 80.38% 80.38% 70103.85MB 80.38%  github.com/pilosa/pilosa/roaring.NewContainer (inline)
16810.74MB 19.28% 99.66% 16810.74MB 19.28%  github.com/pilosa/pilosa/enterprise/b.glob..func1
    1.55MB 0.0018% 99.66% 87129.97MB 99.91%  github.com/pilosa/pilosa.(*view).openFragments
         0     0% 99.66% 87129.97MB 99.91%  github.com/pilosa/pilosa.(*Field).Open
         0     0% 99.66% 87129.97MB 99.91%  github.com/pilosa/pilosa.(*Field).Open.func1
         0     0% 99.66% 87129.97MB 99.91%  github.com/pilosa/pilosa.(*Field).openViews

Memory right before the OOM looks like this: oom

Profiles taken at the same time: profiles.gz
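A rough back-of-the-envelope check on the inuse profile above (assuming a Container header of roughly 80 bytes) suggests the ~70 GB flat in roaring.NewContainer corresponds to on the order of 900 million containers:

```go
package main

import "fmt"

func main() {
	// The flat MB figure is taken from the inuse_space profile above; the
	// 80-byte per-Container size is an assumption, not a measured value.
	const flatMB = 70103.85        // flat MB in roaring.NewContainer
	const bytesPerContainer = 80.0 // assumed Container struct size
	containers := flatMB * 1024 * 1024 / bytesPerContainer
	fmt.Printf("approx. containers: %.0f million\n", containers/1e6) // ~919 million
}
```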

What was expected?

node can start

Steps to reproduce the behavior

dmibor · Mar 12 '19, 04:03

So it looks like you have about 919 million containers. That's... a heck of a thing.

I'm looking into reducing the size of the Container structure right now. I don't have an ETA yet, sorry, but if you don't mind crazy experimental code, I might have something next week. (Approximate scale: If my mad science works, we can reduce it from 80 bytes per Container to 24 or possibly 16.)

seebs · Mar 15 '19, 20:03
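
One plausible way (purely hypothetical, not the actual Pilosa implementation) to get a roaring container header from ~80 bytes down to ~24 on a 64-bit machine is to collapse the per-representation slice headers into a single pointer plus a type tag and explicit 32-bit lengths. A sketch, which also shows the aggregate effect at ~919 million containers:

```go
package main

import (
	"fmt"
	"unsafe"
)

type interval16 struct{ start, last uint16 }

// wideContainer mimics an ~80-byte header: three slice headers
// (24 bytes each on 64-bit) plus cardinality and a flag.
type wideContainer struct {
	array  []uint16     // sorted-array representation
	bitmap []uint64     // bitmap representation
	runs   []interval16 // run-length representation
	n      int32        // cached cardinality
	mapped bool         // backed by an mmapped file?
}

// packedContainer collapses the three slices into one raw pointer plus
// a type tag and 32-bit length/capacity, landing at 24 bytes.
type packedContainer struct {
	data unsafe.Pointer // array, bitmap, or run data, depending on typ
	len  int32
	cap  int32
	n    int32
	typ  uint8 // e.g. 1=array, 2=bitmap, 3=runs
}

func main() {
	const containers = 919e6
	wide, packed := unsafe.Sizeof(wideContainer{}), unsafe.Sizeof(packedContainer{})
	fmt.Printf("wide:   %d bytes/container, ~%.0f GiB total\n",
		wide, containers*float64(wide)/(1<<30))
	fmt.Printf("packed: %d bytes/container, ~%.0f GiB total\n",
		packed, containers*float64(packed)/(1<<30))
}
```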

> So it looks like you have about 919 million containers. That's... a heck of a thing.

Yeah, it's quite big - about 5000 shards and about 2 TB of data in roaring bitmaps. It might be a new scale milestone for Pilosa to handle; looking forward to your container reduction ideas @seebs!

dmibor · Mar 18 '19, 11:03

Should be fixed by the containers work - let us know.

jaffee · Apr 09 '19, 15:04

@jaffee we've recently loaded our dataset into Pilosa with the new container implementation - it looks promising at first glance!

I want to gather more info and post it after we've done the major part of our tests.

dmibor · Apr 15 '19, 05:04