
Memory limit

ihard opened this issue 8 years ago · 4 comments

I ran into a problem when a very large number of metrics is requested from disk. The carbonapi process tries to cache them; I can see the process's virtual memory grow to the maximum available, and then the process dies. I tried setting the memsize parameter to 10 and to 1000, but it has no effect: the process still tries to occupy all available memory. How can this be limited?

ihard avatar Mar 11 '17 08:03 ihard

Sounds like we need to not bother caching items that are larger than the total allowed cache size.

dgryski avatar Mar 11 '17 10:03 dgryski
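A minimal sketch of that check, assuming a hypothetical byte-budgeted cache (the type and method names here are made up for illustration, not carbonapi's actual cache code):

```go
package main

import (
	"fmt"
	"sync"
)

// sizeCappedCache is a minimal sketch, not carbonapi's real cache: it keeps a
// byte budget and simply refuses to store anything that would exceed it.
type sizeCappedCache struct {
	mu       sync.Mutex
	maxBytes int
	curBytes int
	items    map[string][]byte
}

func newSizeCappedCache(maxBytes int) *sizeCappedCache {
	return &sizeCappedCache{maxBytes: maxBytes, items: make(map[string][]byte)}
}

// set stores value only if it fits within the remaining budget; oversized
// responses are served to the client but never cached.
func (c *sizeCappedCache) set(key string, value []byte) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(value) > c.maxBytes || c.curBytes+len(value) > c.maxBytes {
		return false
	}
	c.items[key] = value
	c.curBytes += len(value)
	return true
}

func main() {
	c := newSizeCappedCache(100) // budget in bytes, analogous in spirit to -memsize
	ok := c.set("target=foo.*", make([]byte, 500))
	fmt.Println("cached:", ok) // false: a response larger than the whole budget is skipped
}
```

The point is simply that a single huge fetch can no longer grow the cache without bound; it is returned to the client but never stored.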

Command line: /usr/sbin/carbonapi -cpus 8 -graphite 127.0.0.1:2003 -i 1m -p 8082 -memsize 100 -z http://127.0.0.1:8084

Here are the pprof results:

go tool pprof /usr/sbin/carbonapi /tmp/memory.profile
Entering interactive mode (type "help" for commands)
(pprof) top10
22326.01MB of 22401.66MB total (99.66%)
Dropped 249 nodes (cum <= 112.01MB)
Showing top 10 nodes out of 18 (cum >= 232.53MB)
      flat  flat%   sum%        cum   cum%
22085.81MB 98.59% 98.59% 22085.81MB 98.59%  github.com/dgryski/carbonzipper/carbonzipperpb.(*FetchResponse).Unmarshal
  135.01MB   0.6% 99.19%   135.01MB   0.6%  github.com/dgryski/carbonzipper/carbonzipperpb.(*GlobMatch).Unmarshal
   55.84MB  0.25% 99.44%   190.85MB  0.85%  github.com/dgryski/carbonzipper/carbonzipperpb.(*GlobResponse).Unmarshal
   39.51MB  0.18% 99.62% 22162.11MB 98.93%  main.renderHandler.func1
    9.84MB 0.044% 99.66%   232.53MB  1.04%  main.renderHandler
         0     0% 99.66% 22085.81MB 98.59%  github.com/dgryski/carbonzipper/carbonzipperpb.(*MultiFetchResponse).Unmarshal
         0     0% 99.66%   232.53MB  1.04%  github.com/gorilla/handlers.(*combinedLoggingHandler).ServeHTTP
         0     0% 99.66%   232.53MB  1.04%  github.com/gorilla/handlers.(*cors).ServeHTTP
         0     0% 99.66%   232.53MB  1.04%  github.com/gorilla/handlers.CompressHandlerLevel.func1
         0     0% 99.66%   232.53MB  1.04%  github.com/gorilla/handlers.combinedLoggingHandler.ServeHTTP

ihard avatar Mar 11 '17 11:03 ihard

I think it's not really related to the cache, but rather to unmarshalling.

He's trying to fetch 90k metrics over a significant time range, and that makes Unmarshal allocate an insane amount of memory (at the moment we need roughly 10 bytes per point after unmarshalling).

Civil avatar Mar 11 '17 11:03 Civil
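A rough back-of-envelope check of that estimate, using the figures quoted above (the ~90k metric count and the ~10 bytes per point are taken from the comments, not measured here):

```go
package main

import "fmt"

func main() {
	// Figures quoted in the thread: ~22085.81 MB allocated inside Unmarshal,
	// ~90k metrics fetched, and an assumed cost of ~10 bytes per point.
	const (
		allocMB       = 22085.81
		metrics       = 90000
		bytesPerPoint = 10.0
	)
	points := allocMB * 1024 * 1024 / bytesPerPoint
	fmt.Printf("~%.1f billion points total\n", points/1e9)  // prints ~2.3
	fmt.Printf("~%.0f points per metric\n", points/metrics) // prints ~25732, i.e. roughly 18 days at 1-minute resolution
}
```

So the profile is consistent with the explanation: a multi-week fetch of 90k 1-minute metrics is simply billions of points held in memory at once after unmarshalling.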

We have experienced similar issues with large fetches in both carbonapi and the merge function in carbonzipper. Moving all of the serialization to a streaming format is basically the only way to avoid unpacking everything at once.

dgryski avatar Mar 25 '17 05:03 dgryski
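For context, the streaming idea amounts to decoding and merging one response at a time instead of unmarshalling a single giant MultiFetchResponse. A minimal sketch of that pattern, assuming a hypothetical 4-byte length-prefixed framing (this is not carbonzipper's actual wire format, and readFrames/handle are made-up names):

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readFrames consumes length-prefixed frames from r and hands each one to
// handle, so peak memory is bounded by the largest single frame instead of
// the whole response. The 4-byte big-endian length prefix is an assumption
// made for this sketch, not carbonzipper's real protocol.
func readFrames(r io.Reader, handle func(frame []byte) error) error {
	br := bufio.NewReader(r)
	var lenBuf [4]byte
	for {
		if _, err := io.ReadFull(br, lenBuf[:]); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		frame := make([]byte, binary.BigEndian.Uint32(lenBuf[:]))
		if _, err := io.ReadFull(br, frame); err != nil {
			return err
		}
		// In a real implementation each frame would hold one FetchResponse,
		// which gets unmarshalled, merged into the result, and then dropped.
		if err := handle(frame); err != nil {
			return err
		}
	}
}

func main() {
	// Build a toy stream of two frames to show the flow end to end.
	var buf bytes.Buffer
	for _, payload := range []string{"metric.one", "metric.two"} {
		_ = binary.Write(&buf, binary.BigEndian, uint32(len(payload)))
		buf.WriteString(payload)
	}
	_ = readFrames(&buf, func(frame []byte) error {
		fmt.Println("got frame:", string(frame))
		return nil
	})
}
```

The design choice is the same one hinted at in the comment above: keep only one metric's worth of data resident while merging, so the total fetch size no longer dictates peak memory.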