
Steadily increasing memory usage

scjody opened this issue 8 years ago • 4 comments

After a 2-hour run of all our mocks except gl* and mapbox* using test/image-make_baseline.js (351 mocks, taking about 55 seconds per run) against image-exporter running as an imageserver in a Docker container, memory usage grew steadily:

https://plot.ly/%7EJodyMcintyre/2209/ ("new imageserver memory usage" chart)

When the run was stopped, memory usage decreased a bit then leveled out.

Examination of ps results showed that two Electron processes (probably the plot image and plot thumbnail processes) were responsible for the memory usage:

[screenshot: ps output, 2017-12-07 18:00]
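For future debugging, here is a minimal sketch of logging per-process memory from inside the app instead of watching ps from outside. It assumes a recent Electron where `app.getAppMetrics()` reports per-process memory; the sampling interval and log format are illustrative, not part of orca:

```js
// Hypothetical diagnostic sketch, not part of orca: periodically log the
// memory of every Electron process (main, plus renderers such as the plot
// image and plot thumbnail windows) so growth like this shows up in the logs.
// Assumes an Electron version whose ProcessMetric includes a `memory` field.
const { app } = require('electron')

const LOG_INTERVAL_MS = 60 * 1000 // illustrative sampling interval

app.whenReady().then(() => {
  setInterval(() => {
    for (const proc of app.getAppMetrics()) {
      const workingSetKb = proc.memory ? proc.memory.workingSetSize : undefined
      console.log(`[mem] pid=${proc.pid} type=${proc.type} workingSet=${workingSetKb} kB`)
    }
  }, LOG_INTERVAL_MS)
})
```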

A similar issue was observed in a 12-hour run of 3 imageservers in our staging environment:

[screenshot: staging imageserver memory usage, 2017-12-08 15:01]

As a workaround for this issue, we could restart Electron after a reasonably large number of requests (e.g. 1,000). I'm going to look into this, but a fix for the root cause is needed at some point.
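A minimal sketch of that workaround, assuming the export requests go through a single handler we can wrap; the names `MAX_REQUESTS`, `withRestartGuard`, and `handleExport` are illustrative, not orca's actual API:

```js
// Hypothetical sketch of the restart workaround: after MAX_REQUESTS exports,
// relaunch the Electron app so memory held by the renderer processes is released.
const { app } = require('electron')

const MAX_REQUESTS = 1000 // "reasonably large number of requests"
let requestCount = 0

// Wrap an existing request handler so the app relaunches after MAX_REQUESTS.
function withRestartGuard (handleExport) {
  return async (req, res) => {
    await handleExport(req, res)
    requestCount += 1
    if (requestCount >= MAX_REQUESTS) {
      // Relaunch only after the response has been handled, so no request is dropped.
      app.relaunch()
      app.exit(0)
    }
  }
}

module.exports = withRestartGuard
```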

@etpinard @monfera FYI

scjody • Dec 08 '17 20:12

Note: this issue was first noted when we tried to use the new imageservers for Plotly Cloud prod. The problems encountered are discussed starting here: https://github.com/plotly/streambed/issues/9865#issuecomment-349995119

scjody • Dec 08 '17 20:12

I've worked around this with #47. The underlying issue should be investigated at some point.

scjody • Dec 14 '17 17:12

I was able to reproduce the error by feeding sequential requests of a large JSON file (4.7 MB); every single time, the process crashes at the 233rd request. The larger the file, the quicker the process crashes.

  • It's not a resource issue, as no message appears in dmesg and there is no segfault.
  • I've tested with the current version of Electron and with 2.0.9; same result.

I've tested with the following file: success/ee151de8-d07e-4d19-a2b6-400a2292c609_200.json, from https://github.com/plotly/streambed/issues/9865#issuecomment-360823004

Command used to test: for i in {0..5000}; do curl -d @ee151de8-d07e-4d19-a2b6-400a2292c609_200.json http://localhost:9091/; echo $i; done

Capture of Orca's container when the 233rd request happens:

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %    NET I/O   BLOCK I/O    PIDS
12544ae95ac6   eager_banach   149.46%   2.447GiB / 4GiB     61.18%   0B / 0B   0B / 303kB   153

Error message:

<--- Last few GCs --->

[44:0x2542baa0000]   286865 ms: Mark-sweep 2047.7 (2087.0) -> 2047.7 (2087.0) MB, 1668.0 / 0.0 ms  allocation failure GC in old space requested
[44:0x2542baa0000]   288532 ms: Mark-sweep 2047.7 (2087.0) -> 2047.7 (2087.0) MB, 1667.2 / 0.0 ms  last resort
[44:0x2542baa0000]   290208 ms: Mark-sweep 2047.7 (2087.0) -> 2047.7 (2087.0) MB, 1675.5 / 0.0 ms  last resort

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x1783e7b2d681 <JSObject>
    0: builtin exit frame: parse(this=0x1783e7b13ec9 <JSON map = 0x38da26786249>,0x2a6c162ff499 <Very long string[1514304]>)
    2: /* anonymous */ [/var/www/image-exporter/src/app/server/create-server.js:144] [bytecode=0x27ecb9d52d21 offset=51](this=0x631dc09bb61 <JSGlobal Object>,err=0x363dedf82201 <null>,_body=0x2a6c162ff499 <Very long string[1514304]>)
    4: onEnd [/var/www/image-exporter/...
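For what it's worth, the "Mark-sweep 2047.7 (2087.0)" figures above suggest the JS heap is hitting V8's default old-space limit (roughly 2 GB on 64-bit builds) while JSON.parse handles the large request body. A sketch of raising that limit from the Electron main process follows; this is a mitigation to test, not a fix for the underlying growth, and the 4096 MB value is illustrative:

```js
// Hypothetical mitigation sketch: raise V8's old-space limit for the process
// that parses the request body in create-server.js. This only delays the
// crash if memory keeps growing; it does not address the underlying leak.
const { app } = require('electron')

// Must be called before the 'ready' event; 4096 MB is an illustrative value.
// Whether the flag also reaches renderer processes can depend on the Electron version.
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096')
```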

mag009 • Sep 13 '18 18:09

I would like to know if this issue still affects the latest docker images for Orca. cc @scjody

antoinerg • Jan 21 '20 20:01