
Static memory used by engines

sahil1105 opened this issue 2 years ago · 2 comments

It seems like the engines use a lot of "static" memory, i.e. when we start the engines, they take up noticeable memory even before anything has been executed. In our testing, we started 16 engines across 2 nodes, and as soon as we did, the memory usage on each node went from ~150MB to ~1.2GB, i.e. 8 engines consume ~1.05GB of static memory. This covers both the engine and nanny processes, but based on htop, almost all of it comes from the engine process. ~134MB of static memory per engine seems a bit high. Is there a way we could reduce this?
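For reference, here is a quick sketch of one way to measure per-engine resident memory from the client. It assumes `psutil` is installed on the engines; the MB conversion and the use of `apply_sync` are just illustrative choices:

```python
# Sketch: measure resident memory of each engine process from the client.
# Assumes psutil is installed on the engines.
import ipyparallel as ipp

rc = ipp.Client()

def engine_rss_mb():
    import os
    import psutil
    # Resident set size (RSS) of this engine's process, in MB
    return psutil.Process(os.getpid()).memory_info().rss / 1e6

# Run the function on every engine and collect one result per engine
print(rc[:].apply_sync(engine_rss_mb))
```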

sahil1105 · Apr 19 '22

I imagine this is mostly attributable to imports. I can run some import profiling to see if there are any that can be delayed to avoid unnecessary usage.
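For example (a sketch, not something run in this thread): `python -X importtime -c "import ipyparallel"` shows where import *time* goes, and `tracemalloc` can roughly attribute Python-level allocations during import to the files that made them:

```python
# Rough sketch: attribute Python-level allocations during import to source files.
# tracemalloc only sees Python allocations, not total RSS, so treat the numbers
# as a guide. Run in a fresh interpreter so the import isn't already cached.
import tracemalloc

tracemalloc.start()
import ipyparallel  # noqa: F401 -- the import under investigation

snapshot = tracemalloc.take_snapshot()
# Group allocations by source file and show the top offenders
for stat in snapshot.statistics("filename")[:10]:
    print(stat)
```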

I suspect the biggest source is imports in the top-level ipyparallel.__init__ that may not be used, and we can try to optimize those out.
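One common pattern for deferring such imports is PEP 562's module-level `__getattr__` (Python 3.7+). A hypothetical sketch for a package `__init__.py`; the submodule names are illustrative, not ipyparallel's actual layout:

```python
# Hypothetical __init__.py sketch: defer heavy submodule imports until first
# use via PEP 562 module-level __getattr__. Submodule names are illustrative.
import importlib

_lazy_submodules = {"client", "cluster"}

def __getattr__(name):
    if name in _lazy_submodules:
        # Import the submodule only on first attribute access
        module = importlib.import_module(f".{name}", __name__)
        globals()[name] = module  # cache so __getattr__ isn't hit again
        return module
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```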

Instantiating IPython itself uses quite a bit, so I'm not sure how much that can be brought down without IPython's help, short of turning off some features.
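A rough way to gauge IPython's share (again assuming `psutil` is available; RSS deltas are coarse, so run this in a fresh interpreter):

```python
# Rough sketch: RSS delta from importing and instantiating IPython's shell.
# Assumes psutil; run in a fresh interpreter for a clean baseline.
import os
import psutil

proc = psutil.Process(os.getpid())
before = proc.memory_info().rss

from IPython.core.interactiveshell import InteractiveShell
shell = InteractiveShell.instance()

after = proc.memory_info().rss
print(f"InteractiveShell added roughly {(after - before) / 1e6:.0f} MB RSS")
```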

It may be worth raising IPython's own memory usage on IPython's issue tracker.

minrk · Apr 21 '22

Thanks @minrk, that makes sense.

sahil1105 · Apr 23 '22