
engine cleanup, non-deterministic memory usage and memory leaking

Open · Birne94 opened this issue on Sep 24, 2019 · 1 comment

I am currently building a snips-based NLP service that dynamically loads new models. In production I noticed a steady increase in memory usage each time a new model was loaded. Memory was measured using the memory-profiler module.

To Reproduce

from memory_profiler import profile

from snips_nlu import SnipsNLUEngine
from snips_nlu.common.utils import check_random_state


def create_engine():
    # Load a pre-trained engine from its serialized byte array
    with open('my-engine.zip', 'rb') as fs:
        return SnipsNLUEngine.from_byte_array(
            fs.read(), random_state=check_random_state(0)
        )


engines = {}


def load_engine(name):
    # Drop any previous engine registered under this name before replacing it
    if name in engines:
        del engines[name]
    engines[name] = create_engine()


@profile
def test():
    load_engine('test')


# Warm-up load, then profile three successive reloads
create_engine()

test()
test()
test()


Expected behavior

Since the old engine is explicitly removed from the collection, I would expect it to be garbage collected and the total memory consumption to stay constant.

Instead, memory grows steadily and in non-deterministic increments (profiler output for each of the three calls to test()):


Line #    Mem usage    Increment   Line Contents
================================================
    25     85.8 MiB                @profile
    26                             def test():
    27    126.8 MiB     41.1 MiB       load_engine('test')



Line #    Mem usage    Increment   Line Contents
================================================
    25    126.8 MiB                @profile
    26                             def test():
    27    144.9 MiB     18.1 MiB       load_engine('test')



Line #    Mem usage    Increment   Line Contents
================================================
    25    144.9 MiB                @profile
    26                             def test():
    27    174.1 MiB     29.2 MiB       load_engine('test')
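
For what it's worth, tracemalloc might help attribute the part of the growth that comes from Python-level allocations (a minimal sketch reusing load_engine from above; note that tracemalloc cannot see native memory allocated by C extensions, which the RSS figures above do include):

import tracemalloc

tracemalloc.start()

load_engine('test')
before = tracemalloc.take_snapshot()

load_engine('test')  # replaces the previous engine
after = tracemalloc.take_snapshot()

# Show the allocation sites that grew the most between the two loads
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)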

Environment:

  • OS: macOS High Sierra (10.13.5) in development
  • OS: Alpine Linux in production
  • Python version: 3.7
  • snips-nlu version: 0.20.1

Question:

At a quick glance I could not find any global state within snips-nlu that might still hold references to the engine. Did I miss something? Is there a recommended way of cleaning up an engine once it is no longer used?

Even calling gc.collect() explicitly after the old engine has been removed does not reclaim the leaked memory.

Why does the amount of memory used vary so much?
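
A weakref can at least confirm whether the engine object itself gets collected (a minimal sketch, assuming SnipsNLUEngine instances support weak references):

import gc
import weakref

load_engine('test')
ref = weakref.ref(engines['test'])

del engines['test']
gc.collect()

# None here means the engine object was collected, so any remaining
# RSS growth must come from elsewhere (caches, allocator fragmentation, ...)
print(ref())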

Birne94 · Sep 24, 2019

Hi @Birne94, thanks for reporting this. I am not sure the memory increase you are seeing is caused by the snips-nlu library directly. My guess, which would need further investigation, is that some dependencies use various caches that grow a bit.

I have tested that using the following script (which is almost exactly the same as yours):

import gc
from memory_profiler import profile

from snips_nlu import SnipsNLUEngine


def create_engine():
    return SnipsNLUEngine.from_path("nlu_engine", random_state=0)


engines = {}


def load_engine(name):
    engines[name] = create_engine()


@profile
def test():
    load_engine('test')
    engines.clear()
    gc.collect()


# Load and discard the engine repeatedly to see when memory stabilizes
for _ in range(100):
    test()

When running this, I eventually reach a point where memory consumption remains stable, which leads me to think that the initial increase is the result of caching.
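
If those caches are functools.lru_cache-based, one way to probe that hypothesis would be to clear them between runs and check whether the plateau is reached immediately (a sketch that relies on CPython's internal _lru_cache_wrapper type; which dependencies actually cache this way is only an assumption):

import functools
import gc


def clear_lru_caches():
    # Walk every object the GC tracks and reset any lru_cache-wrapped function
    for obj in gc.get_objects():
        if isinstance(obj, functools._lru_cache_wrapper):
            obj.cache_clear()

Calling clear_lru_caches() inside test(), right before gc.collect(), should then flatten the curve if caching is indeed the cause.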

adrienball · Sep 30, 2019