multi-model-server

Issue: Memory Leak when serving multiple models

[Open] pratikluitel opened this issue 3 years ago · 4 comments

Description

I am encountering a memory leak when serving multiple MXNet models behind the same endpoint in multi-model-server.

I am using 2 Docker containers based on the multi-model-server Docker image, serving 4 models in each container. Here are the relevant parts of my docker-compose file: [screenshot of docker-compose file]
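Since the original screenshot is lost, here is a minimal sketch of what such a setup might look like. The service name, port, and model names (`model_a.mar` etc.) are placeholders, not taken from the original compose file:

```yaml
version: "3"
services:
  mms:
    # Official multi-model-server image on Docker Hub
    image: awsdeeplearningteam/multi-model-server
    ports:
      - "8080:8080"
    # Register four models behind the same server instance
    command: >
      multi-model-server --start
      --models model_a=model_a.mar model_b=model_b.mar
      model_c=model_c.mar model_d=model_d.mar
```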

The issue

There is a massive memory leak. One would expect memory to be freed after each inference, but usage keeps growing until multi-model-server stops.

This issue does not occur when I serve each model in its own container, one model per container, like so: [screenshot of single-model-per-container docker-compose file]

Only about 500 MB of memory is consumed per model in that case, and it does not increase across repeated inferences. But when serving multiple models in one container, each inference consumes additional memory that is never freed, and multi-model-server eventually crashes once it runs out of memory.
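One way to make the growth visible is to fire repeated inference requests at the endpoint while sampling the container's memory with `docker stats`. A rough sketch follows; the endpoint URL, container name, and input file are hypothetical stand-ins for whatever your setup actually uses:

```python
import re
import subprocess
import time
import urllib.request

# Hypothetical names -- substitute your own endpoint and container.
ENDPOINT = "http://localhost:8080/predictions/model_a"
CONTAINER = "mms_container_1"

_UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3}

def parse_mem(stats_field):
    """Parse the usage half of a `docker stats` MemUsage field
    (e.g. '512.3MiB / 7.5GiB') into bytes."""
    usage = stats_field.split("/")[0].strip()
    value, unit = re.fullmatch(r"([\d.]+)\s*(B|KiB|MiB|GiB)", usage).groups()
    return float(value) * _UNITS[unit]

def container_mem_bytes(name):
    """Sample current memory usage of one container via `docker stats`."""
    out = subprocess.check_output(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", name],
        text=True,
    )
    return parse_mem(out.strip())

if __name__ == "__main__":
    payload = open("sample_input.json", "rb").read()  # hypothetical sample input
    for i in range(100):
        urllib.request.urlopen(ENDPOINT, data=payload).read()
        print(f"after request {i}: {container_mem_bytes(CONTAINER)/2**20:.1f} MiB")
        time.sleep(0.5)
```

On a healthy server the printed figure should plateau after warm-up; a line that climbs monotonically with the request count is the leak described above.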

pratikluitel · May 09 '22

I have encountered this as well. +1

chinge55 · May 16 '22

In my case, even when I separated the models into multiple containers, the memory was still leaking.

chinge55 · May 16 '22

Update: this seems to be an issue with multi-model-server itself. There was no memory leak when serving the same models with a Flask server. Hope this gets fixed soon.
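The workaround above is one lightweight HTTP server per model. The comment used Flask; the sketch below substitutes the standard library's `http.server` purely so it is dependency-free, and the `predict` function is a placeholder where a real MXNet forward pass would go:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(payload):
    # Placeholder for a real MXNet forward pass on the raw request body.
    return {"input_len": len(payload)}

class PredictionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body and run it through the (placeholder) model.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        result = json.dumps(predict(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):
        pass  # suppress per-request logging

def serve(port=0):
    """Start the server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PredictionHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Running one such process per model mirrors the one-model-per-container setup that did not leak, at the cost of managing several processes and ports yourself.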

pratikluitel · May 30 '22

The issue still seems to be there. Is this tool still being maintained?

lrbsunday · Oct 20 '23