Memory Leak observed in text_to_speech model demo sample
Summary
We were trying to use the OpenVINO text_to_speech demo sample as an inference server that is fed input continuously. We observed that memory consumption grows with each input text, eventually causing a segmentation fault that crashes the machine. After debugging, we found that the inference network object stays in scope after each request and thereby accumulates as garbage.
Question
Is there an optimized way to clear the inference network before passing the next input, so that the demo fits our use case?
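
For illustration, something along these lines is what we mean by "clearing" between inputs; this is a hypothetical sketch, not the demo's code, and the `infer_once` helper, the explicit `del`, and the `gc.collect()` call are our assumptions:

```python
import gc

# Hypothetical per-request teardown using the legacy Inference Engine API.
# ie: IECore instance; net: IENetwork read from IR; inputs: dict of
# input name -> numpy array (names assumed for illustration).
def infer_once(ie, net, inputs):
    exec_net = ie.load_network(network=net, device_name="CPU")
    result = exec_net.infer(inputs)
    del exec_net  # drop the Python reference to the loaded network
    gc.collect()  # force collection so the object does not linger as garbage
    return result
```

Reloading the network per request like this is obviously costly; the question is whether there is a supported, cheaper way to release the inference network between inputs.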
Simple way to reproduce it
Feed input continuously in a for-loop, as in the sketch below.
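
A minimal standalone sketch of the pattern, using the legacy `openvino.inference_engine` API; the single `model.xml`/`model.bin` pair and the zero-filled input are placeholders (the real demo loads several TTS networks and feeds text):

```python
import os
import numpy as np
import psutil  # only used here to watch memory growth
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder model paths; substitute the demo's IR files.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape

proc = psutil.Process(os.getpid())
for i in range(1000):  # continuous input in a for-loop
    dummy = np.zeros(shape, dtype=np.float32)
    exec_net.infer({input_name: dummy})
    if i % 100 == 0:
        # Resident memory keeps climbing from iteration to iteration.
        print(f"iter {i}: RSS = {proc.memory_info().rss / 2**20:.1f} MiB")
```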