FunASR
Docker deployment: memory is not released after continuous recognition
OS: Linux (CentOS Linux release 7.8.2003 (Core))
Python/C++ Version: Python 3.8.18
Package Versions: pytorch-wpe (0.0.1), torchaudio (2.1.0), modelscope (1.9.2), funasr (0.8.0)
Hardware: 16-core vCPU, 32 GB memory
Models: speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx, punc_ct-transformer_zh-cn-common-vocab272727-onnx, speech_fsmn_vad_zh-cn-16k-common-onnx
Command:
nohup bash run_server.sh \
  --download-model-dir /workspace/models \
  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --model-dir damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx \
  --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx \
  --certfile 0 \
  --decoder-thread-num 16 \
  --io-thread-num 2 \
  --port 10095 \
  --itn-dir thuduj12/fst_itn_zh > log.out 2>&1 &
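After launching, a quick sanity check confirms the server came up (a minimal sketch; ss is available on CentOS 7 via iproute, and the log path follows from the nohup redirection above):

# follow the server log written by the redirection in the launch command
tail -f log.out

# confirm something is listening on the configured websocket port
ss -lntp | grep 10095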
Details: Docker deployment, using a Java client for continuous offline speech recognition. Memory is not fully released between requests, which eventually causes an out-of-memory condition and a service restart.
Audio durations tested: 15 minutes, 30 minutes, 1.5 hours, and 4 hours, all at a 16000 Hz sampling rate.
With any audio, continuous recognition eventually exhausts memory.
Memory usage starts at 12.2% (screenshot).
After the four recognitions above complete, memory reaches 40% (screenshot).
Continuing recognition leads to memory exhaustion (screenshot).
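The growth can also be logged from the host instead of screenshots, by sampling the container while the client runs (a minimal sketch; the container name funasr is an assumption, substitute your own):

# sample the container's memory usage once per minute during recognition
while true; do
  docker stats --no-stream --format "{{.Name}} {{.MemUsage}} {{.MemPerc}}" funasr
  sleep 60
done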
Docker image: registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.3.0
I encountered the same problem. Did you solve it?
Would you please test it with the cpp client? We are not very familiar with Java.
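For reference, an offline request with the bundled cpp client can look like this (a sketch based on the runtime docs; the binary location and the wav path are placeholders, and --is-ssl 0 matches the --certfile 0 server setting above, so verify the flags against your SDK version):

# send one wav file to the server for offline recognition
./funasr-wss-client --server-ip 127.0.0.1 --port 10095 \
  --wav-path /workspace/audio/long_audio_16k.wav --is-ssl 0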
Thanks for the answer. According to my tests, it is not a client-side problem: with the cpp, html, and python clients the memory also grows continuously and is not released. Please test whether this bug exists on the server.
Try the newest version, funasr-runtime-sdk-cpu-0.4.3. If the issue persists, please reopen the issue and provide detailed reproduction steps along with server and client logs.
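A minimal upgrade sketch (the host model directory and container name are assumptions; adapt them to your deployment):

# pull the newer runtime image
docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.3

# replace the old container, reusing the already downloaded models via a bind mount
docker rm -f funasr
docker run -p 10095:10095 -it --name funasr \
  -v /workspace/models:/workspace/models \
  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.3
# then, inside the container, start the server with the same
# run_server.sh flags shown at the top of this issue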