FunASR
memory leak in funasr-runtime-sdk-cpu-0.4.4
Notice: In order to resolve issues more efficiently, please raise issues following the template.
🐛 Bug
- Use docker Version: registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.4
- Run /workspace/FunASR/runtime/run_server.sh as described in https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline_zh.md#python-client.
- Execute python3.8 funasr_wss_client.py --host 127.0.0.1 --port 10095 --ssl 1 --mode offline --audio_in "xxx.mp4", where "xxx.mp4" is a long video of more than 30 minutes. After this, the funasr-wss-server process's memory usage increases with each audio recognition, and the increase seems to be related to the size of the audio (see the sketch after this list).
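To put numbers on the size observation above, a rough sketch like the following can compare the server's resident-memory growth across audio files of different durations. This is only a measurement harness under assumptions: the file names in TEST_FILES are hypothetical placeholders, the server process matches the name funasr-wss-server, and the client flags are the documented ones used above.

```python
import subprocess

# Hypothetical placeholder files of increasing duration; substitute any local audio.
TEST_FILES = ["short_5min.wav", "medium_15min.wav", "long_35min.wav"]

def server_rss_kb() -> int:
    """Resident memory (VmRSS, kB) of the funasr-wss-server process, read from /proc."""
    pid = subprocess.check_output(["pgrep", "-f", "funasr-wss-server"], text=True).split()[0]
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

for audio in TEST_FILES:
    before = server_rss_kb()
    # Same offline websocket client invocation as in the reproduction steps below.
    subprocess.run(
        ["python3.8", "funasr_wss_client.py", "--host", "127.0.0.1",
         "--port", "10095", "--ssl", "1", "--mode", "offline",
         "--audio_in", audio],
        check=True,
        cwd="/workspace/FunASR/runtime/python/websocket",
    )
    print(f"{audio}: server RSS grew by {server_rss_kb() - before} kB")
```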
To Reproduce
Steps to reproduce the behavior (always include the command you ran):
- Run the commands in my macOS terminal:
sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.4
mkdir -p ./funasr-runtime-resources/models
sudo docker run -p 10095:10095 -it --privileged=true -v $PWD/funasr-runtime-resources/models:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.4
- Enter the container from the terminal:
docker exec -it containerId sh
- Launch the server:
cd /workspace/FunASR/runtime
nohup bash run_server.sh > log.txt 2>&1 &
- Query via the Python client. "xxx.mp4" is a long video, more than 30 minutes in duration:
cd /workspace/FunASR/runtime/python/websocket/
python3.8 funasr_wss_client.py --host 127.0.0.1 --port 10095 --ssl 1 --mode offline --audio_in "xxx.mp4"
The funasr-wss-server process's memory usage increases with each audio recognition. Using a "xxx.wav" audio file of over 30 minutes gives a similar result. You can watch the memory usage with the top command.
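As an alternative to watching top by hand, a small sketch along these lines can log the server's resident memory after each recognition of the same file. The helper rss_kb and the five-run loop are my own scaffolding; only pgrep, ps, and the client flags shown above are assumed.

```python
import subprocess
import time

# Same client command as in the step above; "xxx.mp4" is the placeholder long video.
CLIENT = ["python3.8", "funasr_wss_client.py", "--host", "127.0.0.1",
          "--port", "10095", "--ssl", "1", "--mode", "offline",
          "--audio_in", "xxx.mp4"]

# First PID matching the server name; adjust if run_server.sh spawns several processes.
pid = subprocess.check_output(["pgrep", "-f", "funasr-wss-server"], text=True).split()[0]

def rss_kb() -> int:
    # Resident set size in kB, the same figure top shows in its RES column.
    return int(subprocess.check_output(["ps", "-o", "rss=", "-p", pid], text=True))

print("baseline:", rss_kb(), "kB")
for i in range(5):
    subprocess.run(CLIENT, check=True, cwd="/workspace/FunASR/runtime/python/websocket")
    time.sleep(5)  # let the server settle before sampling
    print(f"after recognition {i + 1}: {rss_kb()} kB")
```

If the printed values climb by roughly the same amount on every run and never fall back, that matches the leak described here.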
Code sample
No
Expected behavior
Memory used by funasr-wss-server should be released after each recognition instead of accumulating; the memory leak in the FunASR Offline File Transcription Software Package should be fixed.
Environment
- OS (e.g., Linux): macOS
- FunASR Version (e.g., 1.0.0): 1.0.13, also have tried 1.0.25
- ModelScope Version (e.g., 1.11.0): 1.12.0
- PyTorch Version (e.g., 2.0.0): 2.2.1
- How you installed funasr (pip, source): installed in the official image
- Python version: 3.8.18
- GPU (e.g., V100M32): use cpu
- CUDA/cuDNN version (e.g., cuda11.7): use cpu
- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1): funasr-runtime-sdk-cpu-0.4.4
- Any other relevant information: No
Additional context
No
We're having this on GPU as well. Memory keeps increasing after each inference.
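For the GPU case, one way to check whether the growth is in GPU memory rather than host RAM (which the /proc- and ps-based checks above would catch) is to poll per-process usage with nvidia-smi between inferences. The query flags below are standard nvidia-smi options, not anything FunASR-specific:

```python
import subprocess

# Per-process GPU memory as reported by nvidia-smi; run between inferences and
# watch whether the server process's used_memory keeps growing.
print(subprocess.check_output(
    ["nvidia-smi",
     "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv,noheader"],
    text=True,
).strip())
```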