SRS 5 and SRS 6 have a high memory usage bug that is not present in version 4.
Note: Please read the FAQ before filing an issue, see #2716
Description
Please describe your issue here
-
SRS Version: docker 5 24b631a4ec00 (latest branch 5 image available)
-
SRS Log:
-
SRS Config:
srs ./conf/rtmp.conf
Replay
Please describe how to replay the bug?
Step 1: run ossrs with two RTMP inputs
RAM usage keeps incrementing until the host runs out of memory.
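For context, the two RTMP inputs can be fed with ffmpeg along these lines (the source files and stream keys below are placeholders, not the exact inputs used in this report):

# Push two looping RTMP inputs to the SRS container (paths/keys are examples).
ffmpeg -re -stream_loop -1 -i input1.mp4 -c copy -f flv rtmp://localhost/live/stream1 &
ffmpeg -re -stream_loop -1 -i input2.mp4 -c copy -f flv rtmp://localhost/live/stream2 &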
Expect
This scenario is not reproducible on ossrs version 4, where RAM usage always stays below 20 MB.
Thank you for your support!
In the steps you described, you didn't perform any actions, yet the memory usage keeps incrementing?
Simply leaving the container running keeps incrementing the memory usage. After re-instantiating the container with version 4, the memory usage returned to normal.
If needed, I can get version 5 running again and collect logs. (Before switching to version 4, I was not logging to the journal.)
It is necessary to verify and conduct some investigation regarding this bug.
Let me know if you need additional tests/information.
20MB does not indicate a memory leak; it simply demonstrates that SRS 5.0 utilizes more base memory than SRS 4.0, as the base memory consists of preallocated objects.
If the memory usage continues to increase after running for 24 or 72 hours, that would be indicative of a memory leak.
Please attempt to run SRS for an extended period, such as a week or a month, and employ Prometheus to record and plot the memory usage graph.
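As a sketch (the port and environment variable below follow the SRS exporter documentation; adjust them to your deployment), enabling the exporter in the Docker setup looks roughly like this:

# Enable the SRS Prometheus exporter (9972 is the documented default port).
docker run --rm --name=ossrs -p 1935:1935 -p 1985:1985 -p 8080:8080 -p 9972:9972 \
  -e SRS_EXPORTER_ENABLED=on ossrs/srs:5 ./objs/srs -c conf/rtmp.conf
# Quick check that metrics are being exposed:
curl http://localhost:9972/metrics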
Hi,
I just restarted the container using SRS 5. I'll let you know the result by the end of the week.
Hi, after 30 minutes running, the usage is as follows:
CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT     MEM %    NET I/O           BLOCK I/O     PIDS
83fb5d530ad1   ossrs   1.44%   566.1MiB / 3.576GiB   15.46%   3.02GB / 21.1MB   0B / 8.19kB   2
I will report back in a few days.
Unfortunately I wasn't able to provide stats last week. Below is the output of docker stats:
CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT     MEM %    NET I/O         BLOCK I/O     PIDS
83fb5d530ad1   ossrs   1.83%   969.3MiB / 3.576GiB   26.47%   904GB / 137GB   0B / 8.19kB   2
It is possible to observe memory usage rising.
I've tested my deployment from version 4 to 6, and the memory leak is still present.
After 5 minutes, docker stats shows the information below:
CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT     MEM %    NET I/O         BLOCK I/O        PIDS
d37abeebfdd5   ossrs   2.59%   349.6MiB / 3.576GiB   9.55%    462MB / 215MB   221kB / 8.19kB   2
While docker stats shows that information, systemctl status ossrs.service shows the following:
● ossrs.service - ossrs container
Loaded: loaded (/etc/systemd/system/ossrs.service; enabled; preset: disabled)
Active: active (running) since Mon 2023-08-21 11:20:51 WEST; 5min ago
Main PID: 12110 (docker)
Tasks: 7 (limit: 23180)
Memory: 18.4M
CPU: 68ms
CGroup: /system.slice/ossrs.service
└─12110 /usr/bin/docker run --rm --name=ossrs -p 1935:1935 -p 1985:1985 -p 8080:8080 -e TZ=Europe/Lisbon ossrs/srs:6 ./objs/srs -c conf/rtmp.conf
Judging by the unit's reported memory usage, everything seems perfectly normal. Could this be a Docker issue?
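One way to narrow this down (a sketch, assuming srs runs as PID 1 inside the container and basic tools are available in the image) is to compare the RSS of the srs process itself with the cgroup-level figure that docker reports, since the systemd unit above only tracks the /usr/bin/docker client process, not the container:

# RSS of the SRS process itself (assumes srs is PID 1 in the container):
docker exec ossrs grep VmRSS /proc/1/status
# Cgroup-level figure reported by docker:
docker stats --no-stream ossrs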
Please use the Prometheus Exporter to monitor the memory status of SRS. Could you provide a graph showing the changes in SRS memory over a period of more than 12 hours? You can refer to this link for guidance: https://ossrs.io/lts/en-us/docs/v5/doc/exporter
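For reference, a minimal Prometheus scrape job for the exporter could look like the following (host and port are placeholders; see the linked doc for the authoritative setup):

# Hypothetical minimal prometheus.yml for scraping the SRS exporter.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: srs
    static_configs:
      - targets: ['<srs-host>:9972']
EOF
docker run --rm -p 9090:9090 -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus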
This issue will be removed due to the unclear description.