WebRTC
High CPU Use
Hi,
Since I started using the addon, I've observed a massive and increasing CPU usage. It starts at a certain time each morning and keeps rising until I restart Home Assistant. Then everything is fine until the next day.
Here you can see the rtsp2webrtc_v4_ CPU usage just before the restart.
Do you have any idea about this?
Same here.
This usually happens on Celeron processors; I don't know why. On a Raspberry Pi 3 the CPU load is 10% with 4 cameras.
Mine is an i3 processor. The load starts low with 4 cameras, but after watching the image for a while it begins to increase. Then it stays high until Home Assistant restarts.
I have the same problem with an Intel Xeon E-2144G processor (Home Assistant on a Proxmox VM).
I run Hass.io on 2 RPis (1 Raspberry Pi 3B and 1 Raspberry Pi 4B); both give a CPU load of around 95% for 5 cameras.
Hi!
The same thing on an i3 processor.
After restarting HA, it uses around 14-17% CPU.
After some time and after accessing the cameras, it increases up to 60%. I have 4 cameras in the integration.
If you need more technical info (logs, htop, etc.), just say so :)
I need answers from everyone:
- How long does it take from opening the camera card in Lovelace to a high CPU load?
- If you have multiple camera models, check them one by one. Which models increase CPU and which don't?
- Check that you are using the latest integration version.
Configuration > Integrations > WebRTC > 3 dots > Reload will restart the binary app.
Here are my findings for your questions:
- The CPU usage increases immediately after the stream has started (approx. 4-5% CPU usage per stream on my Intel Xeon)
- The CPU usage only drops again when I restart the host (then it is approx. 0.5% until the stream starts again)
- I have several units of the same camera model
- I use the latest integration version
I've noticed that moving WebRTC cards inside Lovelace causes the card to be reloaded, adding CPU usage without freeing the previously used CPU (and memory). When configuring Lovelace card positions it is very easy to reach 100% CPU, resulting in a system halt (Raspberry Pi 3 here). It seems that on each card move (with the arrows) all the WebRTC cards are reloaded. Also, after opening my Lovelace section with 6 IP cams, the CPU jumps about 15% and is never freed, even if I change to a Lovelace section with no WebRTC cards.
I am having this same issue on an RPi 4: 1 camera, multiple users, Unifi G4 Doorbell. It takes about 12-24 hours to eventually slow down my system, but the WebRTC stream, Node-RED, and Home Assistant start fighting for CPU, each using about 33% (~1.25 cores) of the RPi's computing power.
For now I am going to replace the WebRTC card with the default picture one and move the WebRTC card to a dedicated view so it is not loaded on every device (the camera was on the "main" dashboard).
I am thinking of moving Node-RED to its own RPi. Is there a way to offload the WebRTC server to a separate device?
@AngellusMortis I think it's better to find and fix the problem. There is some bug in code with certain cameras. I can't reproduce it with my cameras.
Is there any solution yet?
If the problem cannot be reproduced, it cannot be fixed. My Raspberry Pi 3B works without CPU issues with multiple cameras...
It uses quite a lot of CPU on my Home Assistant Blue, especially when watching one that is 8 Mbit/s 1440p @25fps. I tried lowering its bitrate to 1 Mbit/s while keeping it on 1440p@25fps and that reduced the CPU use to about a third so it seems bitrate related.
Would it be possible to run rtsp2webrtc on a different server instead of on the HA host? I already have a relatively powerful server that runs rtsp-simple-server and Frigate and many other things and wouldn't mind having a permanent rtsp2webrtc docker running on it if it could offload the Home Assistant host in some way.
We have the same issue on our i3-8300. I disabled the integration in the configuration tab and the process was immediately killed, including the corresponding CPU load. We have two cameras on one Lovelace view. One is integrated using the RTSP URL (Axis h264 Full HD), the other via the entity ID (Hikvision h264 320x240). The CPU usage of the "rtsp2webrtc_v4_" process starts at around 4% the first time I open the Lovelace page and rises each subsequent time I reload the page containing the WebRTC views. After about 10 reloads, it's already at 18%. It does not go down anymore, even after we close all HA clients.
I'd really love to use this integration, since it feels so much faster than other solutions.
Same issue for me. HA in Docker on Ubuntu Server 20.04, 3 Reolink cameras at 1080p. CPU usage is around 20% when restarting the container. It rises steadily to around 40%, as shown here.
Same issue for me using UniFi cameras. Both CPU and RAM usage were running very high. Not stable enough to use at the moment... Going back to the UniFi Protect integration, which has a 5-10 second delay, but is very stable with minimal CPU/RAM usage.
I have the same issue: NUC i3-6100U, 3 Reolink cameras using the sub stream to WebRTC in Frigate cards. I am having to reboot every other day, as the process appears to gain resources without releasing them. It happens even faster with Background: True in the card. I use a tablet with the Fully browser that has the 3 cards on the front page, but they do lose connection despite the Frigate stream staying connected (RTMP vs. RTSP). Are there any logs that might help?
Hello, same problem with a single camera using the addon; after a few days HA crashes (out of memory). Proxmox VM, i7, 16 GB fully dedicated to Home Assistant. I uninstalled the addon, which is a shame, because the stream was truly live with this addon, whereas without it I have a 10-second delay on the video.
Hi all,
I've run into the same issue. Running Home Assistant Core 2021.12.10 on Docker. Machine runs an i7-7567U with 16GB RAM, no resource limiting on the container. 3 x HikVision DT363-28. WebRTC v2.2.0
It initially worked quite well for about a week, and now it will rarely connect to the RTSP feed, and when it does it is quite slow to make the initial connection. Refreshing pages rarely seems to have any effect. I do have 100 ports configured (50000 - 50099), should I reduce that?
I just re-installed the plugin and will come back here and update on how I progress, but over the last 5 minutes so far so good. I have suspicions that it may be getting caught in a rapid recursive loop somewhere. Is there any debug logging I can enable (even if it means switching a flag in a file somewhere)?
I have no knowledge of go, but I'm happy to do what debugging / profiling I can (if I can figure it out). Of course anything I discover, I'll come back and report on.
Hi,
I've resorted to a button on my dashboard that restarts the WebRTC process. I'm thinking of automating this so that if the CPU hits 100% for more than 10 seconds, it kills and restarts WebRTC. A sledgehammer to crack a nut, though, really.
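For anyone wanting to try the same workaround, a minimal sketch of such an automation follows. The entity and binary names are assumptions: `sensor.processor_use` would come from the System Monitor integration, and the binary name varies per platform.

```yaml
# Sketch only: adjust the sensor entity and binary name to your install.
shell_command:
  # HA respawns the WebRTC binary after it is killed
  restart_webrtc: killall rtsp2webrtc_v5_amd64

automation:
  - alias: "Restart WebRTC on sustained 100% CPU"
    trigger:
      - platform: numeric_state
        entity_id: sensor.processor_use  # assumed System Monitor sensor
        above: 99
        for: "00:00:10"
    action:
      - service: shell_command.restart_webrtc
```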
I just ran into this. This is on a HA Blue device, with 6 cameras. The ramp was gradual but persistent. In my case, it was only one specific instance of the binary server causing problems.
This happened multiple times tonight for me, so I think this is a repeatable issue. If a debug version is released, I might be able to profile it?
Seems RTSPtoWebRTC is deprecated in favor of RTSPtoWeb. https://github.com/deepch/RTSPtoWeb
Opened https://github.com/AlexxIT/WebRTC/issues/258 to track.
I'm seeing this too. I'm only using one Eufy indoor pan and tilt camera (this model: https://www.amazon.com/dp/B0856W45VL). CPU usage on a Raspberry Pi 4 starts at ~3% CPU but over time it grows to ~50% CPU. The CPU usage is all coming from rtsp2webrtc_v5_armv7
Started noticing this as well. I'm using rtsp-simple-server as a proxy to my cameras. I am running 10+ cameras and have noticed that, when working normally, it runs at 30-40% on a two-core VM. Previously I had 32 cores allocated to the VM, and it used all available resources on that as well. Perhaps some sessions are never closed properly and keep running?
Same problem here. I'm running in a Hyper-V container on a Xeon v2 processor. All the WebRTC streams are hooked up to cheap RasPi and V380 cameras directly via RTSP URLs, 6 cameras simultaneously on one Lovelace panel.
System CPU use goes to 100% randomly; it smoothly ramps up from 0% to 100%. In the last day, once it took 20 minutes and the other time 2 hours, with a very smooth ramp-up each time. I can log onto the container, and the offending process is "rtsp2webrtc_v5_amd64". The HA system seems usable while this is happening, so I have no idea how long it has been going on. Probably a while; I remember catching it at least once months ago but not knowing why it was at 100%. Even the cameras seem to keep working with the CPU stuck at 100%, so it's possible this problem is under-reported. One thing I can do is scale back the number of virtual cores so at least it impacts the host machine less, but this is pretty annoying, since the CPU resources are shared with other virtual machines on the same Hyper-V host. It also might explain why my HA system seems a bit laggy and unstable all the time. I really want to keep this integration, though, because it's the only one that lets me PTZ my cameras when I'm not home without 20 seconds of lag.
@AlexxIT We understand that you can't reproduce it on a RasPi, but it seems that everyone with this problem is using x64-based platforms. Is there anything we can do to help investigate or fix this? We can all reproduce it, but we don't know how to help fix it.
My current workaround is a root cronjob to kill it every few hours. Not ideal, but it helps:
0 */3 * * * killall rtsp2webrtc_v5_armv7 > /dev/null
I'll likely put a CPU limit on the Docker container it's running in, too.
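As a sketch of that Docker limit (the service name and values here are examples, assuming a Compose-managed container), a CPU cap keeps a runaway rtsp2webrtc process from starving the host:

```yaml
# docker-compose sketch: cap the Home Assistant container at 1.5 cores.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    cpus: "1.5"
```

The same limit can be applied to an already-running container with `docker update --cpus 1.5 <container>`.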
it seems that everyone with this problem is using x64 based platforms.
I'm hitting the issue on a Raspberry Pi 4B running 32-bit Raspberry Pi OS
I'm curious whether this issue still occurs if rtsp2webrtc is run directly. Since I'm not familiar with how exactly this component works, that would likely help to track down where the issue is.
It happened again, and I ran "top" and "ps aux | grep rtsp". It looks like every time I restart the plugin I get a "defunct" process. Nothing here looks interesting to me except one thing: the "mdns-repeater" process also goes to very high CPU until WebRTC is reset. Not sure if that's a clue or anything.
I've been monitoring this for a few days now. So far it doesn't seem to happen at night (when nobody looks at the cameras). Does anyone else here use the Android app or external access? We do, all the time, over here.
Note that for me a normal CPU level is 2 to 5%, even when watching the cams, so in the screenshots it's off-the-charts high.
@timmeh87 I am using both the app and external access. It also seems to happen on internal access using the app. I can't confirm whether there is indeed a correlation between app usage and this issue, however.