
Huge bandwidth hog

Open rpelletierrefocus opened this issue 4 years ago • 89 comments

As soon as I start up this container, I start having issues accessing the internet, and other IP cameras (non-Wyze) start to intermittently go up and down. I stop the container and everything goes back to normal. I don't see other people complaining about this issue, but this thing is completely consuming the bandwidth on my LAN and out through the WAN on my network. This is despite the fact that I have the cameras set to SD30.

Any ideas?

rpelletierrefocus avatar Nov 22 '21 15:11 rpelletierrefocus

Unfortunately, that is one of the cons of having a hub-less IoT device.

You can limit the bridge to accessing the cams over your LAN by setting net mode to LAN only: NET_MODE=LAN
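For reference, a minimal compose sketch with LAN-only mode set (service name and credentials here are placeholders, not a full config):

```yaml
version: "2.4"
services:
  wyze-bridge:
    image: mrlt8/wyze-bridge:latest
    environment:
      - WYZE_EMAIL=you@example.com
      - WYZE_PASSWORD=your-password
      - NET_MODE=LAN  # only connect to cams reachable on the local network; no P2P/relay
```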

mrlt8 avatar Nov 23 '21 04:11 mrlt8

I do have it set to LAN mode, but that made no difference.

rpelletierrefocus avatar Nov 23 '21 08:11 rpelletierrefocus

How many cams do you have? I filtered down to one cam (from 7 or 8) and it seems to be stable now. Like you, all my WLED, ESPHome, and Tuya local devices were going unavailable constantly when I fired it up, and they returned to being happy after I shut it down. So far so good with filtering down the cameras, but I'm still not sure it's worth it, even if it only slows things down a bit with one camera. Guess we shall see if the add-on stays or goes...

ballakers avatar Jan 24 '22 18:01 ballakers

I've got a bit more information to add after doing some testing on my wired-backhaul Unifi AP system. I was using both this container and motioneye in Docker to test and figure out what I wanted to do for an NVR and Home Assistant, and noted that the Unifi console shows about 40% channel utilization with 3 cams and the motioneye container. [screenshot]

When I start this bridge, it's not bandwidth that skyrockets but channel utilization. [screenshot] This causes behavior on the network that could easily be misconstrued as bandwidth consumption, but in reality it seems to be incredibly high packet counts. Eventually the wifi experience on any access point with more than 2 cameras plummets and becomes virtually unusable on the 2.4GHz band. I haven't bothered to wireshark the traffic in both scenarios, but I hope this points someone in the right direction. I also don't mind running a few tests if need be, but free time is scarce these days.
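To make the "packet counts, not bandwidth" point concrete, here's a rough back-of-envelope sketch (all numbers are illustrative assumptions, not measurements from this thread): the same bitrate split into many small frames costs far more airtime, because every 802.11 frame pays a roughly fixed per-frame overhead (preamble, ACK, contention) regardless of payload size.

```python
# Back-of-envelope model of 2.4GHz airtime cost. The PHY rate and
# per-frame overhead below are assumed round numbers for illustration.

def airtime_fraction(bitrate_bps, payload_bytes, phy_rate_bps=24e6,
                     per_frame_overhead_us=300):
    """Approximate share of channel airtime consumed by one stream."""
    packets_per_sec = bitrate_bps / (payload_bytes * 8)
    us_per_packet = per_frame_overhead_us + (payload_bytes * 8) / phy_rate_bps * 1e6
    return packets_per_sec * us_per_packet / 1e6

# The same 1.5 Mbps stream, large vs. tiny UDP payloads:
big = airtime_fraction(1.5e6, payload_bytes=1400)   # ~134 pkt/s
small = airtime_fraction(1.5e6, payload_bytes=200)  # ~938 pkt/s
print(f"1400B payloads: {big:.1%} of airtime")
print(f" 200B payloads: {small:.1%} of airtime")
```

Under these assumptions the tiny-payload stream burns over three times the airtime at the exact same bitrate, which is why utilization can peg while the throughput graphs look tame.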

maxfield-allison avatar Mar 16 '22 16:03 maxfield-allison

When you get a chance, could you compare the results to streaming multiple cameras in the app as a group? Would be interested to see if we might be able to tweak some stuff to at least be on par with the app.

mrlt8 avatar Mar 17 '22 00:03 mrlt8

This is after 24 hours of not running the bridge, with motioneye streaming all of my cameras and the Wyze app opened to the group of cameras attached to the kitchen AP (for clarity, I have them all locked to this AP). [screenshot] It looks like a packet cap is going to be necessary to see why there's so much traffic with the bridge container. I'll run one when I have a few minutes; might be Monday before I can get to it.

I just checked again, and this is after a few minutes of streaming to the Wyze app as well as motioneye. [screenshot]

A few minutes more: [screenshot]

Seems like the utilization spikes might be from other devices but the packet cap will tell us more.

maxfield-allison avatar Mar 17 '22 18:03 maxfield-allison

Hey @maxfield-allison, made some tweaks in the dev branch that could potentially help with channel utilization.

Would appreciate some feedback when you have a chance!

mrlt8 avatar Mar 20 '22 16:03 mrlt8

I'll pull the image in a bit and give it a spin.

maxfield-allison avatar Mar 20 '22 16:03 maxfield-allison

[screenshot] Still seeing the same behavior. This is also starting to look like the same behavior as #221 and #278. To reiterate, I'm also on the RTSP firmware; anecdotally, I was seeing the same behavior on stock firmware before I swapped. Here's a capture of the logs from the container (ignore the test cam, it is in fact offline): [screenshot]

maxfield-allison avatar Mar 21 '22 18:03 maxfield-allison

I also have been trying to solve this. I have about 12 Wyze cameras on my LAN, a mix of v2 and v3. I used to run a tinycam webcam server on an Android VM and feed them to HASS. I never had any issues with wifi load or congestion.

I did have issues with delay and VM stability, so I decided to flash the cameras with the RTSP firmware. I still had no issues with wifi, but I missed having still images.

So I decided to try motioneye. I added one camera and it worked flawlessly, with less delay and great features. However, after adding 5 or so cameras, I noticed things on my wifi started dropping like flies: WLED, Nest speakers, etc. Looking at the Ubiquiti dashboard, I saw that all of those cameras were running full throttle, sending multiple Mbps over the wifi even when they weren't being viewed. At first I thought this was a result of having motion detection on, but after disabling everything it still hammered the wifi with heavy transmit. One thing I found bizarre was that turning off motioneye didn't stop the streams; I had to reboot the AP or the cameras to get them back to normal. I don't know if it's because I was using UDP, but it was like motioneye just asked them all to blindly transmit as much as they could, and then they turned into zombies.

Then I tried the Docker wyze bridge, but I am seeing very similar results as with motioneye. I just turn on the bridge, and all of a sudden all of my cameras are transmitting multiple Mbps over the wifi 100% of the time, even when no one is looking at a stream.

I don't understand why there is a need to have the cameras sending data even when no one is calling for it. I realize that it asks for a still image every few minutes, but that should be the only data being asked for until someone pulls up one of the cameras on a dashboard.

MrKuenning avatar Apr 08 '22 06:04 MrKuenning

That definitely sounds like a bug. I'm running motioneye and my 6 cameras are on the RTSP firmware. Without motioneye or HASS pulled up, and with the integration set up, this is all I'm seeing: [screenshot]

As soon as I open the motioneye webpage: [screenshot]

The traffic spikes, but only on the home network I'm viewing from and on the server running motioneye in Docker, and even then it's not a ton of traffic. You may have to tune your motioneye settings to 20fps and 1920x1080, but I don't think that can account for the other devices dropping off. Are you certain the wyze bridge isn't restarting? I had that issue from my old compose file. You may also try running the dev version, as was suggested a few posts ago.

maxfield-allison avatar Apr 08 '22 13:04 maxfield-allison

@mrlt8 I'm setting up a mirror on my kitchen AP switch port this weekend and doing a Wireshark pcap with the bridge running so we can see what's happening. I'm gonna guess multicast or broadcast traffic is the culprit somehow, but I'll post my findings when I can.
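For anyone wanting to reproduce this, a capture along these lines should work (the interface name and camera IP are placeholders for your own setup):

```shell
# Capture everything to/from one camera via the mirrored port
sudo tcpdump -i eth0 -w bridge-on.pcap host 192.168.15.207

# Afterwards, summarize packet and byte counts in 1-second buckets
tshark -r bridge-on.pcap -q -z io,stat,1

# Break the capture down by protocol to spot multicast/broadcast floods
tshark -r bridge-on.pcap -q -z io,phs
```

Running the same capture with the bridge stopped gives a baseline to diff against.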

maxfield-allison avatar Apr 08 '22 14:04 maxfield-allison

@MrKuenning Thanks for the data points! I've never actually used tinycam as I don't have any Android devices, but I believe they use the same TUTK SDK as us, so performance should be similar. Would you mind sharing which Android VM you use, so I can test things with my cams?

Also, when you say that you "never had any issues with wifi load or congestion", is that with TinyCam streaming from all 12 cams simultaneously or on-demand from each cam?

The bridge is constantly streaming from the cams by design, as I believe most users (including myself) feed our streams into some type of object detection system for motion detection and automation.

On demand streaming is something that I've been wanting to do, especially since the outdoor cams are battery powered, but there are some issues that need to be sorted out before we can get it working.

mrlt8 avatar Apr 08 '22 15:04 mrlt8

The Android VM was just Android-x86 installed on ESXi. I used to just use an old phone lying around for it. After doing more testing, I found some other interesting results.

When I used tinycam, it used the Wyze API to connect to the cameras and then relayed them using its own compression, so tinycam never used the RTSP firmware.

Last night I was doing some tests. I added one of my Wyze cameras in HASS using ffmpeg over UDP.

When I connect to it via VLC, I see it jump to 1-2Mbps and then stop immediately when I close VLC. When I connect to the camera in Home Assistant, I see it jump to 1-2Mbps, but when I close the stream or even the browser, I see the camera still using 1-2Mbps long afterward. (At this point it has been 15 minutes.)

I don't know if it's a bug with the RTSP firmware or ffmpeg failing to stop, but it seems like, with the exception of VLC, any time a tool calls for a Wyze cam RTSP stream, it fails to tell the camera the stream is no longer needed and to please stop sending data.

Tinycam server and the Wyze app can have 12 cameras in them and only call for them when they are needed. But other tools seem to make the initial call, and then the cameras just get stuck hammering the network.
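That pattern is consistent with clients that never send an RTSP TEARDOWN: with RTP over UDP the camera just keeps pushing packets until the session times out, whereas over TCP the stream dies with the connection. A quick way to test that theory (the camera URL is a placeholder):

```shell
# Force RTP over TCP; closing the player tears down the socket,
# so the camera has nothing left to send to.
ffplay -rtsp_transport tcp rtsp://192.168.1.50/live
```

If the lingering transmit only happens with UDP clients, a missing TEARDOWN (or a lost one, since UDP gives no delivery guarantee) would explain the zombie streams.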

MrKuenning avatar Apr 08 '22 23:04 MrKuenning

We are pretty much doing the same thing as tinycam - using the TUTK SDK library to pull the h264 stream from the cameras and copy it to an RTSP stream, which can then be pulled from a third-party integration like Homebridge/Home Assistant at any time.
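As a rough analogy (not the bridge's actual internals; the URLs are placeholders), the copy step is like restreaming with ffmpeg: pull the H.264 feed and remux it to a local RTSP server without re-encoding.

```shell
# Remux an incoming H.264 stream to a local RTSP server with no transcoding.
# rtsp://localhost:8554/... assumes something like rtsp-simple-server listening locally.
ffmpeg -i rtsp://camera.local/live -c copy -rtsp_transport tcp \
       -f rtsp rtsp://localhost:8554/deck-cam
```

Because the video is copied rather than re-encoded, the relay itself adds almost no CPU cost; the network cost is the upstream pull running continuously.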

The RTSP firmware, on the other hand, is publishing the RTSP stream directly on the camera so the stream only gets pulled whenever needed.

However, many, including myself, need the bridge to provide a constant stream, as we are actively processing the stream in an object/motion detection system.

As mentioned before, on-demand streaming is a planned feature.

What @maxfield-allison is trying to figure out is why, on a one-to-one comparison between the bridge and the official app, the bridge is consuming more of the wifi channel.

mrlt8 avatar Apr 09 '22 15:04 mrlt8

Haven't forgotten, just got busy this weekend. Still planning on the packet cap. Edit: to mention, I want it streaming 24x7 as well, for motion detection.

maxfield-allison avatar Apr 13 '22 17:04 maxfield-allison

Haven't gotten around to the Wireshark yet, but I did find some interesting information. While browsing other issues, I saw mention of the continuous recording and notification/detection settings on the cams and how they may be affecting things. As soon as I opened the settings on the 2 v3 cams and started turning down detection sensitivity, person detection, and other "pro" features, and turning off continuous recording, Unifi reported the same massive channel utilization I was seeing when starting the bridge. I'm looking into it further, but I'm feeling like the issue is related either to the age of the RTSP firmware or to the new detection features.

maxfield-allison avatar Apr 27 '22 19:04 maxfield-allison

I've got the packet cap but haven't analyzed it yet. At first glance, there's a ton of additional UDP traffic from the cams to the server.

maxfield-allison avatar Apr 27 '22 23:04 maxfield-allison

@mrlt8 I have the packet cap files. I can provide them to you raw if you'd like to dig through; they capture everything on my security camera network both with and without the bridge running. Not too worried about obfuscating it as long as I can DM you the drive link or something. All I really see that may be causing the issue is the stream data coming from the camera itself. It looks like maybe using UDP to connect locally is flooding the network and taking up all the airtime, at least on the access points with more than 1-2 cameras. To note: the access point having the problem on my network is one of 3, a UAP-AC-LR. If you're unfamiliar, it's built more for long-range open-air communication and doesn't have the throughput capabilities of the other two, a UAP-HD-Nano and a UAP-AC-Pro. I'm also up for scheduling some time on a weekend or a weekday evening to hop on a call, or just bounce back and forth on a thread here to do some live testing and troubleshooting. Let me know what works for you; no rush, and thanks again for your work. This project is super awesome.

maxfield-allison avatar May 07 '22 15:05 maxfield-allison

Thanks for the detailed info!

I don't think I could do much with the captures, but I'll try to dig through the TUTK library to see if it's possible to somehow force it to use TCP or make some other connection adjustments.

mrlt8 avatar May 07 '22 15:05 mrlt8

I'm also going to do some further testing to see if it's that access point in particular and, if so, what factors contribute. Might move one of the more capable stations to its spot and see if I have the same behavior.

maxfield-allison avatar May 07 '22 15:05 maxfield-allison

I tested a few changes in configuration: LAN to P2P, storm control on the AP ports at 100pps for broadcast and multicast, and switching the Docker container network mode from bridge to host. No major changes in behavior observed. I noted that the container causes 98% channel utilization, with about 60-odd percent being Rx traffic at 1.5Mbps. With only the RTSP stream from the firmware, I'm at about 30% utilization with 22% Rx at 4Mbps. I went ahead and pulled a debug log as well and focused in on my deck cam, but all it tells me is that a connection is established then dropped, ad nauseam. Next things I'm going to try are removing all but a single cam from operation on that access point and then swapping out the AC-LR for an AC-Pro, just to kick the can a bit further. I'm hesitant to flash the newest Wyze firmware over the RTSP firmware, but if we don't get any further and I have a rainy day, I'll probably bother with it.

maxfield-allison avatar May 08 '22 02:05 maxfield-allison

Hmm, I've been trying to get IOTC_TCPRelayOnly_TurnOn to work, but it kept going to relay mode until I switched from bridge to host mode.

Not sure if it helps, but you can test it out by setting IOTC_TCP in your env with the dev branch/images.

Unfortunately, Docker Desktop (at least on macOS) doesn't support host mode, so you'll need to run it on an OS that does.

mrlt8 avatar May 08 '22 05:05 mrlt8

All good, I've been on Linux for a while. I'll give it a shot in a few.

maxfield-allison avatar May 08 '22 06:05 maxfield-allison

So bizarre. Same behavior, but the logs are a bit different. Here's a snippet of a few lines here and there; most repeat several times a second.

2022/05/08 06:37:11 [py.warnings][WARNING][Deck Cam] WARNING: Skipping smaller frame at start of stream (frame_size=1)
2022/05/08 06:37:17 [py.warnings][WARNING][Driveway Cam] WARNING: Frame not available yet
2022/05/08 06:37:17 [wyzecam.iotc][DEBUG][Outdoor Cam] Connect via IOTC_Connect_ByUID_Parallel
2022/05/08 06:37:17 [py.warnings][WARNING][Driveway Cam] WARNING: Frame not available yet
2022/05/08 06:37:17 [py.warnings][WARNING][Driveway Cam] WARNING: Frame not available yet

Still lots of traffic; I'm just not sure where or why. I'm starting to think that it isn't your app and is more likely an interaction with specific networking situations. I just don't know enough to be able to find the root cause.

maxfield-allison avatar May 08 '22 06:05 maxfield-allison

That is strange. Did it connect over a relay server mode: 1 (0: P2P mode, 1: Relay mode, 2: LAN mode) under SInfoStructEx?
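For reference, a tiny helper mapping those TUTK connection-mode codes (the values are as listed above; the function name is my own):

```python
# TUTK SInfoStructEx connection modes, per the values quoted above.
IOTC_MODES = {0: "P2P mode", 1: "Relay mode", 2: "LAN mode"}

def describe_mode(mode: int) -> str:
    """Return a human-readable label for an SInfoStructEx mode value."""
    return IOTC_MODES.get(mode, f"unknown ({mode})")

print(describe_mode(2))  # → LAN mode
```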

mrlt8 avatar May 08 '22 07:05 mrlt8

Mode 2 under the info structure:

2022/05/08 06:44:30 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=29 code=10009 txt_len=701>: b'{"connectionRes":"1","cameraInfo":{"videoParm":{"type":"H264","bitRate":"120","resolution":"1","fps":"20","horizontalFlip":"2","verticalFlip":"2","logo":"1","time":"1"},"settingParm":{"stateVision":"1","nightVision":"2","osd":"1","logSd":"1","logUdisk":"1", "telnet":"2","tz":"-5"},"basicInfo":{"firmware":"4.61.0.3","type":"camera","hardware":"0.0.0.0","model":"WYZE_CAKP2JFUS","mac":"7C78B21AD8FF","wifidb":"89"},"channelResquestResult":{"video":"1","audio":"1"},"recordType":{"type":"3"},"sdParm":{"status":"1","capacity":"29652","free":"1093","detail":"0"},"uDiskParm":{"status":"2","capacity":"0","free":"0"},"apartalarmParm":{"type":"0","startX":"25","longX":"50","startY":"25","heightY":"50"}}}',

2022/05/08 06:44:30 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] SEND <K10056SetResolvingBit code=10056 resp_code=10057> <TutkWyzeProtocolHeader prefix=b'HL' protocol=1 code=10056 txt_len=3> b'\x01x\x00',
2022/05/08 06:44:30 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=29 code=10057 txt_len=1>: b'\x01',

2022/05/08 06:44:31 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] No longer listening on channel id 0,
SInfoStructEx:,
	size: 156,
	mode: 2,
	uid: b'FNJRTB64R21G5EW2111A',
	remote_ip: b'192.168.15.207',
	remote_port: 33944,
	tx_packet_count: 55,
	rx_packet_count: 149,
	iotc_version: 50399986,
	vendor_id: 49193,
	product_id: 62209,
	group_id: 61763,
	local_nat_type: 3,
	remote_nat_type: 3,
	net_state: 1,
	remote_wan_ip: b'0.0.0.0',
2022/05/08 06:44:31 [WyzeBridge][INFO][Deck Cam] [videoParm] {'type': 'H264', 'bitRate': '120', 'resolution': '1', 'fps': '20', 'horizontalFlip': '2', 'verticalFlip': '2', 'logo': '1', 'time': '1'},

maxfield-allison avatar May 09 '22 16:05 maxfield-allison

Ok, removed the Driveway and Deck v3 cams by turning them off in the Wyze app, so only the sunroom v2 cam is running.

#      - NET_MODE=P2P
#      - QUALITY=HD120
      - ENABLE_AUDIO=True
      - DEBUG_LEVEL=debug
      - FPS_FIX=true
      - IOTC_TCP=true
    network_mode: host
[screenshot]

Adding back the deck cam

[screenshot]

Removed the deck cam via app power-off: [screenshot]

And added the driveway cam: [screenshot]

Removed all cams and added only the deck v3 cam: [screenshot]

Then added the driveway v3 cam: [screenshot]

So it definitely looks like adding more than 1 of the v3 cams running the RTSP firmware to a Unifi AP-AC-LR is a bad idea. At this point I'm starting to lean more towards the access point, the RTSP firmware being so old for the v3 (they took it down from their site recently, btw), or some combo. There's still some weird behavior, and it only shows up when the bridge is active, but I'm getting more convinced it's coincidental, not causal.

maxfield-allison avatar May 09 '22 16:05 maxfield-allison

I've got a Nest Hello doorbell, a v2, and a Pan v1 connected to a Unifi UAP-HD-Nano (along with laptops and phones galore), and it's sitting at just over 30% utilization on a different wifi channel. I did try changing the channels around, but that didn't make any difference either. I have another RTSP v3 sitting here; I'm going to add it to the HD-Nano really quick and see if I get the same broken behavior on that AP. According to the previous tests, it should send utilization up to the 90% range. This AP has more antennas, though, so maybe it won't be quite as high, but that could narrow it down to the antenna configuration of the access points in these instances. Still pretty crazy utilization, and I'm sure there's more optimization to be done on the bridge's end, but this is a general problem with wifi IoT devices regardless.

maxfield-allison avatar May 09 '22 16:05 maxfield-allison

I think the problem is incapable access points being overloaded by higher-intensity applications than they're designed for. Added the RTSP v3 test cam to motioneye and connected it to the NanoHD with all those other devices: [screenshot] Barely sweating now. Upped the frame rate in motioneye to 20fps and the resolution to 1080p, and it sometimes jumps to 60% but hovers around 40% utilization.

So again, maybe more optimization can be done on the bridge side of things, but I don't have a good idea as to how to accomplish that besides implementing some pretty aggressive compression. I'm assuming the bridge is what decodes the single cam stream into multiple formats and serves them, but if it's the camera offering every stream type separately, maybe adding switches to disable what isn't needed?

maxfield-allison avatar May 09 '22 17:05 maxfield-allison