
Option to create a snapshot on interval

jzhvymetal opened this issue 4 years ago • 15 comments

I want to use ZoneMinder to monitor for motion. Is there any way to add an option that creates a JPEG snapshot with the same filename at a given interval, overwriting it each time? That would allow ZoneMinder to monitor the file as a motion source.

https://wiki.zoneminder.com/How_to_use_ZoneMinder_with_cameras_it_may_not_directly_support

jzhvymetal avatar Nov 15 '20 18:11 jzhvymetal

isn't it easier to install the official RTSP fw and use ZM's ffmpeg source?

beaverdude avatar Nov 18 '20 16:11 beaverdude

isn't it easier to install the official RTSP fw and use ZM's ffmpeg source?

Maybe, but that would require an RTSP server, which would demand more resources from the camera and network bandwidth to stream RTSP. Since the NFS share is already being utilized, creating a snapshot image should not be that resource intensive. Also, the RTSP firmware never gets any updates.

jzhvymetal avatar Nov 18 '20 19:11 jzhvymetal

Actually, it is possible to grab a still image directly from the sensor: https://honeylab.hatenablog.jp/entry/2020/06/01/024353 But it is in NV12 format and requires conversion. Maybe this would be helpful: https://github.com/andyongg/yuv2image

beaverdude avatar Nov 18 '20 19:11 beaverdude

Actually, it is possible to grab a still image directly from the sensor: https://honeylab.hatenablog.jp/entry/2020/06/01/024353 But it is in NV12 format and requires conversion. Maybe this would be helpful: https://github.com/andyongg/yuv2image

I read the page with Google Translate, but there should already be a native way inside the camera, because it creates a JPG in the alarm directory on each motion event.

jzhvymetal avatar Nov 19 '20 02:11 jzhvymetal

Actually, it is possible to grab a still image directly from the sensor: https://honeylab.hatenablog.jp/entry/2020/06/01/024353 But it is in NV12 format and requires conversion. Maybe this would be helpful: https://github.com/andyongg/yuv2image

For the life of me I could not get yuv2image to compile because of its OpenCV requirements. I did find an alternative and can get 5 fps. Not sure if it is truly 5 fps, but it is updating in ZoneMinder.

  1. Copy the following file to your WyzeCams NFS share: https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks/blob/master/firmware_mod/bin/avconv

  2. Run the following bash script:

         #!/bin/sh
         cp /mnt/WyzeCams/avconv /tmp/avconv
         while :
         do
             impdbg --save_pic /tmp/output.nv12 --pic_type NV12 &
             /tmp/avconv -loglevel quiet -y -f rawvideo -pixel_format nv12 \
                 -s 1920x1080 -i /tmp/output.nv12 -vf fps=1 /media/mmc/output.jpg
             sleep 0.1
         done
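
One caveat, since the goal is for ZoneMinder to poll this file: if ZoneMinder reads /media/mmc/output.jpg while avconv is still writing it, it can pick up a truncated image. A minimal variation of the loop above (same binary, flags, and paths, all taken from the script) writes to a temporary name and renames it into place; rename is atomic within a filesystem, so a watcher never sees a half-written JPEG:

         #!/bin/sh
         # Sketch only: same avconv binary and paths as the script above.
         cp /mnt/WyzeCams/avconv /tmp/avconv
         while :
         do
             impdbg --save_pic /tmp/output.nv12 --pic_type NV12 &
             # Write to a temporary name first...
             /tmp/avconv -loglevel quiet -y -f rawvideo -pixel_format nv12 \
                 -s 1920x1080 -i /tmp/output.nv12 -vf fps=1 /media/mmc/output.tmp.jpg
             # ...then rename into place, so the watcher never reads a partial file.
             mv /media/mmc/output.tmp.jpg /media/mmc/output.jpg
             sleep 0.1
         done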

jzhvymetal avatar Nov 21 '20 23:11 jzhvymetal

@jzhvymetal this is really interesting... I've been monitoring the issues here due to the breakage on the latest firmware and saw your update.

I've been contemplating whether it would be possible to use the on-camera smarts of the Wyze to reduce the amount of network traffic and avoid a constant recording stream.

I'm basically thinking about whether we can send a hook to software (like ZoneMinder) when new video is recorded because the camera has determined there is motion. That would allow for off-camera, local person detection, etc. Is that what you're looking at?

Semag avatar Nov 22 '20 18:11 Semag

nice! off topic...have you tried to get the RTSP server from dafang hacks working on this? I couldn't!

gtxaspec avatar Nov 25 '20 04:11 gtxaspec

nice! off topic...have you tried to get the RTSP server from dafang hacks working on this? I couldn't!

RTSP is not so easy, because the Wyze program on the camera locks the camera's V4L device, so it cannot be shared. If you kill the Wyze program, it will respawn or reboot the camera. Not sure why no one has ever tried compiling v4l2loopback so the camera device could be shared with another program; that way avconv, ffmpeg, or the RTSP server could all use the same device. With access to the V4L device, avconv and ffmpeg have the ability to stream directly to RTSP, so nothing else would be required.
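
For the curious, here is a rough, hypothetical sketch of what that could look like, assuming v4l2loopback could be cross-compiled against the camera's Ingenic kernel (the module path, device number, and feeding loop are all made up for illustration):

    # Hypothetical: requires v4l2loopback built for the camera's kernel.
    insmod v4l2loopback.ko video_nr=9

    # The Wyze process still owns the real sensor device, so something else
    # has to feed the loopback -- e.g. looping impdbg still grabs into it:
    while :
    do
        impdbg --save_pic /tmp/output.nv12 --pic_type NV12
        ffmpeg -loglevel quiet -f rawvideo -pixel_format nv12 -s 1920x1080 \
            -i /tmp/output.nv12 -f v4l2 /dev/video9
    done &

    # Multiple consumers could then open /dev/video9 concurrently, e.g.:
    ffmpeg -f v4l2 -i /dev/video9 -vf fps=1 -update 1 /media/mmc/output.jpg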

jzhvymetal avatar Nov 25 '20 04:11 jzhvymetal

@jzhvymetal / @gtxaspec - just to update this thread, I'm really close to having what I'd like.

The Wyze cam stores self-created snapshots (jpg) in the alarm directory when motion is detected.

I've installed a Deepstack Docker container and integrated Deepstack with Home Assistant for person detection (https://siytek.com/home-assistant-person-detection/).

Now the part I'm working on / struggling with is getting a "folder_watcher" integration on Home Assistant to watch the remote "alarm" NFS folder. As an alternative, I've been thinking this afternoon of mounting one of my Home Assistant Samba folders on the NFS server so that the alarm JPGs go to the Home Assistant server instead.

If I can get the folder_watcher to fire an automation when the JPG is created, then I can pass it to Deepstack and run person detection on the image. I believe this hits a few key points:

  • Network traffic is kept clean - the main "motion detection" work is done on the camera itself right now (either via pixel changes or with the PIR motion detector)
  • Once motion is detected, the clip is saved to the NFS and the JPG is written. Deepstack would analyze the JPG and do person/object detection
  • I would only be notified if person/object detection succeeds. This lets me grab all the motion events and then filter them through the Deepstack person detection.

In my mind, this keeps extraneous traffic off my wireless band (with multiple cameras), cuts down on false events, and keeps a lot of the processing local.

Semag avatar Jan 01 '21 22:01 Semag

nice! off topic...have you tried to get the RTSP server from dafang hacks working on this? I couldn't!

RTSP is not so easy, because the Wyze program on the camera locks the camera's V4L device, so it cannot be shared. If you kill the Wyze program, it will respawn or reboot the camera. Not sure why no one has ever tried compiling v4l2loopback so the camera device could be shared with another program; that way avconv, ffmpeg, or the RTSP server could all use the same device. With access to the V4L device, avconv and ffmpeg have the ability to stream directly to RTSP, so nothing else would be required.

Do you know of anyone who has been able to create a new /dev/video? device for another program to use? I am not really interested in RTSP, as it is unreliable, quirky, and delayed. I would rather stream MJPEG; even though it is more intense and heavier on bandwidth, it is usually closer to real time.
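
For what it's worth, if a shareable device like the v4l2loopback idea above ever materialized, ffmpeg alone can serve a rudimentary MJPEG stream over HTTP via its mpjpeg muxer and the http protocol's single-client listen mode (the device path here is hypothetical):

    # Hypothetical: assumes a shareable /dev/video9 from a loopback module.
    # Serves one HTTP client a multipart-JPEG stream that browsers and
    # ZoneMinder can consume directly.
    ffmpeg -f v4l2 -i /dev/video9 -f mpjpeg -listen 1 http://0.0.0.0:8080/stream.mjpg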

endertable avatar Jan 22 '21 06:01 endertable

Just to give an update on where I'm at, I currently have the following set up:

  • NFS Mount for recordings
  • Record events only (doesn't utilize network bandwidth unnecessarily)
  • Upon creation of a new detection image (in the alarm directory), a script that monitors that directory calls a Deepstack person-detection Docker container
  • If a person is detected, it fires an HTTP call to Home Assistant via NodeRed
  • I receive a push notification on my phone with the timestamp, camera name, and person detected.

This all seems to work pretty smoothly so far, and it allows me to turn off all notifications on the Wyze cams (and thus remove almost all false alarms).
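
The NodeRed-to-Home-Assistant step boils down to a webhook-style HTTP POST; a minimal equivalent with curl (the webhook ID and payload fields are hypothetical, matching whatever the automation trigger expects):

    # Hypothetical webhook ID; Home Assistant webhook triggers need no auth token.
    curl -X POST -H "Content-Type: application/json" \
        -d '{"camera": "front_door", "label": "person"}' \
        http://homeassistant.local:8123/api/webhook/wyze_person_detected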

Semag avatar Mar 16 '21 16:03 Semag

@Semag Hi, this sounds great! Good implementation with very good reasons. I understand all of it except for “calls a deepstack person detection docker container”. Is this some kind of program suite? Can you elaborate? Sounds like something I’d love to try. :) Thanks

endertable avatar Mar 20 '21 12:03 endertable

@endertable -

So I was looking for a way to do person detection locally, kind of as a fun project. I found this (1100-comment thread!!) over at Home Assistant:

https://community.home-assistant.io/t/face-and-person-detection-with-deepstack-local-and-free/92041

Deepstack is a Docker container with object detection built in; it was actually pretty simple to get running, and it is a separate project from Home Assistant. While I initially tried to get it working within Home Assistant, I had problems copying files and moving them around, so what I ended up doing was building a Python script that monitors the folders on my NFS machine.

So, the NFS server is running; the script monitors the "alarm" folders for new files, and each time it gets a new file, it pops it over to the Deepstack container to run object detection.

https://deepstack.cc/ <--- you can go there for some docs and some initial information on the docker and how to call it via a quick python script.
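
For anyone who wants to see the shape of that call before reading the docs: Deepstack's object-detection endpoint is a single HTTP POST with a multipart image field. A minimal sketch, assuming a hypothetical snapshot path and the stock Deepstack image:

    # One way to run Deepstack with object detection enabled:
    docker run -d -e VISION-DETECTION=True -p 5000:5000 deepquestai/deepstack

    # Post a snapshot; the JSON response contains a "predictions" array whose
    # entries carry "label" (e.g. "person") and "confidence" fields.
    curl -X POST -F image=@/mnt/WyzeCams/alarm/snapshot.jpg \
        http://localhost:5000/v1/vision/detection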

I don't have any of my scripts on GitHub, but I could probably put them up if someone is interested. I did run into a series of false positives for the first time the day before yesterday. It was very windy outside, so my camera was detecting motion every 5 minutes. It just so happened that my trash can and a shadow looked enough like a "person" that it would fire my "person detect" every 5 minutes haha!


This is pretty rudimentary and I know there are issues:

  • Wyze only takes a snapshot every 5 minutes by default, which is kind of a bummer if it's windy out.
  • If you get the Wyze motion sensor, you get a snapshot every 1 minute, which is much better
  • The "alarm" snapshot is taken a couple of seconds after the motion, so it usually has a pretty good picture of the subject. This definitely does not look at the video, only a single image
  • I have not investigated training the Docker AI for an alternative use case. Theoretically I could train it on the default "vanilla" layout of my cameras so that it could pick out major differences, which would be much better than generic object detection. In reality, rather than "person detection," what we are looking for is "hey, this camera looks like XYZ 99% of the time; tell me when it doesn't look like that." Also, "if it now starts looking like a spider has spun a web in front of it, you can probably stop alerting me..."

Anyway, those are just some thoughts off the top of my head.

Semag avatar Mar 23 '21 17:03 Semag

Now the part i'm working on / struggling with is getting a "folder_watcher" integration on home assistant to watch the remote "alarm" nfs folder. As an alternative, i've been thinking this afternoon of trying to mount one of my Home Assistant Samba folders in the NFS server so that the alarm jpgs go to the home assistant server instead.

What platform runs your NFS? If it's Unix-like, then the package inotify-tools will get you what you need. I use inotifywait and the inotify development set for C programs extensively on Ubuntu. Works great! If windoze or FreeNAS, you're out of luck, though. Unlike *ix, those OSes don't have the option of creating event conditions when files change. (Such signal support has to be built deep into the filesystem logic and accepted by the kernel.)
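
A minimal sketch of the kind of loop inotify-tools enables, with the export path and the detection endpoint as assumptions (close_write fires only once the JPEG is fully written, unlike create):

    #!/bin/sh
    # Requires the inotify-tools package. Paths and endpoint are assumptions.
    inotifywait -m -e close_write --format '%w%f' /exports/WyzeCams/alarm |
    while read -r FILE
    do
        case "$FILE" in
            *.jpg) curl -s -X POST -F image=@"$FILE" \
                       http://localhost:5000/v1/vision/detection ;;
        esac
    done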

delovelady avatar Jun 07 '21 21:06 delovelady

I realize this is six months old, but I didn't see a response that addressed the question dead-on about creating snapshots on an interval. Actually, I've been doing some toying with this, and it turns out that it's fairly simple (as long as your interval is in whole seconds, and no less than 3). If you use the timelapse feature, a record.h264 video is created immediately and populated with each picture on the fly. ffmpeg can successfully extract each frame into your choice of directory (even though the video is not yet complete), a la:

(In these examples I assume the command is being run in the same directory as the timelapse. Adjust accordingly.)

    ffmpeg -i record.h264 image-%04d.jpg

(will create image-0001.jpg, image-0002.jpg, image-0003.jpg, et cetera)

This can also be made a bit smarter, telling ffmpeg to extract certain frames. Here's one way (extracting frames 4 through 8):

    ffmpeg -i record.h264 -vf select='between(n,4,8)' -vsync 0 img-%02d.jpg

(will create img-01.jpg through img-05.jpg)
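
And tying this back to the original request (a single JPEG with a fixed filename, overwritten each time): the image2 muxer's -update flag does exactly that, writing every extracted frame to the same file:

    ffmpeg -i record.h264 -vf fps=1 -update 1 output.jpg

(Each run processes whatever frames exist in record.h264 so far, leaving output.jpg holding the most recent frame.)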

Hope that helps.

delovelady avatar Jun 08 '21 03:06 delovelady