rtcbot
Using RTCBot with picamera encoder
Hi,
after some effort I have managed to create a fork of aiortc with an example that uses the accelerated picamera h264 encoder instead of the ffmpeg encoder, which minimizes CPU load and allows higher resolution and framerate. I'll submit a pull request to include my change in aiortc, but I'll need people to try my code and validate it. You can find my fork of aiortc at:
https://github.com/jpiat/aiortc
Try the picam example and please submit issues.
This is awesome, could you please give a quick example of how I can test it? I could not find any picam references in your fork.
I have added a picamera example. Please give it a try and let me know. I have made a pull request for aiortc, with no success so far.
Unfortunately I don't personally have time to test this, but I am quite excited about hardware-encoded video in aiortc! I will leave this open for now, since CPU-based encoding is one of the major weaknesses of RTCBot's current code.
I'll integrate with rtcbot, but my main concern for now is that my pull request on aiortc still hasn't been reviewed. This means that any rtcbot version using my aiortc fork won't work with the official aiortc package.
Nice...Make us happy.... :)
I got the picamera encoder working with aiortc and now in rtcbot. One needs to use my fork of aiortc and then my fork of rtcbot to test (before I make a proper pull request). Streaming 640x480@30fps takes only 8-10% CPU on a Raspberry Pi 4. The example is in examples/streaming/videopi.py
https://github.com/jpiat/aiortc https://github.com/jpiat/rtcbot
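For those curious how the picamera path works before trying the fork: the hardware encoder produces an Annex-B h264 byte stream, which picamera writes into a file-like output object; the stream then has to be split on start codes into NAL units before they can be packetized for WebRTC. Below is a minimal illustrative sketch, not the fork's actual API — `H264Output` and `split_nal_units` are invented names, and it assumes 4-byte start codes:

```python
# Sketch of feeding picamera's hardware-encoded h264 into a WebRTC
# packetizer. Names are illustrative, not the fork's real API.

START_CODE = b"\x00\x00\x00\x01"  # Annex-B 4-byte start code (assumed)


def split_nal_units(stream: bytes):
    """Split an Annex-B h264 byte stream into individual NAL units."""
    units = stream.split(START_CODE)
    # The stream begins with a start code, so drop empty fragments.
    return [u for u in units if u]


class H264Output:
    """File-like object that picamera writes encoded h264 data into."""

    def __init__(self):
        self.nal_units = []

    def write(self, data: bytes) -> int:
        # Collect NAL units; a real implementation would hand them
        # to the RTP packetizer instead of buffering them.
        self.nal_units.extend(split_nal_units(data))
        return len(data)


# On real hardware this would be used roughly as:
#   import picamera
#   with picamera.PiCamera(resolution=(640, 480), framerate=30) as cam:
#       cam.start_recording(H264Output(), format="h264")
```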
One remaining issue is that when starting the stream, it sometimes takes a while to get a stable stream if the video does not autoplay.
And it even works on a Pi Zero! 640x480@30fps at 53% CPU, 640x480@15fps at 33% CPU.
This is great news, I really appreciate the work you put into getting this to work!
Regarding your fork of rtcbot, I briefly looked at the updated code, and it looks like it is directly using picamera to encode h264, and then sending on the video to the connection. I wonder if there is a way to get h264 hardware-encoded without relying on picamera (for example using a USB webcam). That way one could still use rtcbot simply by sending the frames to the connection, no matter where they come from.
I remember that there were possible ways to do this directly, like https://github.com/aiortc/aiortc/issues/323, but it looks like that doesn't actually work for some reason (https://github.com/aiortc/aiortc/issues/366). I have unfortunately not had time to explore things deeply there, and probably won't for several more months.
Nevertheless, I would be happy to merge a modified version of the changes proposed just to get hardware acceleration working ASAP, but it will have to wait until the maintainer of aiortc decides on a path to take regarding support for hardware acceleration/pass-through encoding in the upstream library, since the proposed changes use a modified version of aiortc.
Hi, with what I implemented in aiortc it is also possible to use gstreamer to get encoded frames instead of using picamera. Most SBCs have gstreamer support for hardware encoding, so that could be one way to support hardware encoding across most platforms. The main advantage of using picamera is that it limits the copies needed between user space and the ISP/GPU, which allows high performance.
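As a rough illustration of the gstreamer route: a pipeline can pull raw frames from a v4l2 device, push them through a hardware h264 encoder element, and emit an Annex-B byte stream on stdout. This is a hedged sketch assuming a `v4l2h264enc` element is available on the target board; element names and caps vary by platform:

```python
# Hypothetical sketch: obtain hardware-encoded h264 from gstreamer
# instead of picamera, e.g. for a USB webcam on an SBC.
import subprocess


def gst_h264_pipeline(device="/dev/video0", width=640, height=480, fps=30):
    """Build a gst-launch-1.0 command that writes Annex-B h264 to stdout."""
    return [
        "gst-launch-1.0", "-q",
        "v4l2src", f"device={device}", "!",
        f"video/x-raw,width={width},height={height},framerate={fps}/1", "!",
        "v4l2h264enc", "!",  # hardware encoder element (platform-dependent)
        "video/x-h264,stream-format=byte-stream", "!",
        "fdsink", "fd=1",
    ]


# On a board with the encoder available, one would read the stream:
#   proc = subprocess.Popen(gst_h264_pipeline(), stdout=subprocess.PIPE)
#   chunk = proc.stdout.read(4096)  # already-encoded h264 to pass through
```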
I don't like the way I had to implement hardware encoding in aiortc, but aiortc decided to rely on pyAV/libavformat for encoding, and libavformat does not allow hardware encoding on the Pi (omx plugin) for the Pi camera. aiortc embeds the encoding in the library to make it transparent to the user, but if there were a way for the user to provide their own encoder wrapper, it would make things a lot easier.
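To make the wish concrete: a user-pluggable encoder hook could be as small as an interface that turns frames into encoded payloads, with a pass-through variant for sources like picamera or gstreamer that already deliver encoded data. This is purely hypothetical, aiortc has no such interface; every name below is invented for illustration:

```python
# Purely hypothetical sketch of a user-provided encoder wrapper.
# aiortc does not expose such a hook; names are invented.
import abc


class EncoderWrapper(abc.ABC):
    """Turns raw frames into encoded packets, bypassing libavformat."""

    @abc.abstractmethod
    def encode(self, frame) -> list:
        """Return a list of encoded payloads (e.g. h264 NAL units)."""


class PassThroughEncoder(EncoderWrapper):
    """For sources that already deliver encoded data (picamera, gstreamer)."""

    def encode(self, frame) -> list:
        # The frame is already an encoded payload; hand it straight
        # to the packetizer without touching libavformat.
        return [frame]
```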
I submitted my pull request to aiortc quite some time ago, and I did not get any feedback.
Hi everyone!
I just wanted to point out that it now seems possible to use the omx encoder on a Raspberry Pi 3/4 with a 32-bit OS. You just need to configure aiortc to prefer h264 over VP8, as described in https://github.com/aiortc/aiortc/issues/502
Have not tried it myself yet though, just wanted to mention it.
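For anyone wanting to try this, the approach in aiortc's own webcam example is to filter the sender's codec capabilities down to h264 and apply them with `setCodecPreferences`. The filtering step is sketched below; the commented-out part shows roughly how it would be wired into a peer connection (assuming current aiortc APIs, untested here):

```python
# Prefer h264 over VP8 so the hardware encoder path can be used,
# following the pattern from aiortc's webcam example.

def prefer_codec(codecs, mime_type):
    """Keep only the capability entries whose mimeType matches."""
    return [c for c in codecs if c.mimeType == mime_type]


# With aiortc this would be applied roughly as:
#   from aiortc.rtcrtpsender import RTCRtpSender
#   codecs = RTCRtpSender.getCapabilities("video").codecs
#   transceiver = next(t for t in pc.getTransceivers() if t.sender == sender)
#   transceiver.setCodecPreferences(prefer_codec(codecs, "video/H264"))
```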
I'm actually trying to get hardware-accelerated encoding to work on the 64-bit OS for the Raspberry Pi 4 (for ROS2 support), but as far as I can tell the omx codec is considered deprecated and won't be supported on that OS going forward, as it is replaced by h264_v4l2m2m.
I tried to get that to work but ended up with a segfault (see https://github.com/PyAV-Org/PyAV/issues/798).
The rtcbot project was working great and was really easy to use for streaming to and from my laptop, so I'm hoping we can get it performant enough to remote-control arm64-based robots :)