MixedReality-WebRTC
Is it possible to get control of the frame (i.e. the one produced by LocalVideoTrack) that is sent to the remote peer?
Currently, I'm using DeviceVideoTrackSource and LocalVideoTrack to create a source for the webcam and, in turn, the local video track. I want to apply a blur effect to the webcam feed before sending it to the remote peer. Is it possible to get access to the frames produced by the webcam that are sent to the remote peer?
Generally yes.
The DeviceVideoTrackSource is a pretty optimised track source which appears to capture and send frames all within the native library. I'm not aware of any way to "post process" these frames when using DeviceVideoTrackSource.
However, you could create a custom Video Track Source and capture the camera yourself. Process the frames however you like, then send them into WebRTC. Do bear in mind that if you capture/process frames yourself, it will likely not be as fast, especially if you implement in managed code. Depending on your target platform (e.g. PC, Mobile, HoloLens etc), you might want to consider implementing this in native code if performance is sensitive.
Otherwise, if someone is aware of a way to intercept the DeviceVideoTrackSource frames, I would love to know too!
Thanks for replying @spacecheeserocks. Can you give me a glimpse of how I can send frames to WebRTC by creating a custom video track source? I have little idea about that.
Sorry, I don't have a concrete example of this to hand, nor can I exactly remember the name of the class.
The approach is slightly different depending on if you are using Unity or the pure C# library.
Have a dig around the available implementations of VideoTrackSource etc. If I recall correctly, there is a class called something like CustomVideoTrackSource, where you just implement a callback (something like I420VideoFrameReady).
If not, you can make your own custom class that derives/implements VideoTrackSource.
Either approach generally involves generating a feed of frames, and then passing those frames whenever the I420VideoFrame callback is fired.
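I don't have this compiling in front of me, but from memory the 2.0-era C# library calls this class ExternalVideoTrackSource, and the model is pull-based: WebRTC invokes your callback each time it wants a frame, and you complete the request with an I420 frame. A rough sketch (class and field names from memory, frame size and track name are just placeholders, and the actual blur/capture step is left as a TODO):

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.MixedReality.WebRTC;

class BlurredCameraSource : IDisposable
{
    private readonly ExternalVideoTrackSource _source;
    // Planes to be filled by your own capture + blur pipeline.
    private readonly byte[] _yPlane, _uPlane, _vPlane;
    private const int Width = 640, Height = 480; // placeholder resolution

    public BlurredCameraSource()
    {
        _yPlane = new byte[Width * Height];
        _uPlane = new byte[Width * Height / 4];
        _vPlane = new byte[Width * Height / 4];
        // WebRTC will call OnFrameRequested whenever it wants a new frame.
        _source = ExternalVideoTrackSource.CreateFromI420ACallback(OnFrameRequested);
    }

    public LocalVideoTrack CreateTrack() =>
        LocalVideoTrack.CreateFromSource(_source,
            new LocalVideoTrackInitConfig { trackName = "blurred_webcam" });

    private void OnFrameRequested(in FrameRequest request)
    {
        // TODO: copy your latest captured-and-blurred frame into the planes here.
        var y = GCHandle.Alloc(_yPlane, GCHandleType.Pinned);
        var u = GCHandle.Alloc(_uPlane, GCHandleType.Pinned);
        var v = GCHandle.Alloc(_vPlane, GCHandleType.Pinned);
        try
        {
            var frame = new I420AVideoFrame
            {
                width = (uint)Width, height = (uint)Height,
                dataY = y.AddrOfPinnedObject(), strideY = Width,
                dataU = u.AddrOfPinnedObject(), strideU = Width / 2,
                dataV = v.AddrOfPinnedObject(), strideV = Width / 2,
                dataA = IntPtr.Zero, strideA = 0,
            };
            request.CompleteRequest(in frame);
        }
        finally { y.Free(); u.Free(); v.Free(); }
    }

    public void Dispose() => _source.Dispose();
}
```

Then attach the returned track to your transceiver the same way you would a track created from DeviceVideoTrackSource. Double-check the exact names against the library source before relying on this.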
If you are planning to run on UWP, only the I420 frame callbacks work (don't bother with the ARGB callbacks). This is documented somewhere if I recall, but it's a bit hidden.
Converting RGB to I420 is tricky, but if you're lucky, your video capture source will already be able to provide I420 frames.