✨ react-native-webrtc
What feature or enhancement are you suggesting?
Great library, and thanks for developing it. I'm wondering whether I can use it together with react-native-webrtc. Is there any example available? I've tried to configure the plugins, but I understood very little of it. Thanks!
What Platforms would this feature/enhancement affect?
iOS
Alternatives/Workarounds
no workaround
Additional information
- [X] I agree to follow this project's Code of Conduct
- [X] I searched for similar feature requests in this repository and found none.
What's the use case exactly? I.e., why do you need vision-camera to work with webrtc?
I want to use vision-camera alongside react-native-webrtc for two reasons: 1) to capture an image every 60 seconds, and 2) to keep the camera focused on the user's face at all times.
react-native-vision-camera is probably a great library to use for both of those:
- capture every 60 seconds -- seems like you could just run a timer, like a `setTimeout()`?
- maintain focus on the face -- maybe this combo (rough sketch after this list):
  - this face-detection frame processor plugin -- https://github.com/rodgomesc/vision-camera-face-detector
  - use it to pull the location of the face (bounds) -- https://github.com/rodgomesc/vision-camera-face-detector/blob/master/src/index.ts
  - set the focus of the camera to that location -- https://react-native-vision-camera.com/docs/guides/focusing
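Here's a rough, untested sketch of how that combo could look. It assumes vision-camera v2-style frame processors, `scanFaces()` from vision-camera-face-detector, and `runOnJS` from react-native-reanimated; the exact shape of the face bounds and the mapping from frame coordinates to view points may need adjusting for your setup:

```tsx
// Rough sketch (untested): detect a face on each frame and re-focus the camera on it.
import React, { useRef, useCallback } from 'react';
import { runOnJS } from 'react-native-reanimated';
import { Camera, useCameraDevices, useFrameProcessor } from 'react-native-vision-camera';
import { scanFaces, Face } from 'vision-camera-face-detector';

export function FaceFocusCamera() {
  const camera = useRef<Camera>(null);
  const devices = useCameraDevices();
  const device = devices.front;

  const focusOnFace = useCallback((face: Face) => {
    // Focus on the (approximate) center of the detected face.
    // face.bounds is in frame coordinates; you may need to map it to view points,
    // and the exact bounds field names can differ between plugin versions.
    const x = face.bounds.x + face.bounds.width / 2;
    const y = face.bounds.y + face.bounds.height / 2;
    camera.current?.focus({ x, y }).catch(() => {});
  }, []);

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const faces = scanFaces(frame);
    if (faces.length > 0) {
      // Hop back to the JS thread to call the Camera ref.
      runOnJS(focusOnFace)(faces[0]);
    }
  }, [focusOnFace]);

  if (device == null) return null;
  return (
    <Camera
      ref={camera}
      style={{ flex: 1 }}
      device={device}
      isActive={true}
      frameProcessor={frameProcessor}
      frameProcessorFps={5}
    />
  );
}
```

In practice you'd also want to throttle the focus calls (e.g. only re-focus when the face has moved noticeably) instead of refocusing on every detection.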
Check out the example app: https://github.com/mrousavy/react-native-vision-camera/tree/main/package/example And the docs site: https://react-native-vision-camera.com/docs/guides/
I'm still not clear on why you need WebRTC. If you're just trying to send the "every 60 seconds" photos over WebRTC, then I don't think you need anything special. You could just pull the photo and send it via react-native-webrtc. Here's how you get the photo's data: https://react-native-vision-camera.com/docs/guides/taking-photos#getting-the-photos-data and you'd probably just open a data channel to send it to the peer.
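Something like this (untested sketch) would cover the "every 60 seconds" part. It assumes an already-negotiated `RTCPeerConnection` from react-native-webrtc and uses react-native-fs to read the photo as base64 -- swap in whatever file-reading approach you prefer:

```tsx
// Rough sketch (untested): take a photo every 60 seconds and push it over a WebRTC data channel.
import { useEffect, RefObject } from 'react';
import { Camera } from 'react-native-vision-camera';
import RNFS from 'react-native-fs';
import { RTCPeerConnection } from 'react-native-webrtc';

export function usePeriodicSnapshots(
  camera: RefObject<Camera>,
  peerConnection: RTCPeerConnection,
  intervalMs = 60_000,
) {
  useEffect(() => {
    // Open a dedicated data channel for the snapshots.
    const channel = peerConnection.createDataChannel('snapshots');

    const timer = setInterval(async () => {
      try {
        const photo = await camera.current?.takePhoto();
        if (photo == null) return;
        // Read the captured file and send it as base64 text.
        const base64 = await RNFS.readFile(photo.path, 'base64');
        if (channel.readyState === 'open') {
          channel.send(base64);
        }
      } catch (e) {
        console.warn('snapshot failed', e);
      }
    }, intervalMs);

    return () => {
      clearInterval(timer);
      channel.close();
    };
  }, [camera, peerConnection, intervalMs]);
}
```

For large photos you'd likely want to chunk the payload or resize the image before sending, since data channel messages have practical size limits.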
If you're trying to make react-native-vision-camera an available camera in react-native-webrtc, that's probably an ask to the react-native-webrtc folks.
Please consider sharing whatever you end up building in the open source community, through blog posts or tweets. Personally, I'm super interested to see what you do here! 🎉
(I think this issue can be closed.)
Thanks for the detailed information and the suggestions provided!
We're currently working on a video conferencing app using react-native-webrtc for streaming and viewing. Our requirement involves capturing the local camera feed every 60 seconds, which led us to consider your solution, react-native-vision-camera. We've found that react-native-webrtc doesn't inherently support this feature.
Additionally, another essential aspect for us is the ability to detect if the user is not showing their face during the call.
We'll explore the integration of the suggested frame processor for face detection from vision-camera-face-detector and see if it aligns with our requirements.
Regarding WebRTC, you mentioned a possible approach to send the photos over react-native-webrtc, which seems feasible based on the documentation you provided. However, our main focus currently lies in capturing the camera feed periodically and detecting if the user's face is visible.
We appreciate your guidance and will definitely consider sharing our progress and findings with the open-source community through blog posts or tweets. Your interest in our project is encouraging! 🎉

Can we make react-native-vision-camera work together with react-native-webrtc? If so, how?
Thanks again for your help!
Yup, WebRTC could easily make it into a Frame Processor Plugin.
It's quite simple to build, honestly. If you want me to build this, consider contracting me through my agency: https://margelo.io
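For what it's worth, the JS side of such a plugin might look roughly like this. The native `__sendFrameToWebRTC` binding is purely hypothetical here -- it would have to be implemented and registered as a native Frame Processor Plugin (Swift/Java) and wired into react-native-webrtc's capturer:

```ts
// Illustrative sketch only: JS wrapper around a hypothetical native Frame Processor Plugin.
import { Frame, useFrameProcessor } from 'react-native-vision-camera';

export function sendFrameToWebRTC(frame: Frame): void {
  'worklet';
  // @ts-expect-error: __sendFrameToWebRTC would be injected by the (hypothetical) native plugin.
  return __sendFrameToWebRTC(frame);
}

export function useWebRTCFrameProcessor() {
  return useFrameProcessor((frame) => {
    'worklet';
    sendFrameToWebRTC(frame);
  }, []);
}
```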
@mrousavy I wrote to you via the site contact form, let me know. Thanks.