react-native-vision-camera
✨ RGB color space for Android
What feature or enhancement are you suggesting?
It would be nice to have an RGB option for the color space, since it would make real-time inference (TFLite) with a Frame Processor much easier to implement.
What platforms would this feature/enhancement affect?
Android
Alternatives/Workarounds
Currently, I need to convert YUV to RGB manually with RenderScript. However, RenderScript has been deprecated since Android 12.
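For context, here's a minimal sketch of that deprecated RenderScript path - assuming the frame data arrives as an NV21 byte array and you have a frame-sized ARGB_8888 Bitmap to write into:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.renderscript.Allocation
import android.renderscript.Element
import android.renderscript.RenderScript
import android.renderscript.ScriptIntrinsicYuvToRGB

// Deprecated since Android 12, but this is the classic YUV -> RGB path.
// `yuvBytes` (NV21) and `bitmap` (ARGB_8888, frame-sized) are assumed inputs.
fun yuvToRgb(context: Context, yuvBytes: ByteArray, bitmap: Bitmap) {
    val rs = RenderScript.create(context)
    val script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))

    // input allocation holds the raw YUV bytes
    val input = Allocation.createSized(rs, Element.U8(rs), yuvBytes.size)
    // output allocation is backed by the RGB bitmap
    val output = Allocation.createFromBitmap(rs, bitmap)

    input.copyFrom(yuvBytes)
    script.setInput(input)
    script.forEach(output) // run the conversion kernel
    output.copyTo(bitmap)  // write the RGB pixels back into the bitmap

    rs.destroy()
}
```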
Additional information
- [X] I agree to follow this project's Code of Conduct
- [X] I searched for similar feature requests in this repository and found none.
> It would be nice to have RGB option for color space
I'm getting that from the native Android camera. If they don't have it, I can't add it 🤷
Ah, the good news is that the output format can now be set to RGBA since CameraX v1.1.0: https://developer.android.com/reference/androidx/camera/core/ImageAnalysis.Builder#setOutputImageFormat(int)
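For reference, a minimal sketch of what that API looks like in plain CameraX (assuming CameraX 1.1.0+; the analyzer body is illustrative):

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Ask CameraX to hand the analyzer RGBA_8888 frames directly,
// instead of the default YUV_420_888.
val imageAnalysis = ImageAnalysis.Builder()
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor()) { image: ImageProxy ->
    // with RGBA_8888 output there is a single plane of packed RGBA pixels
    val rgbaBuffer = image.planes[0].buffer
    // ... feed rgbaBuffer into TFLite here ...
    image.close()
}
```

With `OUTPUT_IMAGE_FORMAT_RGBA_8888`, the analyzer receives one packed RGBA plane instead of the three separate YUV_420_888 planes.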
Hey - unfortunately this worked for some time in V3 (I think 3.0.0), but it no longer works because YUV is just much more efficient and natively supported.
Let's leave this open - maybe there's gonna be a future API that solves this.
Hey - I recently published vision-camera-resize-plugin, which allows you to really efficiently convert Frames to RGB - this is the same method that CameraX also uses, but much more flexible because you can convert to ARGB, RGB, RGBA, BGRA, etc.
Hi @mrousavy, I'm a big fan! I'm struggling with preprocessing (resizing/conversion) speeds as well. Is there a way to use the resize plugin and then funnel that result into my own FrameProcessorPlugin? I'm trying to get an ONNX-based ML pipeline working, and this would be greatly helpful.
Hey - thanks! :)
Yes, resize first on the JS side, then pass the resulting buffer to your native plugin as a parameter. I can see how this might be a bit confusing since you're not passing the Frame anymore, but instead passing the array buffer:
```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const resized = resize(frame, { ... })
  const result = myPlugin(frame, { resizedBuffer: resized.buffer })
}, [])
```
And on the native side:
```kotlin
override fun callback(frame: Frame, params: Map<String, Any>?): Any? {
  // you can ignore `frame` here - that's the original, full-size frame
  // get the resizedBuffer parameter as a SharedArray (an ArrayBuffer in JS)
  val resizedFrame = params?.get("resizedBuffer") as? SharedArray
  // ... run your processing on resizedFrame here ...
  return null
}
```
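From there, a hedged sketch of feeding those bytes into ONNX Runtime - assuming the JS side resized to 192x192 RGB uint8, that `SharedArray` exposes its contents as a `ByteBuffer`, and a model input named `"input"` (all of which you should verify against your model and the VisionCamera docs):

```kotlin
import ai.onnxruntime.OnnxJavaType
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession

// `session` is an OrtSession created elsewhere from your .onnx model;
// `SharedArray` comes from the VisionCamera frame-processor package.
fun runInference(session: OrtSession, resizedFrame: SharedArray): OrtSession.Result {
    val env = OrtEnvironment.getEnvironment()
    // assumed: SharedArray exposes a direct ByteBuffer over the resized pixels
    val buffer = resizedFrame.byteBuffer
    // assumed NHWC uint8 shape matching the resize options on the JS side
    val shape = longArrayOf(1, 192, 192, 3)
    val tensor = OnnxTensor.createTensor(env, buffer, shape, OnnxJavaType.UINT8)
    // "input" is a hypothetical input name - check your model's actual inputs
    return session.run(mapOf("input" to tensor))
}
```

Make sure the `pixelFormat`/`dataType` you pass to the resize plugin match the tensor type you declare here, otherwise the model will read garbage.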