
✨ RGB color space for Android

Open · ben186 opened this issue 2 years ago

What feature or enhancement are you suggesting?

It would be nice to have an RGB option for the color space, since it would make real-time inference (TFLite) with a Frame Processor much easier to implement.

What Platforms would this feature/enhancement affect?

Android

Alternatives/Workarounds

Currently, I need to convert YUV to RGB manually with RenderScript. However, RenderScript has been deprecated since Android 12.
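For context, the per-pixel math that such a conversion performs can also be written in plain Kotlin without RenderScript. This is an illustrative sketch (the function name and the choice of the BT.601 full-range formula are assumptions, not from the thread):

```kotlin
// Illustrative sketch: convert one YUV pixel to a packed ARGB Int using the
// BT.601 full-range formula. Inputs are unsigned 0..255 channel values.
fun yuvToRgb(yVal: Int, uVal: Int, vVal: Int): Int {
    val u = uVal - 128
    val v = vVal - 128
    val r = (yVal + 1.402f * v).toInt().coerceIn(0, 255)
    val g = (yVal - 0.344f * u - 0.714f * v).toInt().coerceIn(0, 255)
    val b = (yVal + 1.772f * u).toInt().coerceIn(0, 255)
    // Pack as ARGB_8888 with full alpha
    return (0xFF shl 24) or (r shl 16) or (g shl 8) or b
}
```

In a real YUV_420_888 frame you would additionally have to walk the three image planes with their row and pixel strides before applying this formula per pixel.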

Additional information

ben186 avatar Jun 03 '22 16:06 ben186

> It would be nice to have RGB option for color space

I'm getting that from the native Android camera. If they don't have it, I can't add it 🤷

mrousavy avatar Jun 13 '22 08:06 mrousavy

Ah, the good news is that the output format can now be set to RGBA, since CameraX v1.1.0: https://developer.android.com/reference/androidx/camera/core/ImageAnalysis.Builder#setOutputImageFormat(int)
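For illustration, a minimal sketch of requesting RGBA output from CameraX (1.1.0+) via the `setOutputImageFormat` API linked above; the surrounding use-case wiring is omitted:

```kotlin
import androidx.camera.core.ImageAnalysis

// Sketch: ask CameraX to deliver RGBA_8888 frames directly to the analyzer,
// so no manual YUV -> RGB conversion is needed.
val analysis = ImageAnalysis.Builder()
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
    .build()
```

With this set, `ImageProxy` instances passed to the analyzer contain a single RGBA plane instead of the three YUV planes.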

ben186 avatar Jun 13 '22 09:06 ben186

Hey - unfortunately this worked for a while in V3 (I think 3.0.0), but it no longer works because YUV is just much more efficient and natively supported.

mrousavy avatar Sep 30 '23 09:09 mrousavy

Let's leave this open; maybe a future API will solve this.

mrousavy avatar Sep 30 '23 09:09 mrousavy

Hey - I recently published vision-camera-resize-plugin, which lets you very efficiently convert Frames to RGB. This is the same method CameraX uses, but it's much more flexible because you can convert to ARGB, RGB, RGBA, BGRA, etc.

mrousavy avatar Jan 30 '24 15:01 mrousavy

> Hey - I recently published vision-camera-resize-plugin, which allows you to really efficiently convert Frames to RGB - this is the same method that CameraX also uses, but much more flexible because you can convert to ARGB, RGB, RGBA, BGRA, etc.

Hi @mrousavy, I'm a big fan! I'm struggling with preprocessing (resize/conversion) speeds as well. Is there a way to use the resize plugin and then funnel that result into my own FrameProcessorPlugin? I'm trying to get an ONNX version of ML pipelining working, and this would be greatly helpful.

moghtader avatar Jan 30 '24 22:01 moghtader

Hey - thanks! :)

Yes, resize first on the JS side, then pass the resulting buffer to your native plugin as a parameter. I can see how this might be a bit confusing since you're not passing the Frame anymore, but instead are passing the array buffer:

useFrameProcessor((frame) => {
  'worklet'

  const resized = resize(frame, { ... })

  const result = myPlugin(frame, { resizedBuffer: resized.buffer })
}, [])

And on the native side:

override fun callback(frame: Frame, params: Map<String, Any>?): Any? {
  // you can ignore `frame` as that's the original frame

  // get the resizedBuffer as a SharedArray (which is an ArrayBuffer in JS)
  val resizedFrame = params?.get("resizedBuffer") as? SharedArray
  // ... run your inference on resizedFrame's bytes here ...
  return null
}
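Once you have the buffer on the native side, you can index pixels out of it directly. A hypothetical helper (the function name and the assumption of a tightly packed RGB uint8 layout, which the resize plugin can produce, are illustrative):

```kotlin
import java.nio.ByteBuffer

// Hypothetical helper: read the (r, g, b) channels of pixel (x, y) from a
// tightly packed RGB uint8 buffer of the given width in pixels.
// `and 0xFF` converts the signed Byte back to an unsigned 0..255 value.
fun rgbAt(buffer: ByteBuffer, width: Int, x: Int, y: Int): Triple<Int, Int, Int> {
    val i = (y * width + x) * 3
    return Triple(
        buffer.get(i).toInt() and 0xFF,
        buffer.get(i + 1).toInt() and 0xFF,
        buffer.get(i + 2).toInt() and 0xFF
    )
}
```

From there you can copy or hand the buffer to your TFLite/ONNX input tensor in whatever layout the model expects.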

mrousavy avatar Jan 31 '24 09:01 mrousavy