react-native-vision-camera
💠Correct way to chain frame processors?
Question
I have the output from rn-fast-tflite. What is the correct way to send that data to the next (custom) frame processor? If I do the below, it causes a 4+ second delay even with no code at all in the filterDetections plugin. If I try to send 'output' from tflite directly, I get an unsupported-type error.
What I tried
export function filterDetections(frame: Frame, data) {
  'worklet'
  if (plugin == null) {
    throw new Error('Failed to load Frame Processor Plugin!')
  }
  return plugin.call(frame, data)
}
const frameProcessor = useFrameProcessor(
  (frame) => {
    'worklet'
    if (model.state !== 'loaded') {
      return
    }
    const resized = resize(frame, {
      scale: {
        width: inputSize,
        height: inputSize,
      },
      pixelFormat: 'rgb',
      dataType: dataType,
    })
    const output = model.model.runSync([resized])
    const filteredDetections = filterDetections(frame, { boxes: output[1], masks: output[2] })
  },
  [model]
)
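A possible explanation for the unsupported-type error (an assumption, not confirmed in this thread): tflite's runSync returns TypedArrays such as Float32Array, and plugin.call can only serialize plain JS values across the JSI boundary. If so, converting each TypedArray to a plain array before calling the plugin is one workaround. The values below are made up for illustration:

```typescript
// Hypothetical stand-in for the tensors returned by model.runSync():
const output: Float32Array[] = [
  new Float32Array([0.9, 0.1]),       // e.g. class scores
  new Float32Array([10, 20, 30, 40]), // e.g. box coordinates
]

// Array.from() copies a TypedArray into a plain number[], which
// plugin.call(frame, data) should be able to serialize:
const plainData = {
  boxes: Array.from(output[1]),
}
```

Note this copies the data on every frame, so for large tensors (e.g. masks) it may itself become a bottleneck.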
VisionCamera Version
beta 6
Additional information
- [X] I am using Expo
- [X] I have read the Troubleshooting Guide
- [X] I agree to follow this project's Code of Conduct
- [X] I searched for similar questions in the issues page as well as in the discussions page and found none.
You could parallelize it using runAsync.
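A sketch of that suggestion, reusing the names from the question above (model, resize, inputSize, dataType, filterDetections): wrapping the heavy inference and filtering work in runAsync moves it onto VisionCamera's async worklet context, so the camera pipeline keeps delivering frames instead of blocking for seconds. This is untested and assumes the v3 runAsync export:

```typescript
import { useFrameProcessor, runAsync } from 'react-native-vision-camera'

const frameProcessor = useFrameProcessor(
  (frame) => {
    'worklet'
    if (model.state !== 'loaded') {
      return
    }
    // Heavy work runs asynchronously; frames that arrive while a
    // previous run is still in flight are dropped rather than queued.
    runAsync(frame, () => {
      'worklet'
      const resized = resize(frame, {
        scale: { width: inputSize, height: inputSize },
        pixelFormat: 'rgb',
        dataType: dataType,
      })
      const output = model.model.runSync([resized])
      filterDetections(frame, { boxes: output[1], masks: output[2] })
    })
  },
  [model]
)
```

runAtTargetFps is another option if the goal is simply to run the model on fewer frames rather than off the main camera thread.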