How to rewrite the FaceEffectApp example to pass an ROI to the graph?
Currently, I am using the face_effect_gpu subgraph and code based on the FaceEffectApp example to do face detection for an iOS project. It works great, except in cases where the face is far away from the camera and occupies only a small portion of the image.
We could pre- and post-process the image ourselves, but we are looking for the cleanest solution. Could any maintainer advise on the simplest way to modify the FaceEffectApp example so that we can pass a normalized ROI to the graph?
It doesn't seem like the other examples pass a side packet with a NormalizedRect anywhere.
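One possible approach (a minimal sketch, not something taken from the existing examples) is to declare an extra input stream on the top-level graph and crop each frame to the caller-supplied ROI before the face pipeline runs, so a distant face fills more of the detector's input. The snippet below assumes MediaPipe's ImageCroppingCalculator with its IMAGE_GPU/NORM_RECT tags; the stream names (input_video, roi, cropped_video) are placeholders, and the rest of the face_effect_gpu wiring is omitted.

```
# Sketch of a top-level graph that crops to a caller-supplied ROI.
# Stream names are placeholders, not the real face_effect_gpu wiring.
input_stream: "input_video"   # GpuBuffer camera frames
input_stream: "roi"           # mediapipe::NormalizedRect for each frame

# Crop the frame to the ROI before detection.
node {
  calculator: "ImageCroppingCalculator"
  input_stream: "IMAGE_GPU:input_video"
  input_stream: "NORM_RECT:roi"
  output_stream: "IMAGE_GPU:cropped_video"
}

# The existing face_effect_gpu nodes would then read "cropped_video" instead
# of "input_video", and any landmarks or effect geometry would need to be
# mapped back into full-frame coordinates afterwards.
```

On the iOS side, the app would then send a NormalizedRect packet into the "roi" stream for each frame, stamped with the same timestamp as the frame (for example via the sendPacket:intoStream:error: method on the MPPGraph wrapper used by the iOS examples).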
Hi @dennislysenko, add a new input stream whose packets specify the texture resource path, and add a private variable to store the current texture (it can be empty by design). Remove the texture setting from the input side packet. In the pbtxt, use a SyncSetInputStreamHandler for your new calculator and separate the original input streams and the new one into two sync sets.
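A rough pbtxt sketch of that input-stream-handler setup might look like the following; the calculator and stream names here are placeholders, and the options syntax assumes MediaPipe's SyncSetInputStreamHandler with its SyncSetInputStreamHandlerOptions.

```
node {
  calculator: "YourFaceEffectCalculator"     # placeholder name
  input_stream: "IMAGE_GPU:input_video"      # original frame stream
  input_stream: "TEXTURE_PATH:texture_path"  # new stream carrying the texture resource path
  output_stream: "IMAGE_GPU:output_video"
  input_stream_handler {
    input_stream_handler: "SyncSetInputStreamHandler"
    options {
      [mediapipe.SyncSetInputStreamHandlerOptions.ext] {
        # Keep the frame stream and the new stream in separate sync sets so
        # frames are not blocked waiting for a texture-path packet.
        sync_set { tag_index: "IMAGE_GPU" }
        sync_set { tag_index: "TEXTURE_PATH" }
      }
    }
  }
}
```

With the two sync sets, the calculator's Process() can cache the most recently received texture path in the private member variable (empty until the first packet arrives) and keep processing frames in the meantime.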
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.