processor.onNewFrame(bitmap, long): JNI DETECTED ERROR IN APPLICATION: GetStringUTFChars received NULL jstring
I'm seeing the following crash on the very first frame after app launch, triggered by this call: `processor.onNewFrame(bitmap, System.currentTimeMillis())`.
I integrated pose detection according to the documentation.
1. Initialization:

```kotlin
try {
    processor = FrameProcessor(activity, "pose_tracking_gpu.binarypb")
    processor?.addPacketCallback("pose_landmarks", object : PacketCallback {
        override fun process(packet: Packet?) {
            try {
                Log.e("aaaa", "process: ${PacketGetter.getString(packet)}")
            } catch (exception: Exception) {
            }
        }
    })
} catch (e: Exception) {
    e.printStackTrace()
}
```
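Note that `pose_landmarks` carries a landmark proto rather than a string, so `PacketGetter.getString(packet)` is unlikely to work here. A minimal sketch of reading it as a proto instead, assuming the `LandmarkProto` classes bundled in the AAR and a MediaPipe version that provides `PacketGetter.getProto(packet, parser)`:

```kotlin
import com.google.mediapipe.formats.proto.LandmarkProto

processor?.addPacketCallback("pose_landmarks") { packet ->
    // Sketch: parse the packet as a NormalizedLandmarkList proto, not a string.
    val landmarks = PacketGetter.getProto(packet, LandmarkProto.NormalizedLandmarkList.parser())
    Log.d("pose", "received ${landmarks.landmarkCount} landmarks")
}
```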
2. Fetch the frame data in the USB camera callback and convert it to a Bitmap via OpenCV:

```kotlin
val cacheBitmap = Bitmap.createBitmap(
    bgMat.cols(),
    bgMat.rows(),
    Bitmap.Config.ARGB_8888
)
Utils.matToBitmap(bgMat, cacheBitmap)
processor?.onNewFrame(cacheBitmap, System.currentTimeMillis())
```

3. Result: calling `processor?.onNewFrame(cacheBitmap, System.currentTimeMillis())` produces the following log:
```
2022-06-21 15:36:22.193 5462-5462/? E/[email protected]: Could not get passthrough implementation for [email protected]::ICameraProvider/legacy/0.
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] JNI DETECTED ERROR IN APPLICATION: GetStringUTFChars received NULL jstring
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] in call to GetStringUTFChars
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] from void com.google.mediapipe.framework.Graph.nativeMovePacketToInputStream(long, java.lang.String, long, long)
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] "Thread-17" prio=5 tid=48 Runnable
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] | group="main" sCount=0 dsCount=0 flags=0 obj=0x12f80110 self=0x8cbe8200
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] | sysTid=5460 nice=0 cgrp=default sched=0/0 handle=0x8aea5970
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] | state=R schedstat=( 281723173 123833911 470 ) utm=19 stm=8 core=3 HZ=100
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] | stack=0x8adaa000-0x8adac000 stackSize=1010KB
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] | held mutexes= "mutator lock"(shared held)
2022-06-21 15:36:22.269 30174-5460/com.elevatorbus.feco A/levatorbus.fec: java_vm_ext.cc:542] native: #00 pc 002e0cf3 /system/lib/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits
```
Hi @gavel94 , Make sure to set the input stream for CPU via `frameProcessor.setVideoInputStreamCpu("input_video")`, where `input_video` should be the name of the input stream of your graph. That property isn't passed in the constructor of `FrameProcessor`; only the GPU one is.
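For reference, a minimal sketch of that setup, reusing the constructor call from the snippet above (the `input_video` name is an assumption that must match the graph definition):

```kotlin
// Sketch: register a CPU input stream so that onNewFrame(Bitmap, long)
// has a destination stream for its ImageFrame packets.
processor = FrameProcessor(activity, "pose_tracking_gpu.binarypb")
processor?.setVideoInputStreamCpu("input_video")
```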
After I set `frameProcessor.setVideoInputStreamCpu("input_video")`, I got the following log. What other parameters can be set here?

```
2022-06-24 14:09:21.771 5027-5649/com.elevatorbus.feco E/FrameProcessor: Mediapipe error:
com.google.mediapipe.framework.MediaPipeException: invalid argument: Graph has errors:
Packet type mismatch on calculator outputting to stream "input_video": The Packet stores "mediapipe::ImageFrame", but "mediapipe::GpuBuffer" was requested.
    at com.google.mediapipe.framework.Graph.nativeMovePacketToInputStream(Native Method)
    at com.google.mediapipe.framework.Graph.addConsumablePacketToInputStream(Graph.java:395)
    at com.google.mediapipe.components.FrameProcessor.onNewFrame(FrameProcessor.java:511)
    at com.jiangdg.usbcamera.USBCameraFragment.openUsbCamera$lambda-2(USBCameraFragment.kt:176)
    at com.jiangdg.usbcamera.USBCameraFragment.lambda$CZkLI_fuImic5WnGI_UKiHM7mo0(Unknown Source:0)
    at com.jiangdg.usbcamera.-$$Lambda$USBCameraFragment$CZkLI_fuImic5WnGI_UKiHM7mo0.onPreviewResult(Unknown Source:2)
    at com.serenegiant.usb.common.AbstractUVCCameraHandler$CameraThread$3.onFrame(AbstractUVCCameraHandler.java:826)
```
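The mismatch indicates that `input_video` in the `pose_tracking_gpu` graph is a GPU stream expecting `GpuBuffer` packets, while `onNewFrame(Bitmap, long)` produces CPU `ImageFrame` packets. A hedged sketch of the GPU input path used by the official MediaPipe Android examples (constructor arguments and stream names are assumptions that may vary by release):

```kotlin
import com.google.mediapipe.components.ExternalTextureConverter
import com.google.mediapipe.components.FrameProcessor
import com.google.mediapipe.glutil.EglManager

// Sketch: feed the GPU graph GpuBuffer packets via an OpenGL texture
// converter, as the official examples do, instead of CPU Bitmaps.
val eglManager = EglManager(null)
val processor = FrameProcessor(
    context,                      // assumption: an Activity or application Context
    eglManager.nativeContext,     // share the EGL context with the graph
    "pose_tracking_gpu.binarypb",
    "input_video",                // GPU input stream of the graph
    "output_video"                // assumed output stream name
)
val converter = ExternalTextureConverter(eglManager.context)
converter.setConsumer(processor)  // converter delivers TextureFrames to the graph
```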
Hi @gavel94 , could you provide your code changes so we can investigate this issue further?
Following the documentation, all of my MediaPipe-related code is shown above. Here is the full source of my module:

```kotlin
package com.jiangdg.usbcamera
import android.graphics.Bitmap
import com.serenegiant.usb.CameraDialog.CameraDialogParent
import com.serenegiant.usb.widget.CameraViewInterface
import android.hardware.usb.UsbDevice
import android.widget.Toast
import com.jiangdg.usbcamera.UVCCameraHelper.OnMyDevConnectListener
import android.os.Looper
import android.os.Bundle
import android.util.Log
import android.view.*
import android.widget.ArrayAdapter
import android.widget.AdapterView
import android.widget.ListView
import androidx.appcompat.app.AlertDialog
import androidx.fragment.app.Fragment
import com.google.mediapipe.components.FrameProcessor
import com.google.mediapipe.framework.Packet
import com.google.mediapipe.framework.PacketCallback
import com.google.mediapipe.framework.PacketGetter
import com.jiangdg.libusbcamera.R
import com.jiangdg.libusbcamera.databinding.FragmentUsbcameraBinding
import com.jiangdg.usbcamera.utils.FileUtils
import com.serenegiant.usb.USBMonitor
import com.serenegiant.utils.LogUtils
import org.opencv.android.BaseLoaderCallback
import org.opencv.android.LoaderCallbackInterface
import org.opencv.android.OpenCVLoader
import org.opencv.android.Utils
import org.opencv.core.*
import org.opencv.imgproc.Imgproc
import org.opencv.video.BackgroundSubtractorMOG2
import org.opencv.video.Video
import java.util.ArrayList
import java.util.stream.Collectors
import java.util.stream.Stream
import kotlin.math.abs
/**
 * UVCCamera use demo
 */
public class USBCameraFragment : Fragment(), CameraDialogParent, CameraViewInterface.Callback {
private lateinit var mBinding: FragmentUsbcameraBinding
private lateinit var mCameraHelper: UVCCameraHelper
private var mBaseBitmapFlag = false
private var mDialog: AlertDialog? = null
private var isRequest = false
private var isPreview = false
// private val mCameraWidth = 320
// private val mCameraHeight = 240
private val mCameraWidth = 1920
private val mCameraHeight = 1080
private val uSBDevInfo: List<DeviceInfo>?
    get() {
        if (mCameraHelper == null) return null
        val devInfos: MutableList<DeviceInfo> = ArrayList()
        val list = mCameraHelper!!.usbDeviceList
        for (dev in list) {
            val info = DeviceInfo()
            info.pid = dev.vendorId
            info.vid = dev.productId
            devInfos.add(info)
        }
        return devInfos
    }
private lateinit var standHist: Mat
private var mLoaderCallback: BaseLoaderCallback? = null
private lateinit var backgroundSubtractorMOG2: BackgroundSubtractorMOG2
private var mStudy = true
private var mReady = false
private val mMinPerimeter = 200
private val mMinContourArea = 1000
private var mDetectCallBack: DetectCallBack? = null
private lateinit var bgMat: Mat
private var processor:FrameProcessor? = null
public fun updateStatus(study: Boolean) {
mStudy = study
}
public fun addDetectCallBack(callBack: DetectCallBack) {
mDetectCallBack = callBack
}
private fun openUsbCamera() {
// step.1 initialize UVCCameraHelper
mBinding.cameraView.setCallback(this)
mCameraHelper = UVCCameraHelper.getInstance(mCameraWidth, mCameraHeight)
mCameraHelper.initUSBMonitor(activity, mBinding.cameraView, listener)
// mCameraHelper.setDefaultFrameFormat(UVCCameraHelper.FRAME_FORMAT_MJPEG)
// mBinding.cameraView.setOnClickListener {
//     mStudy = !mStudy
//     showResolutionListDialog()
// }
mCameraHelper.setOnPreviewFrameListener { data ->
val inputMat = Mat(
mCameraHeight + mCameraHeight / 2,
mCameraWidth,
CvType.CV_8UC1
)
inputMat.put(0, 0, data)
val frameMat = Mat()
Imgproc.cvtColor(inputMat, frameMat, Imgproc.COLOR_YUV420p2GRAY)
Log.i("USB", "study = $mStudy ready = $mReady")
if (mStudy || !mReady) {
backgroundSubtractorMOG2.apply(frameMat, bgMat)
if (!mReady){
val contours = arrayListOf<MatOfPoint>()
Imgproc.findContours(
bgMat,
contours,
Mat(),
Imgproc.RETR_EXTERNAL,
Imgproc.CHAIN_APPROX_SIMPLE
)
mReady = contours.size == 0
}
} else {
backgroundSubtractorMOG2.apply(frameMat, bgMat, 0.0)
val contours = arrayListOf<MatOfPoint>()
Imgproc.findContours(
bgMat,
contours,
Mat(),
Imgproc.RETR_EXTERNAL,
Imgproc.CHAIN_APPROX_SIMPLE
)
for (contour: MatOfPoint in contours) {
val matOfPoint2f = MatOfPoint2f()
contour.convertTo(matOfPoint2f, CvType.CV_32F)
val perimeter = Imgproc.arcLength(matOfPoint2f, true)
val contourArea = Imgproc.contourArea(matOfPoint2f, true)
if (perimeter > mMinPerimeter) {
// if (abs(contourArea) > mMinContourArea) {
//     Log.i("USB", "perimeter = $perimeter")
//     Log.i("USB", "contourArea = $contourArea")
// }
// mDetectCallBack?.onHitTheTarget(perimeter, abs(contourArea))
if (null != mDetectCallBack) {
    if (mDetectCallBack!!.onHitTheTarget(perimeter, abs(contourArea))) {
        break
    }
}
}
}
}
val cacheBitmap = Bitmap.createBitmap(
bgMat.cols(),
bgMat.rows(),
Bitmap.Config.ARGB_8888
)
Utils.matToBitmap(bgMat, cacheBitmap)
processor?.onNewFrame(cacheBitmap,System.currentTimeMillis())
activity?.runOnUiThread {
mBinding.ivStand.setImageBitmap(cacheBitmap)
}
// if (!mBaseBitmapFlag) {
//     LogUtils.e("$mCameraWidth $mCameraHeight")
//     mBaseBitmapFlag = true
////     val mat = Mat(
////         mCameraHeight + mCameraHeight / 2,
////         mCameraWidth,
////         CvType.CV_8UC1
////     )
////     mat.put(0, 0, data)
//
//     val cacheBitmap = Bitmap.createBitmap(
//         inputMat.cols(),
//         inputMat.rows(),
//         Bitmap.Config.ARGB_8888
//     )
//     mat2compare(inputMat, standHist)
//     Utils.matToBitmap(inputMat, cacheBitmap)
//     activity?.runOnUiThread {
//         mBinding.ivStand.setImageBitmap(cacheBitmap)
//     }
// } else {
//     val resultMat = Mat()
//     mat2compare(inputMat, resultMat)
//
//     val correl = Imgproc.compareHist(standHist.clone(), resultMat, Imgproc.HISTCMP_CORREL)
//     val chisqr = Imgproc.compareHist(standHist.clone(), resultMat, Imgproc.HISTCMP_CHISQR)
//     val intersect = Imgproc.compareHist(standHist.clone(), resultMat, Imgproc.HISTCMP_INTERSECT)
//     val bhattacharyya = Imgproc.compareHist(standHist.clone(), resultMat, Imgproc.HISTCMP_BHATTACHARYYA)
//
////     LogUtils.i("compare info correl = $correl chisqr = $chisqr intersect = $intersect bhattacharyya = $bhattacharyya")
//     activity?.runOnUiThread {
//         val builder = StringBuilder()
//         builder.append("correl = $correl")
//         builder.append(System.lineSeparator())
//         builder.append("chisqr = $chisqr")
//         builder.append(System.lineSeparator())
//         builder.append("intersect = $intersect")
//         builder.append(System.lineSeparator())
//         builder.append("bhattacharyya = $bhattacharyya")
//         mBinding.tvCompare.text = builder
//     }
// }
}
// step.2 register USB event broadcast
if (mCameraHelper != null) {
mCameraHelper.registerUSB()
}
}
fun mat2compare2(inputMat: Mat, resultMat: Mat) {
// var hvsMat = Mat()
// grayscale conversion
// Imgproc.cvtColor(inputMat, hvsMat, Imgproc.COLOR_YUV2GRAY_UYNV)
// histogram calculation
Imgproc.calcHist(
    Stream.of(inputMat).collect(Collectors.toList()),
    MatOfInt(0),
    Mat(),
    resultMat,
    MatOfInt(255),
    MatOfFloat(0.0F, 256.0F)
)
// normalize the image
Core.normalize(
resultMat,
resultMat,
1.0,
resultMat.rows().toDouble(),
Core.NORM_MINMAX,
-1,
Mat()
)
}
fun mat2compare(mat: Mat, resultMat: Mat) {
var hvsMat = Mat()
var bgrMat = Mat()
// Imgproc.cvtColor(mat, hvsMat, Imgproc.COLOR_YUV420p2GRAY)
Imgproc.cvtColor(mat, bgrMat, Imgproc.COLOR_YUV420p2BGR)
Imgproc.cvtColor(bgrMat, hvsMat, Imgproc.COLOR_BGR2HSV)
// Imgproc.GaussianBlur()
// histogram calculation
Imgproc.calcHist(
    Stream.of(hvsMat).collect(Collectors.toList()),
    MatOfInt(0),
    Mat(),
    resultMat,
    MatOfInt(255),
    MatOfFloat(0.0F, 256.0F)
)
// normalize the image
Core.normalize(
    resultMat,
    resultMat,
    0.0,
    resultMat.rows().toDouble(),
    Core.NORM_MINMAX,
    -1,
    Mat()
)
}
private fun popCheckDevDialog() {
val infoList = uSBDevInfo
if (infoList == null || infoList.isEmpty()) {
Toast.makeText(activity, "Find devices failed.", Toast.LENGTH_SHORT)
.show()
return
}
val dataList: MutableList<String> = ArrayList()
for (deviceInfo in infoList) {
dataList.add("Device:PID_" + deviceInfo.pid + " & " + "VID_" + deviceInfo.vid)
}
AlertCustomDialog.createSimpleListDialog(
    activity,
    "Please select USB device",
    dataList
) { position -> mCameraHelper.requestPermission(position) }
}
private val listener: OnMyDevConnectListener = object : OnMyDevConnectListener {
override fun onAttachDev(device: UsbDevice) {
// request open permission
if (!isRequest) {
isRequest = true
popCheckDevDialog()
}
}
override fun onDettachDev(device: UsbDevice) {
// close camera
if (isRequest) {
isRequest = false
mCameraHelper!!.closeCamera()
showShortMsg(device.deviceName + " is out")
}
}
override fun onConnectDev(device: UsbDevice, isConnected: Boolean) {
if (!isConnected) {
showShortMsg("fail to connect,please check resolution params")
isPreview = false
} else {
isPreview = true
showShortMsg("connecting")
// initialize seekbar
// need to wait UVCCamera initialize over
Thread {
    // try {
    //     Thread.sleep(100);
    // } catch (InterruptedException e) {
    //     e.printStackTrace();
    // }
    Looper.prepare()
    if (mCameraHelper != null && mCameraHelper!!.isCameraOpened) {
    }
    Looper.loop()
}.start()
}
}
override fun onDisConnectDev(device: UsbDevice) {
showShortMsg("disconnecting")
}
}
override fun onCreateView(
inflater: LayoutInflater,
container: ViewGroup?,
savedInstanceState: Bundle?
): View {
mBinding = FragmentUsbcameraBinding.inflate(layoutInflater)
initData()
initView()
return mBinding.root
}
private fun initData() {
try {
    processor = FrameProcessor(activity, "pose_tracking_gpu.binarypb")
    processor?.setVideoInputStreamCpu("input_video")
    processor?.addPacketCallback("pose_landmarks", object : PacketCallback {
        override fun process(packet: Packet?) {
            try {
                Log.e("aaaa", "process: ${PacketGetter.getString(packet)}")
            } catch (exception: Exception) {
            }
        }
    })
} catch (e: Exception) {
    e.printStackTrace()
}
}
private fun initView() {}
override fun onStart() {
super.onStart()
mLoaderCallback = object : BaseLoaderCallback(requireActivity()) {
override fun onManagerConnected(status: Int) {
when (status) {
SUCCESS -> {
LogUtils.i("OpenCV loaded successfully")
standHist = Mat()
backgroundSubtractorMOG2 =
Video.createBackgroundSubtractorMOG2(500, 128.0, true)
bgMat = Mat()
openUsbCamera()
}
else -> {
super.onManagerConnected(status)
}
}
}
}
}
override fun onResume() {
super.onResume()
if (!OpenCVLoader.initDebug()) {
LogUtils.i("Internal OpenCV library not found. Using OpenCV Manager for initialization")
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, requireActivity(), mLoaderCallback)
} else {
LogUtils.i("OpenCV library found inside package. Using it!")
mLoaderCallback?.onManagerConnected(LoaderCallbackInterface.SUCCESS)
}
}
override fun onStop() {
super.onStop()
// step.3 unregister USB event broadcast
if (mCameraHelper != null) {
mCameraHelper.unregisterUSB()
}
}
private fun showResolutionListDialog() {
val builder = AlertDialog.Builder(requireActivity())
val rootView =
LayoutInflater.from(activity).inflate(R.layout.layout_dialog_list, null)
val listView = rootView.findViewById<View>(R.id.listview_dialog) as ListView
val adapter = ArrayAdapter(
requireActivity(),
android.R.layout.simple_list_item_1,
resolutionList
)
if (adapter != null) {
listView.adapter = adapter
}
listView.onItemClickListener =
AdapterView.OnItemClickListener { adapterView, view, position, id ->
if (!mCameraHelper.isCameraOpened) return@OnItemClickListener
val resolution = adapterView.getItemAtPosition(position) as String
val tmp = resolution.split("x").toTypedArray()
if (tmp.size >= 2) {
    val width = Integer.valueOf(tmp[0])
    val height = Integer.valueOf(tmp[1])
    mCameraHelper.updateResolution(width, height)
}
mDialog!!.dismiss()
}
builder.setView(rootView)
mDialog = builder.create()
mDialog!!.show()
}
// example: {640x480,320x240,etc}
private val resolutionList: List<String>
private get() {
val list = mCameraHelper.supportedPreviewSizes
var resolutions = mutableListOf<String>()
if (list != null && list.size != 0) {
for (size in list) {
if (size != null) {
resolutions.add(size.width.toString() + "x" + size.height)
}
}
}
return resolutions
}
override fun onDestroy() {
super.onDestroy()
FileUtils.releaseFile()
// step.4 release uvc camera resources
mCameraHelper.release()
}
private fun showShortMsg(msg: String) {
Toast.makeText(requireActivity(), msg, Toast.LENGTH_SHORT).show()
}
override fun getUSBMonitor(): USBMonitor {
return mCameraHelper.usbMonitor
}
override fun onDialogResult(canceled: Boolean) {
if (canceled) {
showShortMsg("取消操作")
}
}
val isCameraOpened: Boolean
get() = mCameraHelper.isCameraOpened
override fun onSurfaceCreated(view: CameraViewInterface, surface: Surface) {
if (!isPreview && mCameraHelper.isCameraOpened) {
mCameraHelper.startPreview(mBinding.cameraView)
isPreview = true
}
}
override fun onSurfaceChanged(
view: CameraViewInterface,
surface: Surface,
width: Int,
height: Int
) {
}
override fun onSurfaceDestroy(view: CameraViewInterface, surface: Surface) {
if (isPreview && mCameraHelper.isCameraOpened) {
mCameraHelper.stopPreview()
isPreview = false
}
}
companion object {
@JvmStatic
fun newInstance(bundle: Bundle) = USBCameraFragment().apply { arguments = bundle }
}
init {
System.loadLibrary("mediapipe_jni")
}
} """
Here is my AAR packaging script; I don't know if it helps with the problem. Thank you very much for your help.

```
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")

mediapipe_aar(
    name = "mediapipe_pose_tracking",
    calculators = ["//mediapipe/graphs/pose_tracking:pose_tracking_gpu_deps"],
)
```
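For reference, the MediaPipe AAR documentation builds such a target with a command along these lines (abbreviated; the package path is a placeholder for wherever this BUILD file lives):

```
bazel build -c opt --strip=ALWAYS \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
    --fat_apk_cpu=arm64-v8a,armeabi-v7a \
    //path/to/your/aar/package:mediapipe_pose_tracking.aar
```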
How do I use this method: `FrameProcessor.onNewFrame(final Bitmap bitmap, long timestamp)`?
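A hedged sketch of how that call is typically wired up, assuming a graph whose CPU input stream is named `input_video`; `onNewFrame(Bitmap, long)` appears to require an `ARGB_8888` bitmap and monotonically increasing timestamps:

```kotlin
// Sketch only: route Bitmaps into the graph's CPU input stream.
processor?.setVideoInputStreamCpu("input_video")

// Hypothetical frame; real code would convert a camera frame instead.
val bitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888)
processor?.onNewFrame(bitmap, System.currentTimeMillis())
```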
Could you expedite this?
If this plan doesn't work, I can consider another one.
Thank you very much.
Hi @gavel94 ,
I think we need to verify the image input that the client is providing. Could you check the following while feeding in images:
1) Verify that you are using the MediaPipe solution without changes.
2) Verify the input type required by processor.onNewFrame().
3) Verify the contents of "bitmap" in the call to processor.onNewFrame.
In the meantime, I have tried various approaches but couldn't achieve what I need.
For now, I prefer to use MediaPipe to solve the posture problem.
I don't understand how to verify the type and contents you mentioned.
The bitmap is converted by OpenCV and displayed on the screen as a normal preview.
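A minimal sanity check just before the call might look like this (sketch; `cacheBitmap` is the variable from the fragment above, and the `ARGB_8888` requirement is an assumption):

```kotlin
// Log the bitmap's type and size, and fail fast on an unexpected config.
Log.d("USB", "bitmap config=${cacheBitmap.config} size=${cacheBitmap.width}x${cacheBitmap.height}")
check(cacheBitmap.config == Bitmap.Config.ARGB_8888) {
    "onNewFrame(Bitmap, long) is assumed to require an ARGB_8888 bitmap"
}
processor?.onNewFrame(cacheBitmap, System.currentTimeMillis())
```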
I think the error is indicating that the "stream_name" parameter to nativeMovePacketToInputStream() is bad. This might mean that the instance variable "FrameProcessor.videoInputStream" is not properly initialized through FrameProcessor.addVideoStreams().
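If so, a sketch of initializing those stream names explicitly (the names are assumptions and must match the graph):

```kotlin
// Sketch: attach the video streams after construction, per the comment above.
processor?.addVideoStreams("input_video", "output_video")  // GPU-side streams
processor?.setVideoInputStreamCpu("input_video")           // CPU-side stream for Bitmaps
```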
I can't find any other parameter settings in the project demo.
The problem is that the GPU resources cannot be loaded correctly when using MediaPipe's Java API. The fix is to add the following permissions to the AndroidManifest.xml file:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-feature android:glEsVersion="0x00020000" android:required="true" />
```
In addition, the following dependencies need to be added to the build.gradle file:

```groovy
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.1.1'
implementation 'com.google.android.gms:play-services-mlkit:16.1.1'
implementation 'com.google.protobuf:protobuf-java:3.12.0'
implementation 'com.google.guava:guava:28.2-android'
implementation 'com.google.code.findbugs:jsr305:3.0.2'
implementation 'com.google.android.material:material:1.2.0-alpha05'
implementation 'com.google.android.gms:play-services-vision:20.1.3'
```
Finally, here is the complete Java code example:

```java
import android.Manifest;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.util.Log;
import android.widget.TextView;
import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.camera.core.Camera;
import androidx.camera.core.CameraSelector;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.core.ImageAnalysisConfig;
import androidx.camera.core.ImageProxy;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.mediapipe.components.CameraHelper;
import com.google.mediapipe.components.FrameProcessor;
import com.google.mediapipe.formats.proto.LandmarkProto;
import com.google.mediapipe.framework.AndroidAssetUtil;
import com.google.mediapipe.framework.Graph;
import com.google.mediapipe.framework.GraphService;
import com.google.mediapipe.framework.Packet;
import com.google.mediapipe.framework.PacketGetter;
import com.google.mediapipe.framework.TextureFrame;
import com.google.mediapipe.glutil.EglManager;
import com.google.protobuf.InvalidProtocolBufferException;
import java.nio.ByteBuffer;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
public class MainActivity extends AppCompatActivity {
private static final String TAG = "MainActivity";
private static final int REQUEST_CODE_PERMISSIONS = 10;
private static final String[] REQUIRED_PERMISSIONS = new String[]{Manifest.permission.CAMERA};
private ListenableFuture<ProcessCameraProvider> cameraProviderFuture;
private FrameProcessor frameProcessor;
private Executor executor = Executors.newSingleThreadExecutor();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if (allPermissionsGranted()) {
startCamera();
} else {
ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS);
}
// Initialize the FrameProcessor.
frameProcessor = new FrameProcessor(
this,
executor,
"face_detection_front.tflite",
"face_detection_front_labelmap.txt",
4,
4,
true);
frameProcessor.getVideoSurfaceOutput().setFlipY(true);
// Setup a callback for when new frames are available.
frameProcessor.setOnWillAddFrameListener((timestamp) -> {
Log.d(TAG, "onWillAddFrame: " + timestamp);
});
// Add a callback to render the face landmarks.
frameProcessor.addPacketCallback("face_landmarks_with_iris", (packet) -> {
ByteBuffer landmarksData = PacketGetter.getProto(packet, LandmarkProto.NormalizedLandmarkList.parser()).asReadOnlyByteBuffer();
try {
LandmarkProto.NormalizedLandmarkList landmarks = LandmarkProto.NormalizedLandmarkList.parseFrom(landmarksData);
Log.d(TAG, "face landmarks: " + landmarks);
} catch (InvalidProtocolBufferException e) {
Log.e(TAG, "Failed to get face landmarks from packet: " + e);
}
});
// Start the FrameProcessor.
frameProcessor.start();
}
private void startCamera() {
cameraProviderFuture = ProcessCameraProvider.getInstance(this);
cameraProviderFuture.addListener(() -> {
try {
ProcessCameraProvider cameraProvider = cameraProviderFuture.get();
bindPreview(cameraProvider);
} catch (ExecutionException | InterruptedException e) {
e.printStackTrace();
}
}, ContextCompat.getMainExecutor(this));
}
private void bindPreview(@NonNull ProcessCameraProvider cameraProvider) {
ImageAnalysisConfig config = new ImageAnalysisConfig.Builder()
.setTargetResolution(CameraHelper.computeIdealSize(640, 480))
.setLensFacing(CameraSelector.LENS_FACING_FRONT)
.setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
.build();
ImageAnalysis imageAnalysis = new ImageAnalysis(config);
imageAnalysis.setAnalyzer(executor, new ImageAnalysis.Analyzer() {
@Override
public void analyze(@NonNull ImageProxy image) {
// Convert the ImageProxy to a TextureFrame.
TextureFrame textureFrame = new TextureFrame(image.getWidth(), image.getHeight(), TextureFrame.TextureFormat.RGBA);
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
textureFrame.setBuffer(buffer);
// Process the frame with MediaPipe.
frameProcessor.process(textureFrame);
// Close the ImageProxy.
image.close();
}
});
CameraSelector cameraSelector = new CameraSelector.Builder()
.requireLensFacing(CameraSelector.LENS_FACING_FRONT)
.build();
Camera camera = cameraProvider.bindToLifecycle(this, cameraSelector, imageAnalysis);
}
private boolean allPermissionsGranted() {
for (String permission : REQUIRED_PERMISSIONS) {
if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
return false;
}
}
return true;
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode == REQUEST_CODE_PERMISSIONS) {
if (allPermissionsGranted()) {
startCamera();
} else {
finish();
}
}
}
@Override
protected void onDestroy() {
super.onDestroy();
frameProcessor.close();
}
}
```
Hello @gavel94, we are upgrading the MediaPipe Legacy Solutions to the new MediaPipe Solutions. However, the libraries, documentation, and source code for all the MediaPipe Legacy Solutions will continue to be available in our GitHub repository and through library distribution services such as Maven and NPM.
You can continue to use those legacy solutions in your applications if you choose. However, we would encourage you to check out the new MediaPipe Solutions, which can help you more easily build and customize ML solutions for your applications. These new solutions will provide a superset of the capabilities available in the legacy solutions. Thank you.
This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to lack of activity after being marked stale for the past 7 days.