
After adding pod 'onnxruntime-objc', initializing the GestureRecognizer model fails; removing onnxruntime-objc makes it work again

Open fanxiangyang opened this issue 5 months ago • 2 comments

OS Platform and Distribution

iOS 18.4

Compiler version

Xcode 16.3

Programming Language and version

Swift 6.0

Installed using virtualenv? pip? Conda? (if Python)

No response

MediaPipe version

0.10.21

Bazel version

No response

XCode and Tulsi versions (if iOS)

16.3

Android SDK and NDK versions (if android)

No response

Android AAR (if android)

None

OpenCV version (if running on desktop)

No response

Describe the problem

After importing pod 'onnxruntime-objc', initializing GestureRecognizer model reports an error.
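For context, the dependency combination that triggers the failure can be sketched as a minimal Podfile. This is an illustration, not the reporter's actual Podfile: the target name and platform are hypothetical, the MediaPipe version comes from the fields above, and the onnxruntime-objc constraint comes from the follow-up comment below.

# Hypothetical minimal Podfile illustrating the setup described in this report.
platform :ios, '15.0'    # assumed deployment target

target 'GestureDemo' do  # hypothetical target name
  use_frameworks!

  pod 'MediaPipeTasksVision', '0.10.21'
  # Adding this pod is what triggers the GestureRecognizer init failure;
  # removing it makes initialization succeed again.
  pod 'onnxruntime-objc', '~> 1.22.0'
end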

Complete Logs

private func setupHandLandmarker() {
    // 1. Locate the gesture recognizer model in the app bundle
    guard let modelPath = Bundle.main.path(
        forResource: "gesture_recognizer",
        ofType: "task"
    ) else {
        print("Could not find the gesture recognizer model file")
        return
    }

    // 2. Create the configuration
    let options = GestureRecognizerOptions()
    options.baseOptions.modelAssetPath = modelPath
    options.runningMode = .liveStream          // live-stream (camera) mode
    options.numHands = 1                       // detect at most one hand
    options.minHandDetectionConfidence = 0.3   // hand detection confidence threshold
    options.minHandPresenceConfidence = 0.3    // hand presence confidence threshold
    options.minTrackingConfidence = 0.3        // tracking confidence threshold
    // Key setting: enable GPU acceleration
    options.baseOptions.delegate = .GPU
    // 3. Set the live-stream result callback
    options.gestureRecognizerLiveStreamDelegate = self

    do {
        // 4. Create the recognizer instance
        gestureRecognizer = try GestureRecognizer(options: options)
    } catch {
        print("Failed to initialize the gesture recognizer: \(error)")
    }
}
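For completeness, the live-stream delegate assigned above is normally paired with a delegate conformance, with camera frames submitted via recognizeAsync. The sketch below is not part of the original report: the HandGestureManager type name is hypothetical, and the delegate method and recognizeAsync signatures follow the MediaPipeTasksVision Swift API and should be verified against 0.10.21.

import AVFoundation
import MediaPipeTasksVision

// Hypothetical owning type; in the report, `self` is whatever class holds `gestureRecognizer`.
extension HandGestureManager: GestureRecognizerLiveStreamDelegate {
    // Called by MediaPipe with the result for each frame submitted in live-stream mode.
    func gestureRecognizer(
        _ gestureRecognizer: GestureRecognizer,
        didFinishRecognition result: GestureRecognizerResult?,
        timestampInMilliseconds: Int,
        error: Error?
    ) {
        if let error = error {
            print("Recognition error: \(error)")
            return
        }
        // Log the top gesture of the first detected hand, if any.
        if let gesture = result?.gestures.first?.first {
            print("Gesture: \(gesture.categoryName ?? "unknown"), score: \(gesture.score)")
        }
    }

    // Camera frames are fed to the recognizer asynchronously; results arrive via the delegate above.
    func process(sampleBuffer: CMSampleBuffer, timestampMs: Int) {
        do {
            let image = try MPImage(sampleBuffer: sampleBuffer)
            try gestureRecognizer?.recognizeAsync(image: image, timestampInMilliseconds: timestampMs)
        } catch {
            print("Failed to submit frame to the recognizer: \(error)")
        }
    }
}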

WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1749466168.541682 1354818 gesture_recognizer_graph.cc:129] Hand Gesture Recognizer contains CPU only ops. Sets HandGestureRecognizerGraph acceleration to Xnnpack.
I0000 00:00:1749466168.562842 1354818 hand_gesture_recognizer_graph.cc:250] Custom gesture classifier is not defined.
I0000 00:00:1749466168.791911 1354818 gl_context.cc:369] GL version: 3.0 (OpenGL ES 3.0 Metal - 101), renderer: Apple A12 GPU
Initialized TensorFlow Lite runtime.
INFO: Initialized TensorFlow Lite runtime.
Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
failed to create XNNPACK subgraph
ERROR: failed to create XNNPACK subgraph
Node number 272 (TfLiteXNNPackDelegate) failed to prepare.
ERROR: Node number 272 (TfLiteXNNPackDelegate) failed to prepare.
E0000 00:00:1749466168.819437 1354818 calculator_graph.cc:928] INTERNAL: CalculatorGraph::Run() failed: 
Calculator::Open() for node "mediapipe_tasks_vision_gesture_recognizer_gesturerecognizergraph__mediapipe_tasks_vision_hand_landmarker_handlandmarkergraph__mediapipe_tasks_vision_hand_detector_handdetectorgraph__mediapipe_tasks_core_inferencesubgraph__inferencecalculator__mediapipe_tasks_vision_gesture_recognizer_gesturerecognizergraph__mediapipe_tasks_vision_hand_landmarker_handlandmarkergraph__mediapipe_tasks_vision_hand_detector_handdetectorgraph__mediapipe_tasks_core_inferencesubgraph__InferenceCalculator" failed: ; RET_CHECK failure (mediapipe/calculators/tensor/inference_interpreter_delegate_runner.cc:298) (interpreter->AllocateTensors())==(kTfLiteOk)
init error: Error Domain=com.google.mediapipe.tasks Code=13 "INTERNAL: CalculatorGraph::Run() failed: 
Calculator::Open() for node "mediapipe_tasks_vision_gesture_recognizer_gesturerecognizergraph__mediapipe_tasks_vision_hand_landmarker_handlandmarkergraph__mediapipe_tasks_vision_hand_detector_handdetectorgraph__mediapipe_tasks_core_inferencesubgraph__inferencecalculator__mediapipe_tasks_vision_gesture_recognizer_gesturerecognizergraph__mediapipe_tasks_vision_hand_landmarker_handlandmarkergraph__mediapipe_tasks_vision_hand_detector_handdetectorgraph__mediapipe_tasks_core_inferencesubgraph__InferenceCalculator" failed: ; RET_CHECK failure (mediapipe/calculators/tensor/inference_interpreter_delegate_runner.cc:298) (interpreter->AllocateTensors())==(kTfLiteOk)" UserInfo={NSLocalizedDescription=INTERNAL: CalculatorGraph::Run() failed: 
Calculator::Open() for node "mediapipe_tasks_vision_gesture_recognizer_gesturerecognizergraph__mediapipe_tasks_vision_hand_landmarker_handlandmarkergraph__mediapipe_tasks_vision_hand_detector_handdetectorgraph__mediapipe_tasks_core_inferencesubgraph__inferencecalculator__mediapipe_tasks_vision_gesture_recognizer_gesturerecognizergraph__mediapipe_tasks_vision_hand_landmarker_handlandmarkergraph__mediapipe_tasks_vision_hand_detector_handdetectorgraph__mediapipe_tasks_core_inferencesubgraph__InferenceCalculator" failed: ; RET_CHECK failure (mediapipe/calculators/tensor/inference_interpreter_delegate_runner.cc:298) (interpreter->AllocateTensors())==(kTfLiteOk)}

fanxiangyang, Jun 09 '25 11:06

Hi @fanxiangyang,

Could you please let us know the complete steps you are following, or point us to the documentation you used? This information will help us understand the issue and reproduce it if required.

Thank you!!

kuaashish, Jun 10 '25 07:06

Here is my complete little demo. As soon as pod 'onnxruntime-objc', '~> 1.22.0' is added, initializing the MediaPipeTasksVision gesture model fails.

RubyARKitDemo.zip

fanxiangyang, Jun 12 '25 09:06