How to set xnnpack num_threads in mediapipe python?
In MediaPipe C++, I can set the XNNPACK `num_threads` in the .pbtxt graph config. For example:
```
# Runs model inference on CPU.
node {
  calculator: "InferenceCalculator"
  input_side_packet: "MODEL:model"
  input_stream: "TENSORS:input_tensors"
  output_stream: "TENSORS:output_tensors"
  options: {
    [mediapipe.InferenceCalculatorOptions.ext] {
      delegate { xnnpack { num_threads: 32 } }
      #delegate { xnnpack {} }
    }
  }
}
```
But how do I set it in MediaPipe Python?
Hi @sureshdagooglecom, can you give me some advice? Thanks!
Hi @kuaashish, @sureshdagooglecom, can you give me some advice?
I'm interested in this as well. It doesn't seem possible to modify this with the current API?
Hello @mch0dmin, you can pass it in the calculator params of the InferenceCalculator using Python; however, it is not possible to simply pass the value. You need to make several changes:
- Depending on the solution you are using, modify the `__init__()` of that solution to accept a new parameter for the XNNPACK `num_threads` value. For example, if using FaceMesh you need to add an argument here (see the sketch after this list).
- Change the `calculator_params` argument to pass on the new XNNPACK `num_threads` value. It should be something like this (taking the FaceMesh example):
```python
calculator_params={
    'facedetectionshortrangecpu__facedetectionshortrange__facedetection__TensorsToDetectionsCalculator.min_score_thresh': min_detection_confidence,
    'facelandmarkcpu__ThresholdingCalculator.threshold': min_tracking_confidence,
    'facedetectionshortrangecpu__facedetectionshortrange__facedetection__InferenceCalculator.xnnpack': {'num_threads': num_threads}
},
```
- Add `option_value: xnnpack:options/xnnpack` to the `InferenceCalculator` in the face_detection.pbtxt file.
- Rebuild the Python package by following the instructions given here.
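Putting the first two steps together, here is a minimal sketch of what the modified `__init__()` in mediapipe/python/solutions/face_mesh.py could look like. The calculator param keys and the `{'num_threads': ...}` value format are taken from the snippet above; the `num_threads` argument and its default are hypothetical additions, the rest of the real signature and params are abridged, and the new entry only takes effect once the pbtxt change is made and the package is rebuilt:

```python
# Sketch of edits to mediapipe/python/solutions/face_mesh.py (abridged;
# other arguments, side inputs, and calculator params of the real file
# are omitted). Assumes option_value: xnnpack:options/xnnpack was added
# to face_detection.pbtxt and the wheel was rebuilt, as described above.
from mediapipe.python.solution_base import SolutionBase


class FaceMesh(SolutionBase):

  def __init__(self,
               max_num_faces=1,
               min_detection_confidence=0.5,
               min_tracking_confidence=0.5,
               num_threads=4):  # new argument; the default of 4 is a hypothetical choice
    super().__init__(
        # Existing constant in face_mesh.py (its name may differ by version).
        binary_graph_path=BINARYPB_FILE_PATH,
        side_inputs={'num_faces': max_num_faces},
        calculator_params={
            'facedetectionshortrangecpu__facedetectionshortrange__facedetection__TensorsToDetectionsCalculator.min_score_thresh':
                min_detection_confidence,
            'facelandmarkcpu__ThresholdingCalculator.threshold':
                min_tracking_confidence,
            # New entry: routes num_threads to the XNNPACK delegate of the
            # face-detection InferenceCalculator.
            'facedetectionshortrangecpu__facedetectionshortrange__facedetection__InferenceCalculator.xnnpack':
                {'num_threads': num_threads},
        },
        outputs=['multi_face_landmarks'])
```

After rebuilding, the thread count can then be chosen per instance, e.g. `mp.solutions.face_mesh.FaceMesh(num_threads=8)`.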
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.