
android holistic.aar refine_face_landmarks side packet has no effect

Open sunmin89 opened this issue 2 years ago • 11 comments

Please make sure that this is a solution issue.

System information (Please provide as much relevant information as possible)

  • Have I written custom code (No):
  • OS Platform and Distribution (e.g., Ubuntu18.04 Win10 WSL):
  • MediaPipe version: the latest code, downloaded April 15, 2022
  • Bazel version: 5.0.0
  • Solution (e.g. Holistic):
  • Programming Language and version ( e.g. Java):

Describe the expected behavior: I want to get the 478 face landmarks through the .aar within an Android project.

Here is what I did:

1. Download the latest code and unzip it in the Windows WSL environment.

2. Make a directory and create a build file under `mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_holistic`:

```
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")

mediapipe_aar(
    name = "aar_holistic",
    calculators = ["//mediapipe/graphs/holistic_tracking:holistic_tracking_gpu_deps"],
)

cc_library(
    name = "mediapipe_jni_lib",
    srcs = [":libmediapipe_jni.so"],
    alwayslink = 1,
)
```

3. Modify `mediapipe/modules/holistic_landmark/holistic_landmark_gpu.pbtxt` (output of `git diff mediapipe/modules/holistic_landmark/holistic_landmark_gpu.pbtxt`):

```diff
diff --git a/mediapipe/modules/holistic_landmark/holistic_landmark_gpu.pbtxt b/mediapipe/modules/holistic_landmark/holistic_landmark_gpu.pbtxt
index 33ed880..8845695 100644
--- a/mediapipe/modules/holistic_landmark/holistic_landmark_gpu.pbtxt
+++ b/mediapipe/modules/holistic_landmark/holistic_landmark_gpu.pbtxt
@@ -99,6 +99,18 @@ output_stream: "SEGMENTATION_MASK:segmentation_mask"
 output_stream: "POSE_ROI:pose_landmarks_roi"
 output_stream: "POSE_DETECTION:pose_detection"
 
+node {
+  calculator: "ConstantSidePacketCalculator"
+  output_side_packet: "PACKET:0:enable_segmentation"
+  output_side_packet: "PACKET:1:refine_face_landmarks"
+  node_options: {
+    [type.googleapis.com/mediapipe.ConstantSidePacketCalculatorOptions]: {
+      packet { bool_value: true }
+      packet { bool_value: true }
+    }
+  }
+}
+
 # Predicts pose landmarks.
 node {
   calculator: "PoseLandmarkGpu"
@@ -107,6 +119,7 @@ node {
   input_side_packet: "SMOOTH_LANDMARKS:smooth_landmarks"
   input_side_packet: "ENABLE_SEGMENTATION:enable_segmentation"
   input_side_packet: "SMOOTH_SEGMENTATION:smooth_segmentation"
+  input_side_packet: "REFINE_FACE_LANDMARKS:refine_face_landmarks"
   input_side_packet: "USE_PREV_LANDMARKS:use_prev_landmarks"
   output_stream: "LANDMARKS:pose_landmarks"
   output_stream: "WORLD_LANDMARKS:pose_world_landmarks"
```

4. Compile the AAR:

```
bazel build --local_cpu_resources=HOST_CPUS-2 -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_holistic:aar_holistic
```

Compile the binary graph:

```
bazel build -c opt mediapipe/graphs/holistic_tracking:holistic_tracking_gpu
```

5. Copy the aar_holistic.aar and holistic_tracking_gpu.binarypb to my Android project.

6. Add a side packet to the processor:

```java
Map<String, Packet> inputSidePackets = new HashMap<>();
inputSidePackets.put("refine_face_landmarks", packetCreator.createBool(true));
processor.setInputSidePackets(inputSidePackets);
```

7. Add a packet callback for the face_landmarks stream:

```java
byte[] landmarksRaw = PacketGetter.getProtoBytes(packet);
LandmarkProto.LandmarkList landmarkTmp = LandmarkProto.LandmarkList.parseFrom(landmarksRaw);
Log.i(TAG, "onCreate: face landmark len " + landmarkTmp.getLandmarkCount());
```

I expect to get 478 landmarks, including the iris landmarks, but it returns only 468.
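For reference, the count difference comes entirely from iris refinement: the attention-based face mesh appends 10 iris landmarks (5 per eye) to the standard 468-point mesh, so the refined output has indices 0..477. A quick sanity check of that arithmetic (my illustration, not from the original report):

```python
# Landmark counts for the MediaPipe face mesh.
# 468 is the standard mesh size; refine_face_landmarks adds 5 iris
# landmarks per eye (the per-eye split is stated here for illustration).
BASE_FACE_MESH = 468
IRIS_PER_EYE = 5

refined_total = BASE_FACE_MESH + 2 * IRIS_PER_EYE
iris_indices = list(range(BASE_FACE_MESH, refined_total))

print(refined_total)   # 478
print(iris_indices)    # [468, 469, ..., 477]
```

Getting exactly 468 from the callback therefore suggests the refinement path (the attention face-landmark model) is not actually being exercised, rather than landmarks being dropped.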

Standalone code you may have used to try to get what you need :

If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/repo link /any notebook:

Other info / Complete Logs : Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:

sunmin89 avatar Apr 27 '22 09:04 sunmin89

Hi @summer-dev, As per this documentation, the holistic model can determine only 468 landmarks.

sureshdagooglecom avatar Apr 28 '22 11:04 sureshdagooglecom

Thanks for your reply.

However, the info below implies the iris feature has been added to the holistic solution.

https://github.com/google/mediapipe/issues/1546 https://github.com/google/mediapipe/issues/1444#issuecomment-877975005

sunmin89 avatar Apr 29 '22 01:04 sunmin89

Besides, the holistic Python solution already returns 478 face landmarks with iris. Why is this feature unavailable in the Android (Java) solution?

```python
with mp_holistic.Holistic(
    static_image_mode=True,
    model_complexity=2,
    enable_segmentation=True,
    refine_face_landmarks=True) as holistic:
```

sunmin89 avatar Apr 29 '22 01:04 sunmin89

Hi @summer-dev, We consider this a feature request and will share it with the internal team.

sureshdagooglecom avatar May 04 '22 11:05 sureshdagooglecom

@sureshdagooglecom The content below is from my graph, mediapipe/graphs/holistic_tracking/holistic_tracking_gpu.pbtxt:

```
# Tracks and renders pose + hands + face landmarks.

# GPU buffer. (GpuBuffer)
input_stream: "input_video"

# GPU image with rendered results. (GpuBuffer)
output_stream: "output_video"

input_side_packet: "refine_face_landmarks"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessarily computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
  node_options: {
    [type.googleapis.com/mediapipe.FlowLimiterCalculatorOptions] {
      max_in_flight: 1
      max_in_queue: 1
      # Timeout is disabled (set to 0) as first frame processing can take more
      # than 1 second.
      in_flight_timeout: 0
    }
  }
}

node {
  calculator: "HolisticLandmarkGpu"
  input_stream: "IMAGE:throttled_input_video"
  input_side_packet: "REFINE_FACE_LANDMARKS:refine_face_landmarks"
  output_stream: "POSE_LANDMARKS:pose_landmarks"
  output_stream: "WORLD_LANDMARKS:pose_world_landmarks"
  output_stream: "POSE_ROI:pose_roi"
  output_stream: "POSE_DETECTION:pose_detection"
  output_stream: "FACE_LANDMARKS:face_landmarks"
  output_stream: "LEFT_HAND_LANDMARKS:left_hand_landmarks"
  output_stream: "RIGHT_HAND_LANDMARKS:right_hand_landmarks"
}

# Gets image size.
node {
  calculator: "ImagePropertiesCalculator"
  input_stream: "IMAGE_GPU:throttled_input_video"
  output_stream: "SIZE:image_size"
}

# Converts pose, hands and face landmarks to a render data vector.
node {
  calculator: "HolisticTrackingToRenderData"
  input_stream: "IMAGE_SIZE:image_size"
  input_stream: "POSE_LANDMARKS:pose_landmarks"
  input_stream: "POSE_ROI:pose_roi"
  input_stream: "LEFT_HAND_LANDMARKS:left_hand_landmarks"
  input_stream: "RIGHT_HAND_LANDMARKS:right_hand_landmarks"
  input_stream: "FACE_LANDMARKS:face_landmarks"
  output_stream: "RENDER_DATA_VECTOR:render_data_vector"
}

# Draws annotations and overlays them on top of the input images.
node {
  calculator: "AnnotationOverlayCalculator"
  input_stream: "IMAGE_GPU:throttled_input_video"
  input_stream: "VECTOR:render_data_vector"
  output_stream: "IMAGE_GPU:output_video"
}
```

Here is my build file:

```
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")

mediapipe_aar(
    name = "aar_holistic",
    calculators = ["//mediapipe/graphs/holistic_tracking:holistic_tracking_gpu_deps"],
)

cc_library(
    name = "mediapipe_jni_lib",
    srcs = [":libmediapipe_jni.so"],
    alwayslink = 1,
)
```

When I add the refine_face_landmarks side packet to the processor:

```java
Map<String, Packet> inputSidePackets = new HashMap<>();
inputSidePackets.put("refine_face_landmarks", packetCreator.createBool(true));
processor.setInputSidePackets(inputSidePackets);
```

the MediaPipe framework crashes and complains:

```
2022-03-20 22:38:54.406 1734-2158/cn.nubia.redmagickyi E/FrameProcessor: Mediapipe error: 
    com.google.mediapipe.framework.MediaPipeException: unknown: Graph has errors: 
    Calculator::Open() for node "holisticlandmarkgpu__handlandmarksleftandrightgpu__handlandmarksfromposegpu_2__handlandmarkgpu__handlandmarkmodelloader__LocalFileContentsCalculator" failed: Failed to read file
        at com.google.mediapipe.framework.Graph.nativeMovePacketToInputStream(Native Method)
        at com.google.mediapipe.framework.Graph.addConsumablePacketToInputStream(Graph.java:395)
        at com.google.mediapipe.components.FrameProcessor.onNewFrame(FrameProcessor.java:458)
        at com.google.mediapipe.components.ExternalTextureConverter$RenderThread.renderNext(ExternalTextureConverter.java:425)
        at com.google.mediapipe.components.ExternalTextureConverter$RenderThread.lambda$onFrameAvailable$0$ExternalTextureConverter$RenderThread(ExternalTextureConverter.java:360)
        at com.google.mediapipe.components.-$$Lambda$ExternalTextureConverter$RenderThread$Y1vV_XyLsWZ0ebOvq-iwjQ0H3Sw.run(Unknown Source:4)
        at android.os.Handler.handleCallback(Handler.java:938)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loopOnce(Looper.java:238)
        at android.os.Looper.loop(Looper.java:379)
        at com.google.mediapipe.glutil.GlThread.run(GlThread.java:141)
```

sunmin89 avatar May 09 '22 06:05 sunmin89

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] avatar May 16 '22 06:05 google-ml-butler[bot]

Has it been solved?

390057892 avatar May 24 '22 03:05 390057892

Closing as stale. Please reopen if you'd like to work on this further.

google-ml-butler[bot] avatar May 31 '22 03:05 google-ml-butler[bot]


Closing as stale. Please reopen if you'd like to work on this further.

google-ml-butler[bot] avatar Jun 07 '22 04:06 google-ml-butler[bot]


Hello @sunmin89, We are upgrading the MediaPipe Legacy Solutions to the new MediaPipe Solutions. However, the libraries, documentation, and source code for all the MediaPipe Legacy Solutions will continue to be available in our GitHub repository and through library distribution services, such as Maven and NPM.

You can continue to use those legacy solutions in your applications if you choose. However, we would request that you check out the new MediaPipe Solutions, which can help you more easily build and customize ML solutions for your applications. These new solutions will provide a superset of the capabilities available in the legacy solutions. Thank you.

kuaashish avatar Apr 26 '23 11:04 kuaashish

This issue has been marked stale because it has had no recent activity for the past 7 days. It will be closed if no further activity occurs. Thank you.

github-actions[bot] avatar May 04 '23 01:05 github-actions[bot]

This issue was closed due to lack of activity after being marked stale for past 7 days.

github-actions[bot] avatar May 11 '23 01:05 github-actions[bot]
