error with `free(): double free detected in tcache 2` on `mediapipe.tasks.python.vision.FaceLandmarker.create_from_options()`
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
No
OS Platform and Distribution
Arch Linux
MediaPipe Tasks SDK version
0.10.13
Task name (e.g. Image classification, Gesture recognition etc.)
FaceLandmarker / FaceMesh (the legacy solution is also affected)
Programming Language and version (e.g. C++, Python, Java)
Python 3.11
Describe the actual behavior
While running the example code from the Google Colab notebook, the call to `mediapipe.tasks.python.vision.FaceLandmarker.create_from_options()` aborts with `free(): double free detected in tcache 2`.
Describe the expected behaviour
`mediapipe.tasks.python.vision.FaceLandmarker.create_from_options()` creates a `FaceLandmarker` object.
Standalone code/steps you may have used to try to get what you need
NOTE:
- This error also occurs with the legacy `mediapipe.solutions.face_mesh.FaceMesh()` solution (see the sketch right after this list).
- When I run the same code in Google Colab, it completes successfully.
- When I run it on a system using the nouveau driver, it also completes successfully.
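For reference, here is a minimal sketch of a legacy-solution call that hits the same crash on this machine (the argument values are illustrative defaults, not copied from my original script):

```python
import mediapipe as mp

# Constructing the legacy FaceMesh solution is enough to trigger the crash
# on this machine; the arguments below are illustrative defaults.
with mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5) as face_mesh:
  pass  # never reached; the process aborts with the double-free error
```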
Steps:
- Create a venv and install mediapipe.
- Copy the code from the Google Colab notebook (together with the model and image): https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/face_landmarker/python/%5BMediaPipe_Python_Tasks%5D_Face_Landmarker.ipynb
- Run it; on `mediapipe.tasks.python.vision.FaceLandmarker.create_from_options()` the process aborts with (a trimmed-down repro sketch follows the error output):
free(): double free detected in tcache 2
[1] 11412 IOT instruction (core dumped) /home/vlad/bin/clipsai-project/.venv/bin/python
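To isolate the crash from the drawing code, the following trimmed-down sketch of the copied script (assuming `face_landmarker_v2_with_blendshapes.task` is in the working directory) is enough to reproduce it on my machine:

```python
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

base_options = python.BaseOptions(
    model_asset_path='face_landmarker_v2_with_blendshapes.task')
options = vision.FaceLandmarkerOptions(base_options=base_options,
                                       output_face_blendshapes=True,
                                       output_facial_transformation_matrixes=True,
                                       num_faces=1)
# The process aborts here with `free(): double free detected in tcache 2`,
# before detect() is ever called.
detector = vision.FaceLandmarker.create_from_options(options)
```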
Copied code:
from mediapipe import solutions
from mediapipe.framework.formats import landmark_pb2
import numpy as np
import matplotlib.pyplot as plt
# STEP 1: Import the necessary modules.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
def draw_landmarks_on_image(rgb_image, detection_result):
  face_landmarks_list = detection_result.face_landmarks
  annotated_image = np.copy(rgb_image)

  # Loop through the detected faces to visualize.
  for idx in range(len(face_landmarks_list)):
    face_landmarks = face_landmarks_list[idx]

    # Draw the face landmarks.
    face_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
    face_landmarks_proto.landmark.extend([
      landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z) for landmark in face_landmarks
    ])

    solutions.drawing_utils.draw_landmarks(
        image=annotated_image,
        landmark_list=face_landmarks_proto,
        connections=mp.solutions.face_mesh.FACEMESH_TESSELATION,
        landmark_drawing_spec=None,
        connection_drawing_spec=mp.solutions.drawing_styles
        .get_default_face_mesh_tesselation_style())
    solutions.drawing_utils.draw_landmarks(
        image=annotated_image,
        landmark_list=face_landmarks_proto,
        connections=mp.solutions.face_mesh.FACEMESH_CONTOURS,
        landmark_drawing_spec=None,
        connection_drawing_spec=mp.solutions.drawing_styles
        .get_default_face_mesh_contours_style())
    solutions.drawing_utils.draw_landmarks(
        image=annotated_image,
        landmark_list=face_landmarks_proto,
        connections=mp.solutions.face_mesh.FACEMESH_IRISES,
        landmark_drawing_spec=None,
        connection_drawing_spec=mp.solutions.drawing_styles
        .get_default_face_mesh_iris_connections_style())

  return annotated_image
def plot_face_blendshapes_bar_graph(face_blendshapes):
  # Extract the face blendshapes category names and scores.
  face_blendshapes_names = [face_blendshapes_category.category_name for face_blendshapes_category in face_blendshapes]
  face_blendshapes_scores = [face_blendshapes_category.score for face_blendshapes_category in face_blendshapes]
  # The blendshapes are ordered in decreasing score value.
  face_blendshapes_ranks = range(len(face_blendshapes_names))

  fig, ax = plt.subplots(figsize=(12, 12))
  bar = ax.barh(face_blendshapes_ranks, face_blendshapes_scores, label=[str(x) for x in face_blendshapes_ranks])
  ax.set_yticks(face_blendshapes_ranks, face_blendshapes_names)
  ax.invert_yaxis()

  # Label each bar with values
  for score, patch in zip(face_blendshapes_scores, bar.patches):
    plt.text(patch.get_x() + patch.get_width(), patch.get_y(), f"{score:.4f}", va="top")

  ax.set_xlabel('Score')
  ax.set_title("Face Blendshapes")
  plt.tight_layout()
  plt.show()
# STEP 2: Create an FaceLandmarker object.
base_options = python.BaseOptions(model_asset_path='face_landmarker_v2_with_blendshapes.task')
options = vision.FaceLandmarkerOptions(base_options=base_options,
                                       output_face_blendshapes=True,
                                       output_facial_transformation_matrixes=True,
                                       num_faces=1)
detector = vision.FaceLandmarker.create_from_options(options)
# STEP 3: Load the input image.
image = mp.Image.create_from_file("image.png")
# STEP 4: Detect face landmarks from the input image.
detection_result = detector.detect(image)
# STEP 5: Process the detection result. In this case, visualize it.
annotated_image = draw_landmarks_on_image(image.numpy_view(), detection_result)
#cv2_imshow(cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
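Since `cv2_imshow` is Colab-only, when running locally I would replace the last commented-out line with something like the sketch below (plain OpenCV; `annotated_output.png` is just an illustrative filename), though the process never gets this far:

```python
import cv2

# Write the annotated image to disk instead of using Colab's cv2_imshow;
# "annotated_output.png" is an illustrative output path.
cv2.imwrite("annotated_output.png", cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
```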
Other info / Complete Logs
Output log:
2024-05-05 12:11:54.948127: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-05 12:11:55.896677: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
free(): double free detected in tcache 2
[1] 11412 IOT instruction (core dumped) /home/vlad/bin/clipsai-project/.venv/bin/python
CPU: 6-core AMD Ryzen 5 3600 (-MT MCP-)
inxi -G:
Graphics:
Device-1: NVIDIA TU106 [GeForce RTX 2070] driver: nvidia v: 550.78
Display: wayland server: X.org v: 1.21.1.13 with: Xwayland v: 21.1.99
compositor: kwin_wayland driver: X: loaded: nvidia unloaded: modesetting
gpu: nvidia resolution: 1920x1080
API: EGL v: 1.5 drivers: nvidia platforms: gbm,wayland
API: OpenGL v: 4.6.0 vendor: nvidia v: 550.78 renderer: NVIDIA GeForce
RTX 2070/PCIe/SSE2
API: Vulkan v: 1.3.279 drivers: nvidia surfaces: xcb,xlib,wayland
python -m pip list:
Package Version
---------------------------- -----------
absl-py 2.1.0
astunparse 1.6.3
attrs 23.2.0
certifi 2024.2.2
cffi 1.16.0
charset-normalizer 3.3.2
contourpy 1.2.1
cycler 0.12.1
flatbuffers 24.3.25
fonttools 4.51.0
gast 0.5.4
google-pasta 0.2.0
grpcio 1.63.0
h5py 3.11.0
idna 3.7
jax 0.4.26
jaxlib 0.4.26
keras 3.3.3
kiwisolver 1.4.5
libclang 18.1.1
Markdown 3.6
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.8.4
mdurl 0.1.2
mediapipe 0.10.13
ml-dtypes 0.3.2
namex 0.0.8
numpy 1.26.4
nvidia-cublas-cu12 12.3.4.1
nvidia-cuda-cupti-cu12 12.3.101
nvidia-cuda-nvcc-cu12 12.3.107
nvidia-cuda-nvrtc-cu12 12.3.107
nvidia-cuda-runtime-cu12 12.3.101
nvidia-cudnn-cu12 8.9.7.29
nvidia-cufft-cu12 11.0.12.1
nvidia-curand-cu12 10.3.4.107
nvidia-cusolver-cu12 11.5.4.101
nvidia-cusparse-cu12 12.2.0.103
nvidia-nccl-cu12 2.19.3
nvidia-nvjitlink-cu12 12.3.101
opencv-contrib-python 4.9.0.80
opt-einsum 3.3.0
optree 0.11.0
packaging 24.0
pillow 10.3.0
pip 24.0
protobuf 4.25.3
pycparser 2.22
Pygments 2.18.0
pyparsing 3.1.2
python-dateutil 2.9.0.post0
requests 2.31.0
rich 13.7.1
scipy 1.13.0
setuptools 65.5.0
six 1.16.0
sounddevice 0.4.6
tensorboard 2.16.2
tensorboard-data-server 0.7.2
tensorflow 2.16.1
tensorflow-io-gcs-filesystem 0.37.0
tensorrt 10.0.1
tensorrt-cu12 10.0.1
termcolor 2.4.0
typing_extensions 4.11.0
urllib3 2.2.1
Werkzeug 3.0.2
wheel 0.43.0
wrapt 1.16.0