FaceDetector `create_from_options` logs `GPU suport is not available`
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
No
OS Platform and Distribution
Ubuntu 22.1.0
MediaPipe Tasks SDK version
0.10.9
Task name (e.g. Image classification, Gesture recognition etc.)
Face Detector
Programming Language and version (e.g. C++, Python, Java)
Python
Describe the actual behavior
I get tons of "I0000 00:00:1706896791.906958 11 task_runner.cc:85] GPU suport is not available: INTERNAL: ; RET_CHECK failure (mediapipe/gpu/gl_context_egl.cc:77) display != EGL_NO_DISPLAYeglGetDisplay() returned error 0x300c"
Describe the expected behaviour
No logs at all
Standalone code/steps you may have used to try to get what you need
curl https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite > detector.tflite
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

base_options = python.BaseOptions(
    model_asset_path="./detector.tflite",
    delegate=python.BaseOptions.Delegate.CPU,
)
options = vision.FaceDetectorOptions(base_options=base_options)
# Logs seem to come from here
detector = vision.FaceDetector.create_from_options(options)
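A quick way to confirm that the detector created above still works despite the log messages is to run it on a local test image; this is only a sanity-check sketch, and the file name face.jpg is a placeholder:

import mediapipe as mp

# Run the CPU detector built above on a local test image and count detections.
image = mp.Image.create_from_file("face.jpg")
result = detector.detect(image)
print(f"Detections found: {len(result.detections)}")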
Other info / Complete Logs
E0000 00:00:1706896791.906821 11 gl_context.cc:408] INTERNAL: ; RET_CHECK failure (mediapipe/gpu/gl_context_egl.cc:303) successeglMakeCurrent() returned error 0x3008; (entering GL context)
I0000 00:00:1706896791.906958 11 task_runner.cc:85] GPU suport is not available: INTERNAL: ; RET_CHECK failure (mediapipe/gpu/gl_context_egl.cc:77) display != EGL_NO_DISPLAYeglGetDisplay() returned error 0x300c
Hi @ArturFortunato,
Are you using the Colab example from our documentation here? If so, could you try the example from the attached gist here? I tested it on both CPU (gist) and GPU (gist), and it worked without any errors. Please let us know if it works for you. If not, please share the steps or the full code so we can figure out the issue from our end.
Thank you!
Thank you for your quick answer @kuaashish. Yes, I've tried that and it doesn't work. My code is literally this:
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
base_options = python.BaseOptions(
    model_asset_path="./detector.tflite",
    delegate=python.BaseOptions.Delegate.CPU,
)
print("---Number 1", flush=True)
options = vision.FaceDetectorOptions(base_options=base_options)
print("---Number 2", flush=True)
detector = vision.FaceDetector.create_from_options(options)
print("---Number 3", flush=True)
And the execution yields this
Note that this is running on an AWS EC2 instance (not sure if that's relevant).
Is there anything obvious that I'm missing?
Hi @ArturFortunato,
Could you please confirm whether you intend to run the demo on the GPU or the CPU? Based on the `delegate=python.BaseOptions.Delegate.CPU` attribute value and the logs you've provided, it appears you intend to run it on the CPU. The GPU-related warnings you mentioned can be ignored, as the output indicates success.
Unfortunately, it is currently not possible to hide or remove these warnings from within MediaPipe. We have noted this as a feature request under https://github.com/google/mediapipe/issues/4991 and https://github.com/google/mediapipe/issues/4944, but we cannot guarantee a specific implementation date at this time.
Thank you!!
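In the meantime, a possible process-level workaround (not a MediaPipe API) is to temporarily redirect the process's stderr file descriptor while creating the detector, since these messages are emitted by native C++ code and bypass Python's logging module. A minimal sketch, assuming the logs go to stderr; the helper name suppress_native_stderr is illustrative:

import os
from contextlib import contextmanager

@contextmanager
def suppress_native_stderr():
    # Redirect fd 2 (stderr) to /dev/null so output written by native code
    # is silenced, then restore the original stderr afterwards.
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    saved_fd = os.dup(2)
    try:
        os.dup2(devnull_fd, 2)
        yield
    finally:
        os.dup2(saved_fd, 2)
        os.close(saved_fd)
        os.close(devnull_fd)

# Usage: wrap only the call that produces the native log output.
# with suppress_native_stderr():
#     detector = vision.FaceDetector.create_from_options(options)

Note that this also hides any genuine errors written to stderr during that window, so keep the redirected region as small as possible.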
This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to a lack of activity after being marked stale for the past 7 days.