[LLM][Android] SIGSEGV in libllm_inference_engine_jni.so (offset 0x8b0000) in (anonymous namespace)::start_llm_function
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
Android 13
MediaPipe Tasks SDK version
MEDIAPIPE_FULL_VERSION = "0.10.15"
Task name (e.g. Image classification, Gesture recognition etc.)
genai
Programming Language and version (e.g. C++, Python, Java)
java
Describe the actual behavior
Fatal native crash (SIGSEGV)
Describe the expected behaviour
The app should not crash.
Standalone code/steps you may have used to try to get what you need
1. Clone code
git clone https://github.com/google/mediapipe.git
2. init build env for android
cd mediapipe && bash setup_android_sdk_and_ndk.sh
3. Build native lib
bazel build -c opt --config=android_arm64 mediapipe/tasks/java/com/google/mediapipe/tasks/genai:libllm_inference_engine_jni.so
4. Set the model path in the example code (see the sketch after these steps)
/data/local/tmp/llm/gemma-2b-it-cpu-int4.bin
5. Include libllm_inference_engine_jni.so as part of APK
6. Run the APK on Android device
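For step 4, the change is a one-line path edit in the sample code. A minimal sketch, assuming the sample keeps the model path in a constant in InferenceModel.kt (the constant name here is illustrative):

// InferenceModel.kt: point the path at the model pushed to the device in step 4.
// MODEL_PATH is an illustrative name; check the sample for the actual one.
private const val MODEL_PATH = "/data/local/tmp/llm/gemma-2b-it-cpu-int4.bin"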
Other info / Complete Logs
08-10 10:42:28.834 F/DEBUG ( 9523): Cmdline: com.google.mediapipe.examples.llminference
08-10 10:42:28.835 F/DEBUG ( 9523): pid: 9471, tid: 9520, name: DefaultDispatch >>> com.google.mediapipe.examples.llminference <<<
08-10 10:42:28.836 F/DEBUG ( 9523): uid: 10174
08-10 10:42:28.836 F/DEBUG ( 9523): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0000000000000008
08-10 10:42:28.837 F/DEBUG ( 9523): Cause: null pointer dereference
08-10 10:42:28.842 F/DEBUG ( 9523): rax 0000000000000008 rbx 000072d36c653a00 rcx 000072d58921836a rdx 0000000000000000
08-10 10:42:28.843 F/DEBUG ( 9523): r8 000072d589180cb1 r9 0000000000000000 r10 0000000000000008 r11 0000000000000246
08-10 10:42:28.843 F/DEBUG ( 9523): r12 0000000000000000 r13 0000000000000000 r14 000072d279bfbc80 r15 000072d58922cd60
08-10 10:42:28.843 F/DEBUG ( 9523): rdi 000072d36c653a00 rsi 000072d279bfbd68
08-10 10:42:28.844 F/DEBUG ( 9523): rbp 00000000000024ff rsp 000072d279bfbc50 rip 000072d279758ca6
08-10 10:42:28.844 F/DEBUG ( 9523): backtrace:
08-10 10:42:28.844 F/DEBUG ( 9523): #00 pc 000000000034aca6 /data/app/~~sLCcGAJFQd07fiwjxr8PFA==/com.google.mediapipe.examples.llminference-wL3LgTIXOuvjg-j2GAeRLw==/base.apk!libllm_inference_engine_jni.so (offset 0x8b0000) ((anonymous namespace)::start_llm_function(void*)+33) (BuildId: 3b3542f9246276b016b6f98453a02f08)
08-10 10:42:28.845 F/DEBUG ( 9523): #01 pc 00000000000ccd9a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58) (BuildId: f090904cc3ac285a6f190f8003c3eb0e)
08-10 10:42:28.845 F/DEBUG ( 9523): #02 pc 0000000000060d47 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55) (BuildId: f090904cc3ac285a6f190f8003c3eb0e)
08-10 10:42:28.914 E/tombstoned( 213): Tombstone written to: tombstone_08
08-10 10:42:28.926 I/DropBoxManagerService( 598): add tag=data_app_native_crash isTagEnabled=true flags=0x2
08-10 10:42:28.930 W/ActivityTaskManager( 598): Force finishing activity com.google.mediapipe.examples.llminference/.MainActivit
Hi @charles-cloud,
Could you please let us know if you're testing on a physical device or an emulator?
Thank you!!
Hi @kuaashish, I have tried on both an emulator and a physical device, but I get the same error with the MediaPipe sample code.
Hi @charles-cloud,
Apologies for the request, but could you please confirm whether you are making any customizations to the codebase or following the exact steps from our documentation? If possible, could you point us to the specific details?
Thank you!!
Hi @kuaashish
I just tried the sample code mentioned at the following link and set the model file path in the InferenceModel.kt file: https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/android
git clone https://github.com/google-ai-edge/mediapipe-samples
cd mediapipe-samples
git sparse-checkout init --cone
git sparse-checkout set examples/llm_inference/android
Please share how the Java side sends the input text (the prompt) to JNI. I have used the generateResponse() API, but I am not able to find the input text in the corresponding native LlmInferenceEngine_Session_PredictAsync() API:
void LlmInferenceEngine_Session_PredictAsync(
    LlmInferenceEngine_Session* session, void* callback_context,
    void (*callback)(void* callback_context,
                     LlmResponseContext* response_context)) {
void* start_llm_function(void* args) {
  struct LlmInferenceEngineCpu_Session* cpu_session =
      (struct LlmInferenceEngineCpu_Session*)args;

  std::vector<int> prompt_ids;

  auto status = cpu_session->engine->tokenizer->Encode(cpu_session->prompt,
                                                       &prompt_ids);
  if (!status.ok()) {
    ABSL_LOG(FATAL) << "Failed to encode input: " << status;
  }
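For context, the prompt originates on the Java side as the argument to generateResponse()/generateResponseAsync(); JNI stores it on the session before PredictAsync spawns the worker thread that runs start_llm_function. A minimal Kotlin sketch, assuming the LlmInference API as documented at the link above (option values and the listener body are illustrative):

import android.content.Context
import android.util.Log
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun runInference(context: Context) {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-2b-it-cpu-int4.bin")
        .setMaxTokens(512)
        // Streams partial results produced by the native predict loop.
        .setResultListener { partialResult, done ->
            Log.d("LlmDemo", "partial=$partialResult done=$done")
        }
        .build()

    val llmInference = LlmInference.createFromOptions(context, options)
    // The prompt passed here is what ends up as cpu_session->prompt in
    // start_llm_function above.
    llmInference.generateResponseAsync("Hello, Gemma!")
}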
Hi @charles-cloud,
We have a newer version available, 0.10.15. Could you please try it and let us know if you are still experiencing the same behavior?
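If the app consumes the prebuilt Maven artifact rather than a locally built libllm_inference_engine_jni.so, the upgrade is a one-line dependency bump. A sketch in Gradle Kotlin DSL, assuming the tasks-genai coordinates from the MediaPipe setup docs:

// app/build.gradle.kts: pull the 0.10.15 GenAI task library.
dependencies {
    implementation("com.google.mediapipe:tasks-genai:0.10.15")
}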
Thank you!!
Hi @kuaashish
Thank you. Yes, I have confirmed that the JNI API inconsistency is resolved in 0.10.15. However, I am now blocked by https://github.com/google-ai-edge/mediapipe/issues/5600. Could you please help with that issue?
Hi @charles-cloud,
Thank you for the confirmation. We are closing this issue and marking it as resolved internally. We will also review issue #5600 and provide you with an update.