Failed to load LlmInference
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
Android
MediaPipe Tasks SDK version
0.10.24
Task name (e.g. Image classification, Gesture recognition etc.)
llm inference
Programming Language and version (e.g. C++, Python, Java)
Kotlin (MediaPipe Java/Kotlin SDK)
Describe the actual behavior
When I try to load LlmInference via LlmInference.createFromOptions(context, options), an exception occurs with the message below. The same model works in the ai-edge-gallery app.
Describe the expected behaviour
The model should load successfully.
Standalone code/steps you may have used to try to get what you need
val options = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/sdcard/llm/gemma3-1b-it-int4.task")
    .setMaxTokens(64)
    .setPreferredBackend(LlmInference.Backend.CPU)
    .build()

// Create an instance of the LLM Inference task and session.
try {
    val llmInference = LlmInference.createFromOptions(context, options)
    val session = LlmInferenceSession.createFromOptions(
        llmInference,
        LlmInferenceSession.LlmInferenceSessionOptions.builder()
            .setTopK(65)
            .setTopP(0.95f)
            .setTemperature(0.45f)
            .build()
    )
    instance = LlmModelInstance(engine = llmInference, session = session)
} catch (e: Exception) {
    onDone(e.message.toString())
    return
}
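For reference, the log below shows the native open() call on /sdcard/llm/gemma3-1b-it-int4.task returning -1, so the failure happens before the model is parsed. A minimal pre-check along these lines (a sketch using standard java.io.File; the isModelReadable helper name is illustrative, not part of my app) can confirm whether the app process can read the file at that path at all:

import java.io.File

// Hypothetical diagnostic: returns true only if the file exists and is
// readable from this app's process. If this returns false for the same path
// passed to setModelPath(), the problem is file access (e.g. storage
// permissions / scoped storage) rather than the .task file itself.
fun isModelReadable(path: String): Boolean {
    val file = File(path)
    return file.exists() && file.canRead()
}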
Other info / Complete Logs
Failed to initialize engine: %sINTERNAL: RET_CHECK failure (third_party/odml/litert_lm/runtime/util/scoped_file_posix.cc:27) fd >= 0 (-1 vs. 0) open() failed: /sdcard/llm/gemma3-1b-it-int4.task
=== Source Location Trace: ===
third_party/odml/litert_lm/runtime/util/scoped_file_posix.cc:27
third_party/odml/infra/genai/inference/utils/llm_utils/config_utils.cc:772
third_party/odml/infra/genai/inference/utils/llm_utils/config_utils.cc:472
third_party/odml/infra/genai/inference/llm_engine.cc:2132