QNN backend fails on second model load (DMA-BUF preregistration issue)
🐛 Describe the bug
Hi, I am facing an issue when using ExecuTorch with the QNN backend on Android. The model loads and runs correctly the first time, but after I unload (reset) the model and load it again within the same app session, the QNN backend fails with `PreRegisterMem failed to get file descriptor` / `Fail to initialize Qnn Manager`:
```
PreRegisterMem failed to get file descriptor.
Fail to pre register custom memory handle
Fail to initialize Qnn Manager
```
The error is raised by this check in the QNN runtime (backends/qualcomm/runtime/QnnManager.cpp):

```cpp
#if defined(__aarch64__)
  ET_CHECK_OR_RETURN_ERROR(
      PreRegisterMem() == Error::Ok,
      Internal,
      "Fail to pre register custom memory handle");
#endif
```
This happens specifically when the QNN backend tries to pre-register external memory (DMA-BUF / AHardwareBuffer) during the second load.
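For context, here is a minimal sketch of the load → reset → reload sequence that triggers the failure. It is written against the ExecuTorch `Module` C++ API rather than the JNI wrapper the app actually uses, and the model path is a placeholder, so treat it as illustrative only:

```cpp
#include <memory>

#include <executorch/extension/module/module.h>

using executorch::extension::Module;
using executorch::runtime::Error;

int main() {
  // First load: the QNN-delegated .pte loads and runs fine.
  auto module = std::make_unique<Module>("/data/local/tmp/llama_qnn.pte");
  if (module->load() != Error::Ok) {
    return 1;
  }
  // ... run inference as usual ...

  // "Unload" the model by destroying the Module (and with it the QNN delegate).
  module.reset();

  // Second load in the same process: QnnManager initialization fails with
  // "PreRegisterMem failed to get file descriptor".
  module = std::make_unique<Module>("/data/local/tmp/llama_qnn.pte");
  if (module->load() != Error::Ok) {
    return 1;  // <-- this is where the reload ends up
  }
  return 0;
}
```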
Versions
- Device / SoC: Qualcomm Snapdragon SM8550 (CDSP)
- ExecuTorch version: 1.0 / Nightly (please update if different)
- Backend: QNN (HTP / DSP)
- OS / Platform: Android (via JNI)
- Model: LLaMA .pte (Hybrid KV cache mode enabled)
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin
@haowhsu-quic @shewu-quic @winskuo-quic @DannyYuyang-quic can we have someone take a look at this?
Are there any more logs or QNN error codes?
Thank you for your effort. This error is actually due to some legacy code, which we no longer use. I believe we can remove it. https://github.com/pytorch/executorch/blob/b1e3e28bb611e06d484138be27221faffd89f565/backends/qualcomm/runtime/QnnManager.cpp#L359
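For anyone hitting this before a fix lands: the change being discussed amounts to deleting that legacy pre-registration block from QnnManager.cpp. A sketch of the removal, based only on the snippet quoted in the report (the actual patch may look different):

```cpp
// backends/qualcomm/runtime/QnnManager.cpp (sketch, not the actual patch)
// The legacy DMA-BUF pre-registration on aarch64 is no longer used and is what
// produces "Fail to pre register custom memory handle" on a second load in the
// same process, so the proposal is simply to drop this block:
//
// #if defined(__aarch64__)
//   ET_CHECK_OR_RETURN_ERROR(
//       PreRegisterMem() == Error::Ok,
//       Internal,
//       "Fail to pre register custom memory handle");
// #endif
```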
Thanks for your reply. Do you have any plans to update or improve this part in the future?
Hi @nambn007, Sure, I'm working on it. I'll let you know if there are any updates.