ai-hub-models
[ERROR] "Unable to load backend. dlerror(): libcdsprpc.so: cannot open shared object file: No such file or directory"
I have already run the Qwen2-7B model on my mobile phone using genie-t2t-run, and now I want to simulate running it on my computer, but I encountered an error.
./genie-t2t-run -c qwen2_7b_instruct.json -p "tell me about qualcomm"
This command reported the following error:
Using libGenie.so version 1.4.0
[ERROR] "Unable to load backend extensions interface config."
[ERROR] "Unable to initialize backend extensions."
[WARN] "Failure in initializing backend extensions."
[INFO] "Using create From Binary"
[ERROR] "Unable to load backend. dlerror(): libcdsprpc.so: cannot open shared object file: No such file or directory"
[ERROR] "Failed to initialize buffer manager"
[ERROR] "qnn-htp: failure to initialize IOTensor"
[ERROR] "Create From Binary FAILED!"
Failure to initialize model
Failed to create the dialog.
How can I fix this?
Hi @chenjun2hao, we'll need more info to help you debug this issue further. It seems like an OS issue. Can you kindly provide the following information: what device are you trying to run the model on now, and what environment variables are you setting? Thanks!
Hi @mestrona-3, I have a similar issue when running in a Docker container (Ubuntu 22.04):
FROM ubuntu:22.04
# Arguments for versions or paths
ARG QAIRT_SDK_HOST_PATH=.
ARG PYTHON_VERSION=3.10
ARG ONNXRUNTIME_VERSION_TAG=v1.18.0
ENV QAIRT_SDK_PATH=/opt/qairt
ENV DEBIAN_FRONTEND=noninteractive
# Install basic utilities, Python, and build tools
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    git \
    wget \
    unzip \
    vim \
    curl \
    build-essential \
    cmake \
    ca-certificates \
    python${PYTHON_VERSION} \
    python3-distutils \
    libpython${PYTHON_VERSION} \
    python3-pip \
    clang-14 \
    python${PYTHON_VERSION}-venv && \
    rm -rf /var/lib/apt/lists/*
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1
# Install common Python ML packages
RUN python3 -m pip install --no-cache-dir \
    numpy \
    onnx==1.16.1 \
    onnxruntime==1.18.0 \
    onnxsim==0.4.36 \
    pycocotools \
    transformers \
    tokenizers \
    sacrebleu \
    scikit-learn \
    OpenNMT-py \
    sentencepiece
# Download and extract QNN SDK from Qualcomm Software Center
RUN wget -O /tmp/qnn_sdk.zip "https://softwarecenter.qualcomm.com/api/download/software/sdks/Qualcomm_AI_Runtime_Community/All/2.34.0.250424/v2.34.0.250424.zip" && \
    unzip /tmp/qnn_sdk.zip -d /opt && \
    rm /tmp/qnn_sdk.zip
# Automatically detect QNN SDK version folder
RUN export QNN_SDK_VERSION=$(ls -1 /opt/qairt | head -n 1) && \
    echo "QNN_SDK_VERSION=$QNN_SDK_VERSION" && \
    echo "export QNN_SDK_VERSION=$QNN_SDK_VERSION" >> /root/.bashrc && \
    echo "export QNN_SDK_ROOT=/opt/qairt/$QNN_SDK_VERSION" >> /root/.bashrc
# Set environment variables for the rest of the build
ENV QNN_SDK_VERSION="2.34.0.250424"
ENV QNN_SDK_ROOT="/opt/qairt/2.34.0.250424"
# Set up QAIRT SDK environment
RUN bash -c "source ${QNN_SDK_ROOT}/bin/envsetup.sh && \
    echo 'export QNN_SDK_ROOT=${QNN_SDK_ROOT}' >> /root/.bashrc && \
    \${QNN_SDK_ROOT}/bin/check-linux-dependency.sh --yes && \
    \${QNN_SDK_ROOT}/bin/envcheck -c"
# Create and activate Python virtual environment, then install pip
ENV QNN_VENV_PATH=/opt/qnn_venv
RUN python3 -m venv ${QNN_VENV_PATH} --without-pip && \
    . ${QNN_VENV_PATH}/bin/activate && \
    python3 -m ensurepip --upgrade && \
    which pip3
# Update all Python dependencies inside the venv
RUN . ${QNN_VENV_PATH}/bin/activate && \
    python3 "${QNN_SDK_ROOT}/bin/check-python-dependency"
## HTP and DSP Setup
# # Install Qualcomm Package Manager (QPM) CLI
# # Seems to be interactive login, so we need to handle that.
# COPY QualcommPackageManager3.3.0.121.7.Linux-x86.deb /tmp/
# RUN dpkg -i /tmp/QualcommPackageManager3.3.0.121.7.Linux-x86.deb && \
# rm /tmp/QualcommPackageManager3.3.0.121.7.Linux-x86.deb
# RUN qpm-cli --version && echo "QPM CLI installed successfully" || (echo "QPM CLI installation failed" && exit 1)
# RUN qpm-cli --login ${QPM_USERNAME} ${QPM_PASSWORD}
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
WORKDIR /workspace
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/bin/bash"]
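For reference, the SDK-version auto-detect step in the Dockerfile can be sanity-checked outside Docker. One subtlety: an `export` inside a `RUN` layer does not persist into later layers, which is why the Dockerfile also hardcodes `ENV QNN_SDK_VERSION` afterwards. A minimal sketch of the same detection logic, using a throwaway directory in place of /opt/qairt:

```shell
#!/bin/sh
# Simulate /opt/qairt with a throwaway directory containing one version folder.
QAIRT_ROOT="$(mktemp -d)"
mkdir -p "${QAIRT_ROOT}/2.34.0.250424"

# Same detection logic as the Dockerfile's RUN step: take the first
# (and normally only) directory under the SDK root as the version.
QNN_SDK_VERSION="$(ls -1 "${QAIRT_ROOT}" | head -n 1)"
QNN_SDK_ROOT="${QAIRT_ROOT}/${QNN_SDK_VERSION}"

echo "QNN_SDK_VERSION=${QNN_SDK_VERSION}"
echo "QNN_SDK_ROOT=${QNN_SDK_ROOT}"

rm -rf "${QAIRT_ROOT}"
```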
We only provide QNN Context Binaries for Qwen 2 7B for 8 Elite (mobile). We don't provide the bin file compatible with Snapdragon X Elite. Given that, it's expected that these don't run on your computer.
We are working towards making all the LLMs available as recipes, like the Llama models. This is a huge undertaking and will take us some time. Sorry for the delayed response here.
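As I understand it, libcdsprpc.so is the FastRPC user-space library that the QNN HTP backend dlopen()s to talk to the Hexagon DSP; it ships with the Qualcomm driver stack on Snapdragon devices, so on an ordinary x86 host or container it simply does not exist, which matches the dlerror() in the log above. A rough check for its presence (the search paths are illustrative, not exhaustive):

```shell
#!/bin/sh
# Look for libcdsprpc.so in the dynamic linker cache and a few common
# on-device locations. Absence means the HTP backend cannot reach a
# Hexagon DSP on this machine.
found=""
if command -v ldconfig >/dev/null 2>&1; then
    ldconfig -p 2>/dev/null | grep -q libcdsprpc && found=yes
fi
for d in /usr/lib /usr/lib64 /vendor/lib64 /system/lib64; do
    [ -e "${d}/libcdsprpc.so" ] && found=yes
done

if [ -n "${found}" ]; then
    echo "libcdsprpc.so found: a Hexagon/FastRPC driver is present"
else
    echo "libcdsprpc.so not found: no Hexagon/FastRPC driver on this host"
fi
```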