llama.cpp
Misc. bug: rpc-server does not use the OpenCL backend on Android
Name and Version
ggml_opencl: using kernels optimized for Adreno (GGML_OPENCL_USE_ADRENO_KERNELS) version: 4727 (c2ea16f2) built with Android (11349228, +pgo, +bolt, +lto, -mlgo, based on r487747e) clang version 17.0.2 (https://android.googlesource.com/toolchain/llvm-project d9f89f4d16663d5012e5c09495f3b30ece3d2362) for x86_64-unknown-linux-gnu
Operating systems
Other? (Please let us know in description)
Which llama.cpp modules do you know to be affected?
Other (Please specify in the next section)
Command line
```shell
./rpc-server --port 8880
Starting RPC server on 127.0.0.1:8880, backend memory: 15160 MB
```
Problem description & steps to reproduce
With the Vulkan backend, rpc-server works normally on Android, but when built with OpenCL it reports that it is using the CPU backend.
llama-cli does use the GPU when built with OpenCL.
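The "create_backend: using CPU backend" line in the log output below suggests the fallback happens in rpc-server's compile-time backend selection, which apparently has no OpenCL branch. The following is only a sketch of that pattern under this assumption, not the actual rpc-server source; the preprocessor guards, headers, and init calls are illustrative:

```cpp
// Sketch only (assumed guards/headers, not the actual rpc-server code):
// a compile-time backend chain with no OpenCL branch falls through to the
// CPU backend and prints the "create_backend: using CPU backend" line.
#include <cstdio>

#include "ggml-backend.h"
#include "ggml-cpu.h"
#ifdef GGML_USE_VULKAN
#include "ggml-vulkan.h"
#endif

static ggml_backend_t create_backend_sketch() {
    ggml_backend_t backend = nullptr;
#ifdef GGML_USE_VULKAN
    fprintf(stderr, "%s: using Vulkan backend\n", __func__);
    backend = ggml_backend_vk_init(0); // GPU device 0
#endif
    // No OpenCL branch here: an OpenCL-only build would skip straight
    // to the CPU fallback below, matching the reported behavior.
    if (backend == nullptr) {
        fprintf(stderr, "%s: using CPU backend\n", __func__);
        backend = ggml_backend_cpu_init();
    }
    return backend;
}
```

If this is indeed the cause, the fix would be to add an OpenCL case to the same selection chain rather than to change anything in the OpenCL backend itself.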
First Bad Commit
No response
Relevant log output
```shell
create_backend: using CPU backend
```