llama.cpp
Compile bug: ./llama-server: symbol lookup error: ./llama-server: undefined symbol: llama_vocab_eos
Git commit
73e2ed3ce3492d3ed70193dd09ae8aa44779651d
Operating systems
Linux
GGML backends
CUDA
Problem description & steps to reproduce
I am trying to host a model using llama-server. The llama.cpp package built successfully, but I get the following error when running llama-server:
./build/bin/llama-server: symbol lookup error: ./build/bin/llama-server: undefined symbol: llama_vocab_eos
The exact same error occurs when running the model via llama-cli: ./build/bin/llama-cli: symbol lookup error: ./build/bin/llama-cli: undefined symbol: llama_vocab_eos
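This kind of undefined-symbol failure at startup usually means the dynamic linker is resolving an older libllama.so (for example one installed system-wide by an earlier build) that predates llama_vocab_eos. A quick way to check which library the binary actually loads, and whether that library exports the symbol (the library path below is only an example):

# show which libllama.so the binary resolves at run time;
# a stale copy in /usr/local/lib or similar can shadow the freshly built one
ldd ./build/bin/llama-server | grep libllama
# then check whether the reported library exports the missing symbol
nm -D /usr/local/lib/libllama.so | grep llama_vocab_eos   # example path; use the one ldd printed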
First Bad Commit
No response
Compile command
./build/bin/llama-server \
-m /path/to/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
--threads 6 \
--n-gpu-layers 1000 \
--ctx-size 8192 \
--host 0.0.0.0 \
--port 8080 \
-fa
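Note that the command above is the server launch command rather than the compile command; the actual build invocation was not included. For reference only, a typical CUDA build (an assumption, not the reporter's exact command) looks like:

# assumed build flags; not the reporter's exact invocation
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j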
Relevant log output
[ 79%] Building CXX object examples/main/CMakeFiles/llama-cli.dir/main.cpp.o
[ 80%] Linking CXX executable ../../bin/llama-cli
[ 80%] Built target llama-cli
[ 84%] Building CXX object examples/server/CMakeFiles/llama-server.dir/server.cpp.o
[ 85%] Linking CXX executable ../../bin/llama-server
[ 85%] Built target llama-server
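If the freshly built libllama.so is being shadowed by an older installed copy, one possible workaround (a sketch, assuming the shared libraries were written to build/bin) is to put the build output first on the loader path, or to rebuild with shared libraries disabled so the binaries do not depend on an external libllama.so:

# prefer the just-built shared libraries over any older system-wide copy
LD_LIBRARY_PATH=$PWD/build/bin ./build/bin/llama-server -m /path/to/model.gguf
# or rebuild statically (assumed flags) so no external libllama.so is loaded
cmake -B build -DGGML_CUDA=ON -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release -j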