
Build error for versions 0.2.81 and 0.2.80

charliboy opened this issue · 11 comments


When I install via pip install llama-cpp-python, the build fails with an error. This occurs on versions 0.2.81 and 0.2.80; version 0.2.79 installs successfully.

Python 3.11.9, Ubuntu 22.04

Failure Information (for bugs)

[26/26] : && /usr/bin/g++ -pthread -B /home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli -Wl,-rpath,/tmp/tmp2secpzim/build/vendor/llama.cpp/src:/tmp/tmp2secpzim/build/vendor/llama.cpp/ggml/src: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/src/libllama.so vendor/llama.cpp/ggml/src/libggml.so && :
FAILED: vendor/llama.cpp/examples/llava/llama-llava-cli
: && /usr/bin/g++ -pthread -B /home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli -Wl,-rpath,/tmp/tmp2secpzim/build/vendor/llama.cpp/src:/tmp/tmp2secpzim/build/vendor/llama.cpp/ggml/src: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/src/libllama.so vendor/llama.cpp/ggml/src/libggml.so && :
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/libggml.so, not found (try using -rpath or -rpath-link)
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_barrier@GOMP_1.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_parallel@GOMP_4.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_thread_num@OMP_1.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_single_start@GOMP_1.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

  *** CMake build failed

charliboy avatar Jul 05 '24 00:07 charliboy
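
The key line in that log is the ld warning: libgomp.so.1, needed by libggml.so, is not found. llama.cpp began using OpenMP by default around these releases, and the ld shipped in conda's compiler_compat directory deliberately restricts its library search paths, so it never sees the system's libgomp. A quick way to check whether libgomp exists on the machine at all (search locations vary by distro and are illustrative here):

ldconfig -p | grep libgomp                         # list libgomp entries in the linker cache
find /usr/lib* -name 'libgomp.so*' 2>/dev/null     # or search common library directories

If libgomp shows up here but the build still fails, the conda compiler_compat linker is the likely culprit rather than a missing system package.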

Disabling llava module fixes this on my end:

CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python

I am using Ubuntu through WSL2 on Windows 11.

Rybens92 avatar Jul 05 '24 15:07 Rybens92

Disabling llava module fixes this on my end:

CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python

I am using Ubuntu through WSL2 on Windows 11.

This command did not work for me on Arch Linux; it produced the same error as above:

set CMAKE_ARGS="-DLLAMA_CUDA=on -DLLAVA_BUILD=off" && set FORCE_CMAKE=1 && pip install llama-cpp-python --no-cache-dir

justme1135 avatar Jul 08 '24 07:07 justme1135
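
Note that set VAR=value is Windows cmd.exe syntax; in bash or zsh on Arch it does not export environment variables, so CMAKE_ARGS and FORCE_CMAKE never reach the build. A sketch of the POSIX-shell equivalent, assuming the same flags were intended (and using -DGGML_CUDA rather than -DLLAMA_CUDA, per the renamed flag mentioned below):

# assign the variables inline so pip's build subprocess inherits them
CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir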

Thanks @Rybens92 - that worked for me when I combined it with my existing settings, like so, on Arch Linux under a conda env with Python 3.11:

CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" pip install llama-cpp-python --upgrade --force-reinstall

I get that this particular issue will need a change somewhere to resolve it, but independently I think the README could do with an update to point people away from LLAMA_CUBLAS and toward GGML_CUDA.

nmstoker avatar Jul 08 '24 13:07 nmstoker

Solved for me:

CMAKE_ARGS="-DCMAKE_CXX_FLAGS=-fopenmp" pip install llama-cpp-python

LEON-REIN avatar Aug 05 '24 03:08 LEON-REIN
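
This works because -fopenmp makes g++ link against libgomp explicitly, which resolves the undefined GOMP_* symbols while keeping llava enabled. For those who would rather not override compiler flags, two untested alternatives that target the same root cause (the libgomp package name and the GGML_OPENMP switch are assumptions; verify them against conda-forge and the vendored llama.cpp's CMake options):

# provide libgomp inside the conda env so compiler_compat's ld can find it
conda install -c conda-forge libgomp
# or build without OpenMP entirely
CMAKE_ARGS="-DGGML_OPENMP=OFF" pip install --no-cache-dir llama-cpp-python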

Disabling llava module fixes this on my end:

CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python

I am using Ubuntu through WSL2 on Windows 11.

Thanks, it worked for me.

charliboy avatar Aug 09 '24 10:08 charliboy

Disabling llava module fixes this on my end:

CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python

I am using Ubuntu through WSL2 on Windows 11.

Worked for me too

javi22020 avatar Aug 22 '24 22:08 javi22020

Disabling llava module fixes this on my end:

CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python

I am using Ubuntu through WSL2 on Windows 11.

Is there any other solution? I need to use llava.

yimuu avatar Sep 22 '24 11:09 yimuu
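
If llava is required, the -fopenmp workaround posted above by LEON-REIN leaves LLAVA_BUILD at its default (on). A sketch combining it with CUDA, assuming the flag names used earlier in this thread:

# keep llava, link libgomp explicitly, and enable CUDA
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CXX_FLAGS=-fopenmp" pip install --no-cache-dir llama-cpp-python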

Super useful, this worked for me on Linux Mint Cinnamon, thanks:

CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" pip install llama-cpp-python --upgrade --force-reinstall

ianozsvald avatar Oct 24 '24 12:10 ianozsvald

Solved for me:

CMAKE_ARGS="-DCMAKE_CXX_FLAGS=-fopenmp" pip install llama-cpp-python

Solved it for me!

victorolaiya avatar Oct 30 '24 16:10 victorolaiya

What does disabling LLAVA_BUILD do? Do we lose any functionality?

kpm avatar Mar 11 '25 23:03 kpm

What does disabling LLAVA_BUILD do? Do we lose any functionality?

Probably LLaVA compatibility; those are VLMs (vision-language models), so image-input support would be lost.

javi22020 avatar Mar 12 '25 06:03 javi22020