Build error for versions 0.2.81 and 0.2.80
# Prerequisites
Installing via pip install llama-cpp-python fails with a build error on versions 0.2.81 and 0.2.80; version 0.2.79 installs successfully.
Python 3.11.9, Ubuntu 22.04
# Failure Information (for bugs)

[26/26] : && /usr/bin/g++ -pthread -B /home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli -Wl,-rpath,/tmp/tmp2secpzim/build/vendor/llama.cpp/src:/tmp/tmp2secpzim/build/vendor/llama.cpp/ggml/src: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/src/libllama.so vendor/llama.cpp/ggml/src/libggml.so && :
FAILED: vendor/llama.cpp/examples/llava/llama-llava-cli
: && /usr/bin/g++ -pthread -B /home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli -Wl,-rpath,/tmp/tmp2secpzim/build/vendor/llama.cpp/src:/tmp/tmp2secpzim/build/vendor/llama.cpp/ggml/src: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/src/libllama.so vendor/llama.cpp/ggml/src/libggml.so && :
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/libggml.so, not found (try using -rpath or -rpath-link)
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_barrier@GOMP_1.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_parallel@GOMP_4.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_thread_num@OMP_1.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_single_start@GOMP_1.0'
/home/szh/vs/gpt/text-generation-webui/installer_files/conda/envs/inference/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
*** CMake build failed
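For anyone hitting this: the link step fails because libggml.so is built with OpenMP but the conda compiler_compat linker can't find libgomp. A quick way to check whether libgomp is visible at all, as a sketch (the libgomp1 package name assumes Debian/Ubuntu, and the conda path is only where it often lives, so treat both as assumptions):

```bash
# Is libgomp registered with the dynamic linker?
ldconfig -p | grep libgomp

# Does the active conda env ship its own copy? (path is an assumption)
find "$CONDA_PREFIX" -name 'libgomp.so*' 2>/dev/null

# On Debian/Ubuntu the runtime is packaged as libgomp1 (name may differ on other distros)
sudo apt-get install -y libgomp1
```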
Disabling llava module fixes this on my end:
CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python
I am using Ubuntu through WSL2 on Windows 11.
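A minimal smoke test after installing with llava disabled, nothing llava-specific, just confirming the wheel imports:

```bash
python -c "import llama_cpp; print(llama_cpp.__version__)"
```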
> Disabling llava module fixes this on my end:
> CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python
> I am using Ubuntu through WSL2 on Windows 11.
This command did not work for me on Arch Linux and produced the same error as above
set CMAKE_ARGS="-DLLAMA_CUDA=on -DLLAVA_BUILD=off" && set FORCE_CMAKE=1 && pip install llama-cpp-python --no-cache-dir
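Note that the set VAR=value && ... form is Windows cmd syntax; on a Linux shell the variables are usually passed inline. A rough bash equivalent (using GGML_CUDA, since that is the flag name in the llama.cpp revision vendored by 0.2.80+, as the next reply also notes):

```bash
# bash equivalent of the Windows-style command above; CUDA flag spelled GGML_CUDA on recent versions
CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" FORCE_CMAKE=1 \
  pip install llama-cpp-python --no-cache-dir
```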
Thanks @Rybens92 - that worked for me when I combined it with my existing settings, like so, on Arch Linux under a conda env with Python 3.11:
CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" pip install llama-cpp-python --upgrade --force-reinstall
I get that this particular issue will need a change somewhere to resolve it, but independently I think the README could do with an update to point people away from LLAMA_CUBLAS and toward GGML_CUDA.
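For reference, the rename chain as I understand it from the upstream llama.cpp build options (hedged): LLAMA_CUBLAS was replaced by LLAMA_CUDA, which in turn became GGML_CUDA in the llama.cpp revision vendored by 0.2.80+. A README example would then look like:

```bash
# old flag, no longer recognized by the vendored llama.cpp:
# CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# current flag:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
```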
Solved for me:
CMAKE_ARGS="-DCMAKE_CXX_FLAGS=-fopenmp" pip install llama-cpp-python
> Disabling llava module fixes this on my end:
> CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python
> I am using Ubuntu through WSL2 on Windows 11.
Thanks, it worked for me.
> Disabling llava module fixes this on my end:
> CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python
> I am using Ubuntu through WSL2 on Windows 11.
Worked for me too
> Disabling llava module fixes this on my end:
> CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install -U llama-cpp-python
> I am using Ubuntu through WSL2 on Windows 11.
Is there any other solution? I'm going to be using llava, so disabling it isn't an option for me.
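If you need llava, the workarounds that keep it enabled are the OpenMP-side fixes rather than -DLLAVA_BUILD=OFF. A sketch, assuming a Debian/Ubuntu-style system where the OpenMP runtime is packaged as libgomp1:

```bash
# make sure the OpenMP runtime is present, then rebuild with -fopenmp so the linker pulls it in;
# llava stays enabled because LLAVA_BUILD is left at its default
sudo apt-get install -y libgomp1
CMAKE_ARGS="-DCMAKE_CXX_FLAGS=-fopenmp" pip install --force-reinstall --no-cache-dir llama-cpp-python
```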
Super useful, thanks. This worked for me on Linux Mint Cinnamon: CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" pip install llama-cpp-python --upgrade --force-reinstall
> Solved for me:
> CMAKE_ARGS="-DCMAKE_CXX_FLAGS=-fopenmp" pip install llama-cpp-python
Solved it for me!
What does disabling LLAVA_BUILD do? Do we lose any functionality?
> What does disabling LLAVA_BUILD do? Do we lose any functionality?
Probably LLaVA compatibility; LLaVA models are VLMs (vision-language models), so you'd lose the ability to run those multimodal models.
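For context: LLAVA_BUILD controls whether the llava support library gets built, which is what the multimodal (image + text) chat handlers rely on. A rough way to check whether an installed wheel has it, assuming llama_cpp.llava_cpp only imports cleanly when that library was built:

```bash
python -c "import llama_cpp.llava_cpp" \
  && echo "llava support present" \
  || echo "llava support missing (built with -DLLAVA_BUILD=OFF?)"
```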