
pip install llama-cpp-python on anaconda

Open werruww opened this issue 1 year ago • 5 comments

(base) C:\Users\m>pip install llama-cpp-python
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.2.85.tar.gz (49.3 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in c:\users\m\appdata\local\anaconda3\lib\site-packages (from llama-cpp-python) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in c:\users\m\appdata\local\anaconda3\lib\site-packages (from llama-cpp-python) (1.24.3)
Requirement already satisfied: diskcache>=5.6.1 in c:\users\m\appdata\local\anaconda3\lib\site-packages (from llama-cpp-python) (5.6.3)
Requirement already satisfied: jinja2>=2.11.3 in c:\users\m\appdata\local\anaconda3\lib\site-packages (from llama-cpp-python) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\m\appdata\local\anaconda3\lib\site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.1)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... | ########

It hangs at this step and never completes.
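When the wheel build hangs like this, one workaround sketch is to skip the source build and install a prebuilt CPU wheel instead, or at least rerun with `--verbose` to see which build step stalls. The extra-index URL below is taken from the llama-cpp-python README and may change between releases, so treat it as an assumption:

```shell
# Show the full build output to see where it stalls
pip install llama-cpp-python --verbose

# Or try the project's prebuilt CPU wheels (URL from the README; may change)
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```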

werruww avatar Aug 05 '24 11:08 werruww

conda install conda-forge::llama-cpp-python does not work either.

werruww avatar Aug 05 '24 11:08 werruww

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
gensim 4.3.0 requires FuzzyTM>=0.4.0, which is not installed.
tables 3.8.0 requires blosc2~=2.0.0, which is not installed.
tables 3.8.0 requires cython>=0.29.21, which is not installed.
numba 0.57.1 requires numpy<1.25,>=1.21, but you have numpy 2.0.1 which is incompatible.
scipy 1.11.1 requires numpy<1.28.0,>=1.21.6, but you have numpy 2.0.1 which is incompatible.
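These conflicts arise because the install pulled in numpy 2.0.1 while numba 0.57.1 and scipy 1.11.1 pin numpy below 2. One sketch of a way out is to roll numpy back into the intersection of those pins and retry; the bounds below are copied from the error messages above, not tested against this exact environment:

```shell
# Satisfy both numba's "<1.25,>=1.21" and scipy's "<1.28.0,>=1.21.6" pins
pip install "numpy<1.25,>=1.21.6"
pip install -U llama-cpp-python
```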

werruww avatar Aug 05 '24 12:08 werruww

I had the same issue. I got it working by using uv to install the package: `uv pip install -U llama-cpp-python`. The difference seems to be that pip tried to build the wheel from source, while uv did the right thing and used the prebuilt one.

jcardonnet avatar Aug 05 '24 21:08 jcardonnet

pip install -U llama-cpp-python

uv pip install -U llama-cpp-python

What do you mean? Which of these is the correct installation command, given that the build stops at the pyproject.toml step?

werruww avatar Aug 06 '24 01:08 werruww

(u) C:\Users\m\Desktop\1>python 1.py
Traceback (most recent call last):
  File "C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\llama_cpp.py", line 75, in _load_shared_library
    return ctypes.CDLL(str(lib_path), **cdll_args)  # type: ignore
  File "C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\ctypes\__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\lib\llama.dll' (or one of its dependencies). Try using the full path with constructor syntax.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\m\Desktop\1\1.py", line 2, in <module>
    from llama_cpp import Llama
  File "C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\llama_cpp.py", line 88, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\llama_cpp.py", line 77, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library 'C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\lib\llama.dll': Could not find module 'C:\Users\m\AppData\Local\anaconda3\envs\u\Lib\site-packages\llama_cpp\lib\llama.dll' (or one of its dependencies). Try using the full path with constructor syntax.

(u) C:\Users\m\Desktop\1>
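The `FileNotFoundError` above is ambiguous on Windows: the loader raises it both when `llama.dll` itself is absent and when one of its dependencies (for example an MSVC runtime DLL) is. A small sketch to tell the two cases apart; `diagnose_shared_lib` is a hypothetical helper written for this thread, not part of llama-cpp-python:

```python
import ctypes
from pathlib import Path

def diagnose_shared_lib(lib_path):
    """Classify a shared-library load failure.

    Returns "missing" if the file itself is absent, "load-error: ..."
    if the file exists but ctypes cannot load it (usually a missing
    dependency), and "ok" if it loads cleanly.
    """
    p = Path(lib_path)
    if not p.exists():
        return "missing"
    try:
        ctypes.CDLL(str(p))
    except OSError as exc:
        return f"load-error: {exc}"
    return "ok"

# Point this at the path from the traceback, e.g.:
# print(diagnose_shared_lib(r"C:\...\site-packages\llama_cpp\lib\llama.dll"))
```

If it reports "load-error" rather than "missing", the DLL shipped with the wheel but a dependency is unresolved, which usually points at a missing runtime rather than a broken install.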

werruww avatar Aug 06 '24 01:08 werruww

I have a similar issue here. Any solution yet? It didn't work with uv either. I tried different versions >=0.3.0, but it still throws the same error:

Using Python 3.9.18 environment at: /home/valen/miniconda3/envs/cenv
Resolved 6 packages in 41.48s
  × Failed to build `llama-cpp-python==0.3.0`
  ├─▶ The build backend returned an error
  ╰─▶ Call to `scikit_build_core.build.build_wheel` failed (exit status: 1)
      [stdout]
      *** scikit-build-core 0.10.7 using CMake 3.22.1 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmpat9gg5zr/build/CMakeInit.txt
      -- The C compiler identification is GNU 11.4.0
      -- The CXX compiler identification is GNU 11.4.0
...
    -- Build files have been written to: /tmp/tmpat9gg5zr/build
      *** Building project with Ninja...
[32/33] : && /usr/bin/g++  -pthread -B /home/valen/miniconda3/envs/cenv/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o
      vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli
      -Wl,-rpath,/tmp/tmpat9gg5zr/build/vendor/llama.cpp/src:/tmp/tmpat9gg5zr/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so
      vendor/llama.cpp/ggml/src/libggml.so && :
      FAILED: vendor/llama.cpp/examples/llava/llama-llava-cli
      : && /usr/bin/g++  -pthread -B /home/valen/miniconda3/envs/cenv/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o
      vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli
      -Wl,-rpath,/tmp/tmpat9gg5zr/build/vendor/llama.cpp/src:/tmp/tmpat9gg5zr/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so
      vendor/llama.cpp/ggml/src/libggml.so && :
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/libggml.so, not found (try using -rpath or -rpath-link)
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_barrier@GOMP_1.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_parallel@GOMP_4.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_thread_num@OMP_1.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_single_start@GOMP_1.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
      collect2: error: ld returned 1 exit status
      [33/33] : && /usr/bin/g++  -pthread -B /home/valen/miniconda3/envs/cenv/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o
      vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-minicpmv-cli
      -Wl,-rpath,/tmp/tmpat9gg5zr/build/vendor/llama.cpp/src:/tmp/tmpat9gg5zr/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so
      vendor/llama.cpp/ggml/src/libggml.so && :
      FAILED: vendor/llama.cpp/examples/llava/llama-minicpmv-cli
      : && /usr/bin/g++  -pthread -B /home/valen/miniconda3/envs/cenv/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o
      vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-minicpmv-cli
      -Wl,-rpath,/tmp/tmpat9gg5zr/build/vendor/llama.cpp/src:/tmp/tmpat9gg5zr/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so
      vendor/llama.cpp/ggml/src/libggml.so && :
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/libggml.so, not found (try using -rpath or -rpath-link)
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_barrier@GOMP_1.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_parallel@GOMP_4.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_thread_num@OMP_1.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_single_start@GOMP_1.0'
      /home/valen/miniconda3/envs/cenv/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
      collect2: error: ld returned 1 exit status
      ninja: build stopped: subcommand failed.

      [stderr]
      INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      INSTALL TARGETS - target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      INSTALL TARGETS - target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.

      *** CMake build failed

      hint: This usually indicates a problem with the package or the build environment.

ValenButtignol avatar Jan 28 '25 22:01 ValenButtignol
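The linker errors show that conda's `compiler_compat` ld cannot find `libgomp.so.1` (the GNU OpenMP runtime) that `libggml.so` needs. One workaround sketch, under the assumptions that conda-forge's `libgomp` package provides the library and that the build honours `LDFLAGS`; adjust the environment to yours:

```shell
# Assumption: conda-forge ships libgomp.so.1 into $CONDA_PREFIX/lib
conda install -c conda-forge libgomp
# Point the linker (and the built binaries' rpath) at the env's lib dir
export LDFLAGS="-L$CONDA_PREFIX/lib -Wl,-rpath,$CONDA_PREFIX/lib"
pip install --no-cache-dir llama-cpp-python
```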

For me, with conda, the build worked after pointing the loader at the environment's libraries: `LD_LIBRARY_PATH="<path_to>/miniconda3/envs/comfy/lib" pip install llama-cpp-python`

kampelmuehler avatar Jan 29 '25 11:01 kampelmuehler

Wow, that worked with version 0.3.7. Thanks!

ValenButtignol avatar Jan 29 '25 20:01 ValenButtignol