Manual install of `llama-cpp-python` on M1 mac
Just wanted to capture this...

I'm having to manually run the following on an M1 machine to install llama-cpp-python; otherwise the CMake build fails:
```shell
CMAKE_ARGS="-DLLAMA_METAL=on -DCMAKE_OSX_ARCHITECTURES=arm64" FORCE_CMAKE="1" pip install 'llama-cpp-python[server]==0.3.4'
```
It would be nice if Lumen added these flags when an M1 system is detected. If that's possible, we should upstream it to llama-cpp-python so many more people would benefit.
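As a sketch of what that detection could look like (illustrative only; `metal_flags` is a made-up helper, not part of Lumen or llama-cpp-python):

```shell
# Return the extra CMake flags an installer could add when it detects
# Apple Silicon macOS (Darwin + arm64); prints nothing otherwise.
metal_flags() {
  if [ "$1" = "Darwin" ] && [ "$2" = "arm64" ]; then
    echo "-DLLAMA_METAL=on -DCMAKE_OSX_ARCHITECTURES=arm64"
  fi
}

# Usage on the current machine:
#   CMAKE_ARGS="$(metal_flags "$(uname -s)" "$(uname -m)")" FORCE_CMAKE=1 \
#     pip install 'llama-cpp-python[server]'
```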
Likely related: https://github.com/abetlen/llama-cpp-python/issues/1956
Actually, the solution should probably be getting the Anaconda and/or conda-forge channels updated. I noticed https://anaconda.org/conda-forge/llama-cpp-python is 6 months old.
I don't think llama-cpp-python has had a new release in the past 6 months.
I tried reinstalling and got it to work like this:

```shell
brew install llama-cpp
export LLAMA_CPP_LIB=$(brew --prefix)/lib/libllama.dylib
export CMAKE_ARGS="-DLLAMA_BUILD=OFF -DLLAMA_CPP_LIB=$LLAMA_CPP_LIB"
# Install the Python bindings without rebuilding llama.cpp
pip install llama-cpp-python --no-cache-dir
```
Hm, okay, never mind, got too excited:

```
AttributeError: dlsym(0x8f1c13f0, llama_model_load_from_file): symbol not found. Did you mean: 'llama_load_model_from_file'?
```

The brewed library and the pip-installed bindings evidently disagree on the llama.cpp API version (that symbol was renamed upstream), so mixing them like this fails.
Actually, I think:

```shell
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python==0.3.5 --no-cache-dir --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
```

works on Python 3.11?
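Side note: those prebuilt wheels come from per-backend index URLs that follow one pattern (a sketch; `wheel_index` is a hypothetical helper, and backend names other than the `metal` one used here should be checked against the project's docs):

```shell
# Build the --extra-index-url for a given llama-cpp-python wheel backend.
wheel_index() {
  echo "https://abetlen.github.io/llama-cpp-python/whl/$1"
}

# e.g. pip install llama-cpp-python --extra-index-url "$(wheel_index metal)"
```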
This seems to have worked for me:
```shell
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
```

```
Installing collected packages: llama-cpp-python
  Attempting uninstall: llama-cpp-python
    Found existing installation: llama_cpp_python 0.3.4
    Uninstalling llama_cpp_python-0.3.4:
      Successfully uninstalled llama_cpp_python-0.3.4
Successfully installed llama-cpp-python-0.3.15
```