llama-cpp-python
PyInstaller hook script: copies the DLL around so PyInstaller builds work, if anyone needs it.
Works on Windows/Linux; macOS seems to work too.
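For reference, here is a minimal sketch of what such a hook might look like. This is illustrative, not the exact code in the PR — `collect_llama_libs` is a hypothetical helper, and the per-platform filenames are assumptions (artifact names have changed between releases; check your own `site-packages/llama_cpp`):

```python
import os
import sys

# Illustrative sketch of a PyInstaller hook for llama_cpp
# (e.g. hooks/hook-llama_cpp.py). Filenames per platform are assumptions.
def collect_llama_libs(package_path):
    """Return (source, dest) pairs suitable for a hook's `datas` list."""
    if sys.platform == 'win32':
        lib_name = 'llama.dll'
    elif sys.platform == 'darwin':
        lib_name = 'libllama.dylib'
    else:
        lib_name = 'libllama.so'
    so_path = os.path.join(package_path, 'llama_cpp', lib_name)
    # PyInstaller will copy so_path into the bundled app's llama_cpp folder.
    return [(so_path, 'llama_cpp')]

# In the hook file itself you would then set, roughly:
# datas = collect_llama_libs(package_path)
```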
Very helpful thanks!
@earonesty, until this PR gets merged, can we do this manually by modifying the existing `.spec` file generated by PyInstaller?
You can just specify an additional hooks directory on the command line when you build (`pyinstaller --additional-hooks-dir=./hooks main.py`).
this is great, thank you
FYI on Mac I'm also seeing `libllama.dylib`. I edited the hook file like so and it's working great:

```python
elif sys.platform == 'darwin':  # macOS
    so_path = os.path.join(package_path, 'llama_cpp', 'libllama.dylib')
    datas.append((so_path, 'llama_cpp'))
```
On Mac I'm having issues when setting `n_gpu_layers` to 1. Any ideas on how to fix? I added the `ggml-metal.metal` file to the `datas` array but still no luck:
```
llama_new_context_with_model: kv self size = 1000.00 MiB
llama_build_graph: non-view tensors processed: 740/740
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=260 "The file “ggml-metal.metal” couldn’t be opened because there is no such file." UserInfo={NSFilePath=ggml-metal.metal, NSUnderlyingError=0x13fe76d20 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
llama_new_context_with_model: ggml_metal_init() failed
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
```
probably need to add "ggml-metal.metal" to the list of files picked up by the hook.
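That suggestion could be sketched roughly like this. This is a hedged guess, not code from the PR — `extra_metal_datas` is a hypothetical helper, and the follow-up reply reports that bundling the file alone did not resolve the error for them:

```python
import os
import sys

def extra_metal_datas(package_path):
    """On macOS, also bundle ggml-metal.metal alongside the shared library,
    so ggml_metal_init's fallback to loading shader source can find it."""
    datas = []
    if sys.platform == 'darwin':
        metal_path = os.path.join(package_path, 'llama_cpp', 'ggml-metal.metal')
        if os.path.exists(metal_path):
            datas.append((metal_path, 'llama_cpp'))
    return datas
```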
@earonesty I added it to the `datas` array and rebuilt, but it's still failing.
Hi @earonesty! I get an error when running `pyinstaller --additional-hooks-dir=./hooks main.py` with the hooks folder created and your script file in the folder:

```
Unable to find '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.so' when adding binary and data files.
```
Know how to solve it potentially?
I am trying to package a tkinter file using pyinstaller and my tkinter file has llama-cpp-python installed and imported.
@eric-prog Just a guess, but I believe the name for the artifact was changed from `llama.so` to `libllama.so`. The same goes for the dylib and dll artifacts.
Making that small change in the script worked for me! You can verify in your own environment by checking `.venv/lib/python3.11/site-packages/llama_cpp` in your project to see the names of the build artifacts (note: you may need to replace `python3.11` with the version your venv uses).
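That check can be scripted. The helper below is illustrative (not part of the hook), and the example path is an assumption you should adjust for your venv:

```python
import pathlib

def list_shared_libs(package_dir):
    """Return the names of shared-library files anywhere under package_dir."""
    exts = {'.so', '.dylib', '.dll'}
    root = pathlib.Path(package_dir)
    return sorted(p.name for p in root.rglob('*') if p.suffix in exts)

# Example (path is an assumption; adjust for your venv and Python version):
# print(list_shared_libs('.venv/lib/python3.11/site-packages/llama_cpp'))
```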
Unfortunately, this pull request doesn't fix the issue for me. Adding the above-mentioned `./hooks` folder with the `hook-llama_cpp.py` file (as per commit 3a9227c, with `libllama.so`) still fails on Linux: PyInstaller produces the executable, but running it fails with the same error

```
FileNotFoundError: Shared library with base name 'llama' not found
```

as in issue #1475.
Cool, but I had to change this:

```python
dll_path = os.path.join(package_path, 'llama_cpp', 'llama.dll')
```

to this:

```python
dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'llama.dll')
```

for my project to work!
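Since the library's location has evidently moved between releases, one option is for the hook to probe both layouts rather than hard-code one. This is a sketch under that assumption; `locate_llama_dll` is a hypothetical helper:

```python
import os

def locate_llama_dll(package_path):
    """Return the first existing candidate path, covering both the newer
    llama_cpp/lib/llama.dll layout and the older llama_cpp/llama.dll one."""
    candidates = [
        os.path.join(package_path, 'llama_cpp', 'lib', 'llama.dll'),
        os.path.join(package_path, 'llama_cpp', 'llama.dll'),
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    return None
```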