
pyinstaller hook script

Open earonesty opened this issue 1 year ago • 12 comments

Copies the DLL around so PyInstaller works.

If anyone needs it.

Works on Windows/Linux.

macOS seems to work too.
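The hook script itself is attached to the PR rather than reproduced in this thread; as a rough sketch of the idea only (the function names and structure here are assumptions, not the PR's exact contents — a real hook would get `package_path` from PyInstaller's `get_package_paths('llama_cpp')` and expose a module-level `datas` list):

```python
# hook-llama_cpp.py -- rough sketch of a PyInstaller hook for llama_cpp.
# In a real hook, PyInstaller's get_package_paths('llama_cpp') supplies
# package_path, and the module-level `datas` list is what PyInstaller reads.
import os

def shared_lib_name(platform: str) -> str:
    """Map a sys.platform tag to the llama.cpp shared-library filename."""
    if platform == 'win32':
        return 'llama.dll'
    if platform == 'darwin':
        return 'libllama.dylib'
    return 'libllama.so'  # Linux and other POSIX

def hook_datas(package_path: str, platform: str) -> list:
    """Build the (source, destination) entries to bundle alongside the app."""
    lib = os.path.join(package_path, shared_lib_name(platform))
    return [(lib, 'llama_cpp')] if os.path.exists(lib) else []
```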

earonesty avatar Sep 13 '23 19:09 earonesty

Very helpful, thanks!

903124 avatar Oct 06 '23 11:10 903124

@earonesty, until this PR gets merged, can we do this manually by modifying the existing .spec file generated by PyInstaller?

bishwenduk029 avatar Oct 09 '23 11:10 bishwenduk029

You can just specify an additional hooks directory on the command line when you build.
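For example, assuming the hook script is saved as hooks/hook-llama_cpp.py:

```shell
# Point PyInstaller at the extra hooks directory when building
pyinstaller --additional-hooks-dir=./hooks main.py
```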

earonesty avatar Oct 09 '23 13:10 earonesty

this is great, thank you

inferense avatar Oct 18 '23 17:10 inferense

FYI, on Mac I'm also seeing libllama.dylib. I edited the hook file like so, and it's working great:

elif sys.platform == 'darwin':  # Mac
    so_path = os.path.join(package_path, 'llama_cpp', 'libllama.dylib')
    datas.append((so_path, 'llama_cpp'))

robertritz avatar Nov 09 '23 03:11 robertritz

On Mac I'm having issues when setting n_gpu_layers to 1. Any ideas on how to fix it? I added the ggml-metal.metal file to the datas array, but still no luck:

llama_new_context_with_model: kv self size  = 1000.00 MiB
llama_build_graph: non-view tensors processed: 740/740
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=260 "The file “ggml-metal.metal” couldn’t be opened because there is no such file." UserInfo={NSFilePath=ggml-metal.metal, NSUnderlyingError=0x13fe76d20 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
llama_new_context_with_model: ggml_metal_init() failed
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | 

demattosanthony avatar Dec 04 '23 15:12 demattosanthony

You probably need to add "ggml-metal.metal" to the list of files picked up by the hook.
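A sketch of what that might look like in the hook's darwin branch (the file names and layout are assumptions — verify them against your installed llama_cpp directory, and note from the log above that at run time ggml looks for the shader in the bundle path and then the cwd):

```python
# Sketch: extend the hook's darwin branch to also bundle the Metal shader.
# File names/locations are assumptions -- check your installed llama_cpp dir.
import os

def darwin_datas(package_path: str) -> list:
    """Collect the dylib and the Metal shader for bundling on macOS."""
    datas = []
    for name in ('libllama.dylib', 'ggml-metal.metal'):
        path = os.path.join(package_path, name)
        if os.path.exists(path):
            datas.append((path, 'llama_cpp'))
    return datas
```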


earonesty avatar Dec 04 '23 16:12 earonesty

@earonesty I added it to the datas array and rebuilt, but it's still failing.

demattosanthony avatar Dec 04 '23 16:12 demattosanthony

Hi @earonesty! I get an error when running pyinstaller --additional-hooks-dir=./hooks main.py with the hooks folder created and your script file in it:

Unable to find '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.so' when adding binary and data files.

Any idea how to solve this?

I am trying to package a Tkinter app with PyInstaller, and it has llama-cpp-python installed and imported.

eric-prog avatar Jan 06 '24 06:01 eric-prog

@eric-prog Just a guess, but I believe the name of the artifact was changed from "llama.so" to "libllama.so". The same goes for the dylib and dll artifacts.

Making that small change in the script worked for me! You can verify in your own environment by checking ".venv/lib/python3.11/site-packages/llama_cpp" in your project (note: you may need to replace python3.11 with the version your venv uses) to see the names of the build artifacts.
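A quick way to do that check from Python without hunting down the site-packages path by hand — this helper is my own sketch, not part of the hook:

```python
# List the native build artifacts an installed package actually ships,
# so the hook can be pointed at the right filenames.
import importlib.util
import os

def package_artifacts(pkg: str, exts=('.so', '.dylib', '.dll', '.metal')):
    """Return the native artifact filenames found inside an installed package."""
    spec = importlib.util.find_spec(pkg)
    if spec is None or not spec.submodule_search_locations:
        return []  # not installed, or not a package
    names = []
    for location in spec.submodule_search_locations:
        for name in sorted(os.listdir(location)):
            if name.endswith(exts):
                names.append(name)
    return names

# e.g. package_artifacts('llama_cpp') might return ['libllama.so'],
# depending on the wheel and platform
```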

averypfeiffer avatar Jan 22 '24 20:01 averypfeiffer

Unfortunately, this pull request doesn't fix the issue for me on Linux. I added the above-mentioned ./hooks folder with the hook-llama_cpp.py file (as per commit 3a9227c, with libllama.so), and PyInstaller produces the executable, but running it fails with the same error

FileNotFoundError: Shared library with base name 'llama' not found

as in issue #1475.

alexeygridnev avatar Aug 03 '24 21:08 alexeygridnev

Cool, but for my project to work I had to change this:

dll_path = os.path.join(package_path, 'llama_cpp', 'llama.dll')

to this:

dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'llama.dll')
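Since the library's name and location have evidently moved between releases, a hook that probes the candidate paths reported in this thread may be more durable. A sketch (the candidate lists below are assumptions collected from the comments above, not an exhaustive inventory):

```python
# Sketch: probe the candidate names/locations mentioned in this thread
# instead of hard-coding a single path in the hook.
import os

CANDIDATE_NAMES = ('llama.dll', 'libllama.dylib', 'libllama.so', 'llama.so')

def find_llama_lib(package_path: str):
    """Return the first llama shared library found under the package, or None."""
    for subdir in ('', 'lib'):  # newer wheels ship the library under lib/
        for name in CANDIDATE_NAMES:
            candidate = os.path.join(package_path, subdir, name)
            if os.path.exists(candidate):
                return candidate
    return None
```

Depending on your PyInstaller version, the collect_dynamic_libs('llama_cpp') helper from PyInstaller.utils.hooks may also pick these up without any path guessing.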

gudarzi avatar Sep 20 '24 19:09 gudarzi