
CDLL llama_init_from_file function

Holpak opened this issue 2 years ago · 0 comments

I'm trying to load a model and I get this error:

llama.cpp: loading model from ggml-model.bin
Traceback (most recent call last):
  File "D:\Projects\llama-cpp-python-test\main.py", line 2, in <module>
    llm = Llama(model_path="ggml-model.bin")
  File "D:\Program Files\Python310\lib\site-packages\llama_cpp\llama.py", line 107, in __init__
    self.ctx = llama_cpp.llama_init_from_file(
  File "D:\Program Files\Python310\lib\site-packages\llama_cpp\llama_cpp.py", line 152, in llama_init_from_file
    return _lib.llama_init_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d
Exception ignored in: <function Llama.__del__ at 0x000002595C895BD0>
Traceback (most recent call last):
  File "D:\Program Files\Python310\lib\site-packages\llama_cpp\llama.py", line 785, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
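
For reference, this is the whole of main.py (the import line is reconstructed from the traceback; the model filename matches the log above):

# main.py - minimal reproduction of the crash above
from llama_cpp import Llama

llm = Llama(model_path="ggml-model.bin")  # dies inside llama_init_from_file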

After digging through the code, I came to the conclusion that the error is raised when the llama_init_from_file function, which is loaded from the shared library, is actually executed:

def llama_init_from_file(
    path_model: bytes, params: llama_context_params
) -> llama_context_p:
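    # thin wrapper: execution crosses into the native library on the next line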
    return _lib.llama_init_from_file(path_model, params)
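
A minimal sketch that bypasses the Llama wrapper and calls the binding directly, so the crash, if it reproduces, is clearly inside the native function (the module layout is assumed from the traceback; llama_context_default_params is assumed to be the companion binding that fills in llama_context_params):

# Sketch: call the raw ctypes binding directly, skipping the Llama wrapper.
# llama_context_default_params is an assumption about the module's API.
import llama_cpp

params = llama_cpp.llama_context_default_params()
ctx = llama_cpp.llama_init_from_file(b"ggml-model.bin", params)  # path as bytes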

The shared library itself loads normally, and all the other functions in it work; only llama_init_from_file triggers this error.
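
Windows error 0xc000001d is STATUS_ILLEGAL_INSTRUCTION, so my guess (an assumption, not something verified here) is that the shared library was built with SIMD instructions such as AVX2 that this CPU doesn't support. A quick sketch to check which features the CPU actually reports, using the third-party py-cpuinfo package:

# Sketch: print the SIMD features this CPU advertises, using the third-party
# py-cpuinfo package (pip install py-cpuinfo). If llama.cpp was compiled with
# AVX2 but 'avx2' is missing below, an illegal-instruction crash would fit.
import cpuinfo

flags = set(cpuinfo.get_cpu_info().get("flags", []))
for feature in ("avx", "avx2", "f16c", "fma"):
    print(feature, "->", "yes" if feature in flags else "no")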

Holpak · Apr 19 '23, 17:04