LocalAIVoiceChat

Loading Model

Open witchpiggie opened this issue 4 months ago • 0 comments

I seem to be having a problem getting the model to load.

I downloaded the specified model and put it in a 'models' sub-folder of the repository, then set 'model_path' in 'creation_params.json' as specified.

However, I get the error below every time, regardless of which folder I put the model in (inside or outside the repo) and whether I use an absolute or a relative path.

<path_to_repo> = The path where the repo was cloned.

```
  File "<path_to_repo>\venv\lib\site-packages\llama_cpp\_internals.py", line 58, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: .\models\zephyr-7b-beta.Q5_K_M.gguf
```
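In case it's useful: since llama.cpp resolves relative paths against the current working directory (not the script's location), a quick sketch like the one below can rule out a CWD mismatch before the path ever reaches llama-cpp-python. The model filename is just the one from the error above; adjust to taste.

```python
from pathlib import Path

# Hypothetical pre-flight check: see what the relative path actually
# resolves to from the current working directory, and whether a file
# exists there at all.
model_path = Path("./models/zephyr-7b-beta.Q5_K_M.gguf")

# The absolute path llama.cpp will try to open.
print("resolved to:", model_path.resolve())

# False here means the path (or the directory you launched from) is wrong,
# independent of anything llama-cpp-python does.
print("file exists:", model_path.is_file())
```

If the file does exist but loading still fails, a truncated or corrupted download is the next suspect (the GGUF file size should match the one listed on the model's download page).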

Any help would be appreciated.

witchpiggie · Jul 29 '25 15:07