
Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf fails to load with the Python SDK.


Bug Report

When attempting to load Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf via the Python SDK, I get the following error:

llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 292, got 291
llama_load_model_from_file_gpt4all: failed to load model
LLAMA ERROR: failed to load model from model/Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf
LLaMA ERROR: prompt won't work with an unloaded model!

Steps to Reproduce

  1. Set the model name to Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf in the Python SDK code.
  2. Download / load the model.
  3. Try to generate a response to a prompt (a minimal reproduction snippet is shown below).
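
For reference, the loading code is essentially the following; the model directory and prompt here are placeholders rather than my exact values, but the failure happens at load time regardless:

from gpt4all import GPT4All

# Point the SDK at a local "model" directory and let it download the file if missing.
# (Directory name and prompt are placeholders.)
model = GPT4All(
    "Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf",
    model_path="model",
    allow_download=True,
)

# Never reached in practice: the constructor above already fails with the
# "wrong number of tensors" error, so generation reports an unloaded model.
output = model.generate("Hello, how are you?", max_tokens=64)
print(output)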

Expected Behavior

I expected the model to load and a response to be generated for the prompt.

Your Environment

  • GPT4All version (if applicable): Python package 2.7.0
  • Operating System: Ubuntu 22.04.4
  • Chat model used (if applicable): Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf

evanpurl · Jul 30, 2024