
QWEN/R1 (1.5B) does not load cleanly when run from Python

Open · Alexx220 opened this issue 8 months ago · 1 comment

from gpt4all import GPT4All  # type: ignore
model = GPT4All("DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf")
output = model.generate("The capital of France is ", max_tokens=3)
print(output)

Error output after the model self-downloads:

llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen'
llama_load_model_from_file: failed to load model
LLAMA ERROR: failed to load model from /home/me/.cache/gpt4all/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf
LLaMA ERROR: prompt won't work with an unloaded model!
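For anyone hitting the same error, below is a minimal sketch of a defensive wrapper around the same repro. It assumes the root cause is that the bundled llama.cpp backend in the installed gpt4all release does not recognize the 'deepseek-r1-qwen' pre-tokenizer, so it prints the installed gpt4all version and falls back to another model if the R1 distill fails to load. The fallback filename is a placeholder, not a real catalog name, and the exact exception type raised by the bindings may vary.

```python
# Hedged sketch: try the R1-distill model first, fall back if the backend
# cannot load its vocabulary. Substitute FALLBACK with any model your
# installed gpt4all release is known to support.
from importlib.metadata import version
from gpt4all import GPT4All  # type: ignore

# Older gpt4all releases predate support for the 'deepseek-r1-qwen' pre-tokenizer.
print("gpt4all version:", version("gpt4all"))

PRIMARY = "DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf"
FALLBACK = "SOME-KNOWN-GOOD-MODEL-Q4_0.gguf"  # placeholder filename

def load_model(name: str) -> GPT4All:
    # The constructor downloads the file if missing and then loads it;
    # the vocabulary error above surfaces here as an exception.
    return GPT4All(name)

try:
    model = load_model(PRIMARY)
except Exception as exc:  # exact exception type depends on the bindings version
    print(f"Could not load {PRIMARY}: {exc}")
    model = load_model(FALLBACK)

output = model.generate("The capital of France is ", max_tokens=3)
print(output)
```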

Alexx220 · Mar 09 '25