gpt4all
QWEN/R1 (1.5B) does not load cleanly when run from Python
```python
from gpt4all import GPT4All  # type: ignore

model = GPT4All("DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf")
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
```
Error output after the model self-downloads:

```
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen'
llama_load_model_from_file: failed to load model
LLAMA ERROR: failed to load model from /home/me/.cache/gpt4all/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf
LLaMA ERROR: prompt won't work with an unloaded model!
```
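For context on the error: the "unknown pre-tokenizer type" message comes from the `tokenizer.ggml.pre` key stored in the GGUF metadata; the llama.cpp build bundled with gpt4all predates the 'deepseek-r1-qwen' pre-tokenizer and refuses the file. As an illustration only (this is not gpt4all's API), here is a minimal sketch that parses a synthetic GGUF header to show where that key lives; a real model file carries many more keys plus tensor data, and this reader handles only string-valued keys.

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # GGUF metadata value type for strings

def gguf_string(s: bytes) -> bytes:
    """Encode a GGUF string: uint64 length followed by the bytes."""
    return struct.pack("<Q", len(s)) + s

# Synthetic GGUF header carrying only the key llama.cpp rejects here.
blob = (
    GGUF_MAGIC
    + struct.pack("<I", 3)   # format version
    + struct.pack("<Q", 0)   # tensor count
    + struct.pack("<Q", 1)   # metadata key/value count
    + gguf_string(b"tokenizer.ggml.pre")
    + struct.pack("<I", GGUF_TYPE_STRING)
    + gguf_string(b"deepseek-r1-qwen")
)

def read_pre_tokenizer(buf: bytes) -> str:
    """Walk GGUF metadata and return tokenizer.ggml.pre (strings only)."""
    assert buf[:4] == GGUF_MAGIC, "not a GGUF file"
    _version, _tensors, n_kv = struct.unpack_from("<IQQ", buf, 4)
    off = 4 + 4 + 8 + 8
    for _ in range(n_kv):
        klen, = struct.unpack_from("<Q", buf, off); off += 8
        key = buf[off:off + klen].decode(); off += klen
        vtype, = struct.unpack_from("<I", buf, off); off += 4
        assert vtype == GGUF_TYPE_STRING  # sketch: string values only
        vlen, = struct.unpack_from("<Q", buf, off); off += 8
        val = buf[off:off + vlen].decode(); off += vlen
        if key == "tokenizer.ggml.pre":
            return val
    return ""

print(read_pre_tokenizer(blob))  # → deepseek-r1-qwen
```

A loader that recognizes this pre-tokenizer value (a newer llama.cpp) loads the file; one that does not fails exactly as in the log above, so the fix is a gpt4all release with an updated llama.cpp rather than anything in the user's script.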