gpt4all
Calling generate before opening a chat session (with a system prompt specified) consistently malfunctions
Documentation
So I am using the following code. While using it I was quite confused why the answers were not good. I know the computer I am using is sub-optimal, but for most workloads it's fine.
Anyway, I am just using the default example (index.html) and am able to reproduce getting "2,0" whenever I ask about the capital of France. Without any system prompt specified, the result is not helpful either. However, if I specify an empty system prompt, it works.
I included multiple runs of the program. I might be doing something wrong here, but I feel like the default code should work out of the box. Maybe it's just my machine though ¯\_(ツ)_/¯.
from gpt4all import GPT4All

def send_message(model, prompt="The capital of France is "):
    output = model.generate(prompt, max_tokens=3)
    print(prompt + '\n\t' + output)

model = GPT4All("mistral-7b-openorca.gguf2.Q4_0.gguf", device="cpu")

# Plain completion, no chat session
send_message(model)

# Chat session with the default system prompt
with model.chat_session():
    send_message(model)

# Chat session with an explicitly empty system prompt
with model.chat_session(''):
    send_message(model)
Failed to load llamamodel-mainline-cuda-avxonly.dll: LoadLibraryExW failed with error 0x7e
Failed to load llamamodel-mainline-cuda.dll: LoadLibraryExW failed with error 0x7e
The capital of France is
2,0
The capital of France is
The capital of
The capital of France is
Paris.
Process finished with exit code 0
Failed to load llamamodel-mainline-cuda-avxonly.dll: LoadLibraryExW failed with error 0x7e
Failed to load llamamodel-mainline-cuda.dll: LoadLibraryExW failed with error 0x7e
The capital of France is
1,0
The capital of France is
The capital of
The capital of France is
Paris.
Process finished with exit code 0
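For what it's worth, here is a minimal sketch of what I suspect is happening. This is purely illustrative, not the actual gpt4all internals: outside a chat session the raw prompt seems to be sent to the model as-is (a plain completion), while inside a session the prompt gets wrapped in the model's chat template (ChatML-style tags here, since OpenOrca models use that format), plus the default system prompt. A continuation-style prompt like "The capital of France is " may no longer line up well with the templated input, which could explain the "2,0" answers.

```python
# Hypothetical sketch of prompt handling -- NOT the real gpt4all code.
# Function name, template tags, and structure are assumptions for illustration.
def build_prompt(user_prompt, in_session, system_prompt=None):
    if not in_session:
        # Raw completion: the model continues the text directly.
        return user_prompt
    # Inside a chat session the prompt is wrapped in a chat template,
    # so a continuation-style prompt no longer reads as plain text.
    wrapped = f"<|im_start|>user\n{user_prompt}<|im_end|>\n<|im_start|>assistant\n"
    if system_prompt:
        # A non-empty system prompt is prepended as its own template block.
        wrapped = f"<|im_start|>system\n{system_prompt}<|im_end|>\n" + wrapped
    return wrapped

print(build_prompt("The capital of France is ", in_session=False))
print(build_prompt("The capital of France is ", in_session=True))
print(build_prompt("The capital of France is ", in_session=True,
                   system_prompt="You are a helpful assistant."))
```

If that is roughly what happens, it would also explain why `chat_session('')` behaves differently from `chat_session()` with the default system prompt.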