Wrong answer from the Basic Usage example
Hi,
I ran the Basic Usage example with:

```shell
python run_inference.py -m models/Llama3-8B-1.58-100B-tokens/ggml-model-i2_s.gguf -p "Daniel went back to the the the garden. Mary travelled to the kitchen. Sandra journeyed to the kitchen. Sandra went to the hallway. John went to the bedroom. Mary went back to the garden. Where is Mary?\nAnswer:" -n 6 -temp 0
```
But I don't get the expected answer. Here is the output:

```
Daniel went back to the the the garden. Mary travelled to the kitchen. Sandra journeyed to the kitchen. Sandra went to the hallway. John went to the bedroom. Mary went back to the garden. Where is Mary? Answer:imersimersimersimersimersimers
```
Could you point out where the issue lies?
Best, Xiaoming
Same issue.
I got the correct answer for the given example, but for other questions I tried, the model gave unexpectedly strange answers. Some answers were cut off in the middle of a sentence, and some were completely irrelevant. I still haven't figured out how to fix this problem.
The currently available models are not instruct-tuned, so they won't answer questions the way you're used to; they act more like autocomplete, continuing whatever text you give them.
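To illustrate the point above: with a base (non-instruct) model, queries tend to work better when framed as text to be continued rather than as chat turns. Here is a minimal, hypothetical sketch; the helper name is my own, and the `Answer:` convention follows the prompt used in the original report. It only builds the prompt string and does not call any model.

```python
# Sketch: framing a question as a completion for a base model.
# A base model continues whatever text it is given, so the prompt
# should end exactly where the desired answer begins.

def make_completion_prompt(context: str, question: str) -> str:
    """Build a completion-style prompt ending right before the answer.

    (Hypothetical helper -- not part of the BitNet repo.)
    """
    return f"{context} {question}\nAnswer:"

context = (
    "Mary travelled to the kitchen. "
    "Mary went back to the garden."
)
prompt = make_completion_prompt(context, "Where is Mary?")
# The model is then expected to continue after "Answer:" (e.g. with
# " garden"), rather than respond like a chat assistant.
print(prompt)
```

With greedy decoding (`-temp 0`) and a small `-n`, a base model given such a prompt has a better chance of emitting just the short continuation you want before drifting off-topic.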
I have similar problems: even basic questions such as "what is the square root of a number" give a random answer, or a pseudo-correct answer followed by garbage. I tried all three models in the README.
We strongly recommend using our official BitNet model, thanks: https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf