llama-cpp-python
DeepSeek-R1-Distill-Qwen-1.5B inference answer is wrong
I encountered the following problem when using llama-cpp-python on a Mac: the model's answer is completely unreasonable. The red box in the screenshot contains the question and the answer.
The configuration is as follows:
- Model: DeepSeek-R1-Distill-Qwen-1.5B.gguf
- Quantization: Q4_K_M
- llama_cpp_python: 0.3.7
- API: create_chat_completion

Model input:
[{'role': 'system', 'content': 'you are a helpful assistant'}, {'role': 'user', 'content': 'There are 20 chickens and rabbits in a cage. It is known that these chickens and rabbits have 56 legs in total. Chickens have two legs and rabbits have four legs. How many chickens and rabbits are there in the cage?'}]
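For reference, the answer the model should produce can be verified independently of the model itself; a minimal sketch of the arithmetic (variable names are my own, not from the original post):

```python
# Classic chickens-and-rabbits puzzle from the prompt:
# heads: c + r = 20, legs: 2c + 4r = 56
heads, legs = 20, 56

# Each rabbit contributes 2 legs more than a chicken,
# so the leg surplus over an all-chicken cage gives the rabbit count.
rabbits = (legs - 2 * heads) // 2
chickens = heads - rabbits

print(chickens, rabbits)  # → 12 8
```

So a correct response should conclude there are 12 chickens and 8 rabbits; anything else indicates the generation went wrong.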