elhambb

Results: 2 comments of elhambb

I tried `top_k=1`, but it didn't solve the problem. The results are the same.

I tried `temperature = -0.3`:

```python
output = llm.create_completion(prompt,
    max_tokens = 200,
    echo = False,
    temperature = -0.3,
)
```

which produced this traceback (truncated in the original):

```
1 frames
/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py in _create_completion(self, prompt, suffix, max_tokens, temperature, top_p, min_p, ...
```
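For comparison, a minimal sketch of the same `create_completion` call with a non-negative temperature; the model path and prompt below are illustrative placeholders, not from the original comment:

```python
from llama_cpp import Llama

# Hypothetical model path; substitute a local GGUF file.
llm = Llama(model_path="./model.gguf")

output = llm.create_completion(
    "Q: What is the capital of France? A:",  # illustrative prompt
    max_tokens=200,
    echo=False,
    temperature=0.0,  # 0.0 is commonly used for near-deterministic (greedy) sampling
)
print(output["choices"][0]["text"])
```

If the goal is identical results across runs, the usual approach is a low `temperature` combined with a fixed `seed` (accepted by the `Llama` constructor in recent llama-cpp-python versions), rather than a negative temperature.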