7shi
Thank you for your suggestion about adjusting the temperature parameter. I understand and respect your approach. However, I'd like to clarify the current critical issue: When Ollama enters an infinite...
I agree with removing the fallback mechanism. Clear error messages would be more helpful for troubleshooting. Currently, when OllamaTranslator raises an error, the system keeps retrying indefinitely. I understand this...
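To illustrate the behavior being proposed, here is a minimal sketch of capping retries and surfacing a clear error instead of retrying indefinitely. The names (`TranslationError`, `translate_with_retries`) are hypothetical and not the project's actual API; they only show the shape of the change.

```python
# Sketch only: bounded retries with a descriptive error, instead of an
# infinite retry loop. All names here are illustrative, not the real API.
class TranslationError(RuntimeError):
    """Raised when translation keeps failing after the retry budget is spent."""


def translate_with_retries(translate, text, max_retries=3):
    """Call `translate(text)`, retrying up to `max_retries` times.

    On persistent failure, re-raise as a TranslationError with a clear
    message rather than looping forever.
    """
    last_exc = None
    for _attempt in range(max_retries):
        try:
            return translate(text)
        except Exception as exc:  # in real code, catch the specific error type
            last_exc = exc
    raise TranslationError(
        f"translation failed after {max_retries} attempts: {last_exc}"
    ) from last_exc
```

With this shape, a persistent Ollama failure stops after a fixed number of attempts and the final exception message tells the user what went wrong and how many times it was tried.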
I modified the sample code in the README.

llama_cpp_python: 0.2.90

```py
import llama_cpp
import ctypes

llama_cpp.llama_backend_init(False)  # Must be called once at the start of each program
lparams = llama_cpp.llama_context_default_params()
mparams...
```