Fox Buchele

3 comments by Fox Buchele

A few weeks ago they changed the instructions (and the environment variable) needed to use CUDA. It used to be '-DLLAMA_CUBLAS=ON' and now it's '-DLLAMA_CUDA=on'. Try uninstalling llama-cpp-python, adding the...
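A sketch of that reinstall, assuming pip and the `CMAKE_ARGS` environment variable that llama-cpp-python reads at build time (flag name as above; exact steps may vary between versions):

```shell
# Remove the old CPU-only build first.
pip uninstall -y llama-cpp-python

# Rebuild from source with the newer CUDA flag. --no-cache-dir and
# --force-reinstall keep pip from reusing a previously compiled wheel.
CMAKE_ARGS="-DLLAMA_CUDA=on" pip install --no-cache-dir --force-reinstall llama-cpp-python
```

If the build still lands on CPU, clearing pip's wheel cache before reinstalling is worth trying, since a cached non-CUDA wheel will silently skip the compile step.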

I stumbled upon this issue while debugging a separate issue in a different repository. After updating several packages, including Guidance (which depends on llama-cpp-python), I noticed my Mistral Instruct responses...

Does it behave any differently if you add the @guidance decorator to your tools? Everything else looks fine to me on a first glance.
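For illustration, a minimal sketch of what that decorator usage might look like, assuming a recent guidance release where decorated functions receive the model state as their first argument (the `add` tool here is hypothetical, not from the original thread):

```python
import guidance

# Hypothetical tool: wrapping it with @guidance makes it composable with
# guidance programs. The first parameter, lm, is the current model state.
@guidance
def add(lm, a, b):
    # Append the tool's result to the model's output stream and
    # return the updated state, as guidance-decorated functions must.
    lm += f"{a + b}"
    return lm
```

Without the decorator, guidance may treat the function as a plain Python callable rather than a grammar function, which can change how tool calls are interpolated into the prompt.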