3 comments of Stefan Georg Beck
Same problem here ... using a GeForce RTX 4060 (8 GB). Ollama via llama-index.
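For context, a minimal sketch of the llama-index Ollama setup being referred to; the package name llama-index-llms-ollama, the import path, and the model name "llama3" are assumptions based on recent llama-index releases, not details from the original comment.

# pip install llama-index-llms-ollama
# Minimal sketch: driving a local Ollama server through llama-index.
# Model name and timeout are placeholders.
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3", request_timeout=120.0)
print(llm.complete("Say hello."))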
I could solve the error by using the ollama Python library directly, not the llama-index Ollama integration. You can also experiment with the ollama installation from the terminal.
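A minimal sketch of that workaround, calling the ollama Python package directly instead of going through llama-index; the model name "llama3" is a placeholder for whatever model you have pulled locally.

# pip install ollama
import ollama

# Talks to the local Ollama server directly, bypassing llama-index.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["message"]["content"])

From the terminal, ollama list and ollama run llama3 are useful for checking that the server and the model work on their own before involving any Python wrapper.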
Just use llama-cpp instead of ollama. I switched and it works.
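A minimal sketch of the llama-cpp route using the llama-cpp-python bindings, assuming a local GGUF model file; the model path and the n_gpu_layers value are placeholders, not part of the original comment.

# pip install llama-cpp-python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the GPU;
# lower it if the 8 GB card runs out of memory.
llm = Llama(model_path="./model.gguf", n_gpu_layers=-1, n_ctx=2048)

out = llm("Q: Say hello.\nA:", max_tokens=32)
print(out["choices"][0]["text"])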