Igor Schlumberger
@madsamjp Did you try 0.1.14, which is out now?
@phalexo Ollama 0.1.15 is released. It's worth a try.
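For anyone following along, a quick sketch of how to check the installed version and upgrade; `ollama --version` is the standard CLI command, and the install script shown is Ollama's documented Linux upgrade path (macOS users would download the latest app instead):

```shell
# Check which Ollama version is currently installed.
ollama --version

# On Linux, re-running the official install script upgrades in place.
# On macOS, download the latest app from https://ollama.com instead.
curl -fsSL https://ollama.com/install.sh | sh
```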
Hi @madsamjp Great news. -DLLAMA_CUDA_FORCE_MMQ=on forces the use of the MMQ (quantized matrix multiplication) kernels on the GPU even when the driver is not reported as CUDA-compatible; this parameter cannot be included...
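For context, a minimal sketch of where such a flag would be passed when building llama.cpp from source with CMake. The LLAMA_CUBLAS flag shown alongside it is an assumption based on llama.cpp's CUDA build options from that era; flag names have changed across releases:

```shell
# Sketch: building llama.cpp from source with the MMQ kernels forced on.
# LLAMA_CUBLAS enables the CUDA backend (assumed flag name for this era).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CUBLAS=on -DLLAMA_CUDA_FORCE_MMQ=on
cmake --build build --config Release
```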
Works on a Mac with 0.1.14 and 32 GB.
I use Ollama 0.1.7 on a MacBook M2 with 32 GB, macOS 13.5.2 (22G91), and cannot reproduce the issue:

```shell
(base) igor@macIgor ~ % ollama pull mistral
pulling manifest
pulling 6ae280299950... 100% |████████████████████████████████|...
```
Could you try version 0.1.20? It could solve the issue.
@jukofyork I'm not experiencing the errors myself; I've seen them while reading through many issues.
Hi @morandalex Can you give more info about the available memory, type of computer, and version of Ollama? It works well for me:

```shell
Last login: Mon Jan 8 18:39:10 on ttys016
...
```
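For anyone gathering the requested details on a Mac, a sketch of standard commands that report them (all three are stock macOS or Ollama CLI tools):

```shell
# Installed Ollama version.
ollama --version

# macOS version and build.
sw_vers

# Physical memory in bytes.
sysctl hw.memsize
```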
Hi @morandalex Can you try Dolphin Phi? It's a 2.7B uncensored model based on the Phi language model by Microsoft Research.

```shell
ollama run dolphin-phi
```

You can also...
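If the interactive run works, the same model can also be exercised through Ollama's HTTP API; a minimal sketch using the documented /api/generate endpoint on the default port 11434:

```shell
# Send a one-off prompt to the local Ollama server; the response streams back as JSON lines.
curl http://localhost:11434/api/generate -d '{
  "model": "dolphin-phi",
  "prompt": "Why is the sky blue?"
}'
```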
@morandalex Interesting. Can you close the issue?