Patrick Devine
I tried this on both Mac and Windows 11. I *believe* it's working as intended. The model definitely has some issues with hallucinations :-D
Can you resolve any hosts? Looks like you're having network issues.
Hi @swetavsavarn02, I'm sorry you're still running into the issue. It's almost certainly an issue with your network setup and not with Ollama. That's why I'd asked you to...
I started on a change for buttons, but haven't gotten too far yet. Buttons really require a notion of "focus" so that you know which button you're actually pushing both...
I did some digging and it looks like nsf/termbox-go, the library that termui uses, doesn't have any notion of "key down" and "key up", only "key pressed". I went ahead...
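For anyone curious, here's a minimal standalone sketch of the termbox-go event loop (just an illustration, not termui code): `PollEvent` hands back a single `EventKey` per keystroke, so all you ever get is "pressed"; there's no release event to pair it with.

```go
package main

import (
	"fmt"

	termbox "github.com/nsf/termbox-go"
)

func main() {
	if err := termbox.Init(); err != nil {
		panic(err)
	}

	// termbox delivers exactly one EventKey per keystroke; there is no
	// separate press/release pair, so a "key up" can never be observed.
	presses := 0
	for {
		ev := termbox.PollEvent()
		if ev.Type == termbox.EventKey {
			if ev.Key == termbox.KeyEsc {
				break
			}
			presses++
		}
	}

	termbox.Close()
	fmt.Printf("saw %d key-press events (and no key-up events)\n", presses)
}
```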
In PR #257 I just added a variable you can check to see if the widget is currently "Active", although I think maybe it would make sense to call it...
With concurrency you can do this now. Set the `OLLAMA_MAX_LOADED_MODELS` env variable for `ollama serve` to something greater than one. Set the `OLLAMA_KEEP_ALIVE` env variable either to a negative number or...
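Exact syntax depends on your shell, but on macOS/Linux it would look something like this (the values are just examples; tune them for your machine):

```shell
# Allow up to two models to stay loaded at once, and use a negative
# keep-alive so models aren't unloaded after a period of inactivity.
OLLAMA_MAX_LOADED_MODELS=2 OLLAMA_KEEP_ALIVE=-1 ollama serve
```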
I tested this again in Terminal and iTerm2 and I'm not seeing it in either. I checked iTerm2 and it does have hw acceleration turned on. I'm going to go...
I'm going to go ahead and close this. Models should work w/ hybrid CPU/GPU. If you want to see what portion is offloaded you can now use the new `ollama...