Bruce MacDonald
I'm happy to see errors being captured here, but I'm not sure how to test this yet. Relaying errors on model load is handled by the exception-catching logic here:...
Hi @heimu-liu, this seems similar to #1149. Are you using ollama behind a proxy? If so, check out the proxy configuration guide: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy Hopefully that helps, let me know if that...
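For reference, the setup in that FAQ amounts to pointing the standard proxy environment variable at your proxy before starting the server. A minimal sketch, where the proxy address is a placeholder you'd replace with your own:

```shell
# Route Ollama's outbound traffic (e.g. model pulls) through the proxy.
# https://proxy.example.com:3128 is a placeholder for your proxy address.
export HTTPS_PROXY=https://proxy.example.com:3128

# Start the server in the same environment so it inherits the variable:
ollama serve
```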
Just a note that if you're on Linux, you probably need to add the environment variable to the background service rather than to your current terminal session: https://github.com/ollama/ollama/blob/main/docs/linux.md#adding-ollama-as-a-startup-service-recommended
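As a sketch of what that looks like on a systemd-based distro: you set the variable in a service override instead of exporting it in a shell (the variable and value below are illustrative):

```shell
# Open an override file for the ollama service:
sudo systemctl edit ollama.service

# In the editor that opens, add the variable under a [Service] section, e.g.:
#   [Service]
#   Environment="HTTPS_PROXY=https://proxy.example.com:3128"

# Reload systemd and restart the service so it picks up the change:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```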
> Thank you for creating this! So awesome. Possible to list the graphics cards by name instead of having people search on their own. I would rather list them too,...
I took a shot at creating a table of supported NVIDIA GPUs
Thanks for doing this @mann1x, this looks good. There's another ongoing PR that moves some of this content around (#3682) which is going in soon, so I'll get this merged...
@sammcj I've rebased these changes onto the new structure in main in #4322, hoping to get it merged for `v0.1.36`. Thanks for bringing this to our attention originally. Closing this...
Hi @jl-codes, it looks like this is a scaffold to build on rather than a full UI project, correct? This feels like it could fit better under the `Extensions and Plugins` sections...
Thanks for the feedback @this-josh, there are some techniques that I can apply that should greatly improve general question performance. This is on the short-term road map.
Hi @masonjames, thanks for the kind words. Yes, creating a custom Modelfile with a different system prompt would work. If you're just using it locally you can use `ollama create`...
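A minimal sketch of that workflow, assuming a base model of `llama2`; the model name and system prompt below are made up for illustration:

```shell
# Write a Modelfile with a custom system prompt (contents are illustrative):
cat > Modelfile <<'EOF'
FROM llama2
SYSTEM "You are a helpful assistant that answers concisely."
EOF

# Build a local model from the Modelfile:
ollama create my-custom-model -f Modelfile

# Run the new model interactively:
ollama run my-custom-model
```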