Blake Mizerany
Could you please provide the model you're using? Could you also try to reproduce with the Ollama CLI and see if you hit the same problem?
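For example, something along these lines (using `llama2` here only as a stand-in for whichever model you're running):

```shell
# Confirm the Ollama version you're on.
ollama --version

# Run the same prompt directly through the CLI.
ollama run llama2 "the prompt that triggers the problem"
```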
@nextdimension Thank you for the ticket. If you can reproduce with the latest version of Ollama, please feel free to reopen, but I'll close this for now. FWIW: It sounds...
CI is failing due to packages needing updates. I'll address those failures once we're happy with the new package API. [Edit]: Changes are in.
I'm unable to reproduce with the latest version of Ollama. I'm going to close this for now, but please reopen if the issue persists. My output using your provided Modelfile...
I can see how that would be frustrating, @oldmanjk. There is more discussion about this going on in #3622. We'll keep investigating.
@Blu-Eagle llama2 is a general-purpose model, so it will likely respond with a lengthy, hard-to-parse response. Have you tried tuning your prompt, temperature, expected tokens, etc. to...
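For example, a Modelfile along these lines might be a starting point (the parameter values are just illustrative, not recommendations):

```
FROM llama2

# Lower temperature for more deterministic, easier-to-parse output.
PARAMETER temperature 0.2

# Cap how many tokens are generated per response.
PARAMETER num_predict 128

SYSTEM """Answer concisely; do not add extra commentary."""
```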
Closing for now. Please reopen with a complete code example and environment information (including ollama version) if the issue persists.
Hi! We're actively working on fixing issues with regard to slow downloads. We'll continue to improve as we release new versions of Ollama. Thank you for the ticket!
What problem does this fix, and why? It would be nice to have commit messages that explain these things when they aren't obvious from the patch alone.
What should this resolve to in a filepath? I propose `host%port/..`
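A minimal sketch of what I mean, in Go (the helper name and root directory are hypothetical, just to illustrate the `host%port` encoding):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hostPortToPath maps a "host:port" pair to a filesystem-safe
// directory name by replacing ":" with "%", since ":" is not a
// legal filename character on Windows.
func hostPortToPath(root, hostPort string) string {
	return filepath.Join(root, strings.ReplaceAll(hostPort, ":", "%"))
}

func main() {
	// "registry.example.com:5000" -> ".../registry.example.com%5000"
	fmt.Println(hostPortToPath("/models", "registry.example.com:5000"))
}
```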