Matt Williams
When you run `ollama serve` on the command line, you are running as your user, so your models will be stored in ~/.ollama. When you described your issue at the top,...
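As a quick check (a rough sketch; the paths assume the default Ollama layout for the current user), you can confirm where the models ended up:

```
# Default per-user model store when ollama serve runs as your user
ls ~/.ollama/models

# Models the running server knows about
ollama list
```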
Interesting, I tried out this code:

```
async function test() {
  const body = {
    "model": "mistral",
    "prompt": "list 3 synonyms for a sink",
    "stream": false,
    "options": {
      "seed": 12345,
      ...
```
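For reference, roughly the same request as a curl call looks like this (a sketch assuming a default local install on port 11434; the payload mirrors the snippet above):

```
# POST the same payload to the local generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "list 3 synonyms for a sink",
  "stream": false,
  "options": { "seed": 12345 }
}'
```

Running it twice with the same seed should return the same completion if seeding is working as expected.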
Thanks @oderwat for checking. I know @BruceMacD is looking into it. We added a bug label to the issue so we will continue investigating.
Thanks for submitting this issue, @gerroon. Did the comment from @xprnio solve your problem?
Hi @gonnaK, thanks for submitting this issue. Just want to make sure you are on the same machine as the server, correct? What happens if you make the same call using curl...
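For example, hitting the API directly from that machine is a quick way to confirm the server is reachable (a rough sketch; this assumes a local install on the default port):

```
# List the models the local server has pulled; a JSON response
# confirms the server is reachable on the default port
curl http://localhost:11434/api/tags
```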
That is interesting. What SDK are you using? I can take a closer look.
Hi @PriyaranjanMaratheDish, thanks for submitting this issue. It sounds like you want to use a model that has been fine-tuned on data you have produced somewhere else. This is something...
That will work if `Source_Data_File` is a Modelfile as described here: https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md
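As a concrete example (a sketch; the model name and the GGUF path are placeholders, and the fine-tuned weights need to be in a format llama.cpp can load, e.g. GGUF):

```
# A minimal Modelfile that points FROM at locally fine-tuned weights
cat > Modelfile <<'EOF'
FROM ./my-finetuned-model.gguf
EOF

# Build a local model from the Modelfile, then run it
ollama create my-model -f Modelfile
ollama run my-model
```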
Looking at the config file ([config.json · RWKV/rwkv-5-world-1b5 at main](https://huggingface.co/RWKV/rwkv-5-world-1b5/blob/main/config.json)), it looks like it's an architecture not supported by llama.cpp, and thus we are not able to support it yet....
Hi, thanks for the issue. If the file fits in the context of the model, then you can do something like this today:

```
ollama run llama2 "can you rewrite...
```
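A more complete version of that pattern (a sketch; `notes.txt` is a placeholder file name) uses shell command substitution to inline the file contents into the prompt:

```
# Inline the file contents into the prompt via command substitution
ollama run llama2 "Can you rewrite the following text? $(cat notes.txt)"
```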