
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.

Results 3345 ollama issues

I'm running Ollama on my M1 Mac and I'm trying to use the 7B models for processing batches of questions and answers. I noticed that after a while Ollama just...

When I get JSON as a response it seems to be formatted with newlines and spaces. If I want to include the response message in a follow-up request it will take...

enhancement
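For cases like the one above, the whitespace can be stripped before the payload goes back into a follow-up request. A minimal Python sketch (the example JSON is illustrative, not an actual Ollama response):

```python
import json

# A pretty-printed JSON response, as described in the issue above.
pretty = '{\n  "answer": "42",\n  "sources": [\n    "a",\n    "b"\n  ]\n}'

# Re-serialize without any whitespace between tokens; this shrinks the
# payload (and its token cost) before it is embedded in the next prompt.
compact = json.dumps(json.loads(pretty), separators=(",", ":"))
```

This is a client-side fix; the model itself still decides how it formats the JSON it emits.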

I don't know if this limitation exists with the API. I'm switching from the OpenAI API to the Ollama API, and with OpenAI I need to calculate the token size and subtract it from...
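As far as I know, Ollama does not expose the model's tokenizer over its API, so an exact count like OpenAI's isn't directly available. A common workaround is a rough character-based estimate plus history trimming; the ~4-characters-per-token heuristic and the budget value below are arbitrary assumptions, not anything the API guarantees:

```python
def rough_token_count(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # This is an approximation, not the model's real tokenization.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = 3500) -> list[dict]:
    # Drop the oldest messages until the estimated total fits the budget,
    # always keeping at least the most recent message.
    msgs = list(messages)
    while len(msgs) > 1 and sum(
        rough_token_count(m["content"]) for m in msgs
    ) > budget:
        msgs.pop(0)
    return msgs
```

The trimmed list can then be sent as the `messages` array of a chat request.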

Is there a way to keep the model in memory or GPU memory?

enhancement
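Newer Ollama builds accept a `keep_alive` field on `/api/generate` and `/api/chat`, and a server-side `OLLAMA_KEEP_ALIVE` environment variable sets the default; whether your installed version supports this is an assumption. A sketch of the request payload (the model name is hypothetical):

```python
import json

# Sketch, assuming a build with keep_alive support: -1 asks the server
# to keep the model resident indefinitely; a duration string such as
# "5m" evicts it after that much idle time instead.
payload = {
    "model": "llama2:7b",   # hypothetical model name
    "prompt": "hello",
    "keep_alive": -1,       # never unload; use e.g. "5m" for a timeout
}
body = json.dumps(payload)
# POST `body` to http://localhost:11434/api/generate with any HTTP client.
```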

Can Ollama support qwen72b?

models

I tried both the /api/chat and /api/generate endpoints, which seem to produce the same results. However, I'm getting invalid JSON on every response.
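The usual cause of this symptom: Ollama streams by default, so the response body is newline-delimited JSON (one object per line) rather than a single JSON document, and `json.loads` on the whole body fails. Either send `"stream": false` in the request, or parse line by line. A sketch with a hard-coded example stream (the field values are illustrative):

```python
import json

# Simulated streaming body: one JSON object per line, final chunk has done=true.
raw = (
    '{"model":"llama2","response":"Hel","done":false}\n'
    '{"model":"llama2","response":"lo","done":false}\n'
    '{"model":"llama2","response":"","done":true}\n'
)

# Parse each non-empty line separately, then stitch the text together.
chunks = [json.loads(line) for line in raw.splitlines() if line.strip()]
text = "".join(c["response"] for c in chunks)
```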

Hi, I was trying to run my Mixtral model but was not sure how to verify:

```
python app.py
 * Serving Flask app '__main__'
 * Debug mode: off
WARNING: This...
```

When an update is available to an already installed model, something like `ollama pull` (without an argument) or `ollama update` would be great!

enhancement
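Until something like that lands, `ollama pull <name>` already fetches the newest version of a model you have installed, so a bulk update can be scripted over `ollama list`. A shell sketch; the column layout of `ollama list` (model name in the first column, one header line) is an assumption about the current CLI output:

```shell
# Hypothetical helper: re-pull every installed model to pick up updates.
update_all_models() {
  ollama list | tail -n +2 | awk '{print $1}' | while read -r model; do
    ollama pull "$model"
  done
}
```

Call `update_all_models` whenever you want to refresh everything at once.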

I have a 7900XT and would definitely love to have ROCm support. It seems like it might be coming with https://github.com/jmorganca/ollama/pull/667? I couldn't find a dedicated issue for this so...

enhancement
amd

This is an issue very similar to #845. I was able to get this working on my machine by following the fix described here. However, this fix doesn't get you...