Patrick Devine
@PriyaranjanMaratheDish Sorry about the slow response. There actually is a document [here](https://github.com/ollama/ollama/blob/main/docs/import.md) which explains how to convert/quantize models and pull them into Ollama. The doc @technovangelist mentioned is also useful...
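Roughly, the flow that doc describes looks like this (just a sketch; the exact paths and tool names depend on your llama.cpp checkout and setup):

```shell
# 1. convert the original weights to GGUF with a llama.cpp checkout (path is hypothetical)
python llama.cpp/convert.py /path/to/model --outfile model.gguf

# 2. optionally quantize the GGUF file
llama.cpp/quantize model.gguf model-q4_0.gguf q4_0

# 3. point a Modelfile at the GGUF and create the Ollama model
echo "FROM ./model-q4_0.gguf" > Modelfile
ollama create my-model -f Modelfile
```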
This is merged now, so I'm going to go ahead and close the issue. You'll be able to use this in `0.1.23`.
Going to close the issue.
@imagebody Can you attach the server logs? What type of system are you running on and how much memory is there?
Hey @RedemptionC, sorry for the slow response. Please refer to the [troubleshooting guide](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) which explains how to get the logs for each platform (mac/linux/windows/docker).
Hi @duyaofei, sorry about the slow response. I just pulled `yi:34b-chat-q4_K_M` (a045fcc68517), which is working with the most up-to-date version of ollama (0.1.28). You should be able...
Hey guys, sorry about the slow response. The `FROM` line can take 3 different forms of input: 1. a model name; 2. a path to a GGUF file; or 3....
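For example, the first two forms look like this (the model name and path are just placeholders):

```
# form 1: an existing model name
FROM llama2

# form 2: a path to a local GGUF file
FROM ./models/mymodel.gguf
```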
If there's enough support, we can look at pulling lwm into the official models, but definitely give the other one a try. As for video models, there aren't any currently...
Oh interesting... I haven't looked at that model. I didn't realize it was multi-modal.
@TechScribe-Deaf The behaviour right now is to unload after 5 minutes. That was really a compromise because some people want it to unload immediately, and others want it to never...