
Running on Windows Docker vs WSL versions

Open ewebgh33 opened this issue 2 years ago • 6 comments

Hi! Not really an issue, unless you count "more explanation" as an issue, but you guys don't have a Discussions section here on GitHub.

Is there any benefit (or downside) to running in WSL vs. Docker on Windows? (There's still no sign of a native Windows version coming.)

I am leaning towards WSL simply because I've had issues in the past trying to get non-docker LLM apps to communicate with docker apps and vice versa.

Docker seems simple, but the instructions aren't specific to Windows, are they? Otherwise, wouldn't the Docker version count as this app being available for Windows (which the main page still says is coming soon)?

Will it be any slower or faster in Docker? I have also heard that running via WSL will use less VRAM.

Where do models get downloaded to if we're running either way? Can we point the Docker version or the WSL version to a common repository of LLM models on a local drive?

Many other LLM apps "require" Ollama as their backend, so I really hope to start using this soon. I have both Docker Desktop and WSL/Ubuntu installed already.

If I have another LLM app, say Cheshire Cat AI, already running in Docker, maybe I would be better off running the Dockerised Ollama. But then other LLM apps that do NOT run in Docker also want it. I'm not sure which option is going to give me the simpler setup in the long run.

Thanks!

ewebgh33 avatar Jan 10 '24 04:01 ewebgh33

Hi there. I'm on Win11, WSL2, Docker. I was using WSL2 a lot, doing things straight inside it. That worked for a while, but over time it got pretty ugly. I liked to try every AI project, and each had its own version requirements for some common package. When I updated one, an upgrade would often run and, in turn, break the others. And so on. I started making intensive use of Miniconda (TGWUI came with it by default), but still had minor issues.

Then I started to use Docker. Aside from other unforeseen problems that popped up but were workable (increasing the host RAM allocated to Docker, swap space, network accessibility between containers, a common place to store LLMs, etc.), I'm now declaring myself happy. No more hassle. And I'm wondering why others don't use it 😉

mongolu avatar Jan 10 '24 06:01 mongolu

And actually, host RAM and swap space are directly related to WSL2, not to Docker.

mongolu avatar Jan 10 '24 06:01 mongolu

Thanks for this @dcasota. For me, pretty much the ONLY reason to use WSL is that Docker is not yet Windows-friendly, so I'm not too worried about separate Linux environments. I actually doubt I'll be using WSL/Ubuntu for anything else.

For all the other stuff I do, I mainly use conda environments, and occasionally Docker on windows, to keep things separate.

I got Ollama running yesterday via WSL, so this looks OK so far.

But I'm still hazy on where to put models, or whether we can point Ollama to a folder of already-downloaded models on a local drive somewhere. Every LLM app seems to want its models in its own special location, and there's a ton of duplication going on right now with my model files! :)

ewebgh33 avatar Jan 10 '24 23:01 ewebgh33

The root cause is that no LLM app's installer has an easy way to point itself at a folder specified by the user...? Anyway, we're off topic now, I suppose. I'll go search for a clear answer on where the models are downloaded to and if/how we can direct Ollama to look in a folder of our choosing.

ewebgh33 avatar Jan 11 '24 02:01 ewebgh33

What are you even talking about? Are you a troll? You're speaking words that have nothing to do with the intent of my original question.

ewebgh33 avatar Jan 11 '24 12:01 ewebgh33

3. "if/how we can direct Ollama to look in a folder of our choosing" I would call this feature as distributed storage solution. It is a well-known feature in data centre environments.

Data centre? Where did anyone mention a data centre? A folder of our choosing = a folder on a local drive, dude. A folder with .safetensors models in it, for example. It turns out we can't do it, I've learned elsewhere, no thanks to these confusing replies.

Maybe English isn't your first language; I could understand the miscommunication then.

ewebgh33 avatar Jan 12 '24 06:01 ewebgh33

Ultimately it's up to the user's preference. We now have a native Windows version, but still support WSL2 if that's what users want to run. They come with their own trade-offs, but if you're on Windows, I'd suggest the native Windows version as the preference.

dhiltgen avatar Mar 12 '24 18:03 dhiltgen

I installed the Windows version of Ollama; how can I have WSL point to it?

piranhap avatar Mar 13 '24 01:03 piranhap

I installed the Windows version of Ollama; how can I have WSL point to it?

@piranhap You don't; you either use the WSL version of Ollama or the Windows version of Ollama. One doesn't point to the other.

ewebgh33 avatar Mar 13 '24 06:03 ewebgh33

@piranhap WSL2 has its own network identity, so "localhost" inside it is different from the Windows host's "localhost". By default we only expose Ollama on localhost (127.0.0.1:11434), but you can expose it on other addresses via the OLLAMA_HOST variable. (Be careful not to expose it on an open/untrusted network.) Check out https://github.com/ollama/ollama/blob/e72c567cfd18a6e48de0acaeda60896af4bff3fd/docs/faq.md for more information.

dhiltgen avatar Mar 13 '24 15:03 dhiltgen
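To make dhiltgen's point concrete, here is a minimal sketch of reaching a Windows-side Ollama from inside WSL2. It assumes the Windows-side server was started with OLLAMA_HOST set to 0.0.0.0:11434 (so it listens on more than 127.0.0.1) and that WSL2 is using its default NAT networking, where the Windows host's IP shows up as the nameserver entry in /etc/resolv.conf; neither of those details is stated in the thread, so adjust for your setup.

```python
# Sketch: run from inside WSL2 to find the Windows host's IP and list the
# models served by the Windows-side Ollama instance via its /api/tags endpoint.
# Assumption: Ollama on Windows was started with OLLAMA_HOST=0.0.0.0:11434.
import json
import re
import urllib.request


def windows_host_ip(resolv_path="/etc/resolv.conf"):
    """Read the Windows host IP from WSL2's resolv.conf (default NAT networking)."""
    with open(resolv_path) as f:
        for line in f:
            m = re.match(r"nameserver\s+(\S+)", line)
            if m:
                return m.group(1)
    raise RuntimeError("no nameserver entry found; is this WSL2 with default networking?")


def list_models(host_ip, port=11434):
    """Call Ollama's /api/tags endpoint and return the available model names."""
    url = f"http://{host_ip}:{port}/api/tags"
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


if __name__ == "__main__":
    ip = windows_host_ip()
    print(f"Windows host (from resolv.conf): {ip}")
    print("Models:", list_models(ip))
```

If the request times out even with OLLAMA_HOST set, the Windows firewall may be blocking inbound connections on port 11434 from the WSL2 virtual network, so that is worth checking next.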