Patrick Devine
We just got an A6000 Ada to test against our 4090s. Definitely agree it's a much better card, but it's also $3k more expensive :-D
@ahmeteid7 You can definitely use Ollama w/ cloud services such as AWS / Google Cloud / etc. You will have to have an account with one of those services and...
We definitely need to make it easier for users to not have to wade through dozens of potential tags with each of the different quantizations. I have been thinking of...
@FotieMConstant thanks for being persistent here and sorry about not updating the issue. There have been a number of changes for the `create` command which should make it somewhat easier...
Hey @StefanDanielSchwarz. I'm sorry this issue got buried in the avalanche of issues! Thanks so much for taking the time to file it. I've updated each of the 8x7b...
Oh, also, I should note that if you pull the model again with `ollama pull mixtral` it will *only* download the new template and it won't pull the weights again.
`codellama:70b-instruct` should be working fine right now. I know the issue was from a while ago, but let's go ahead and close it. We can reopen if you're still seeing...
This is implemented with #3094. It should be available in release 0.1.29.
@insooneelife what did you set the `OLLAMA_HOST` variable to when starting `ollama serve`? It should be set to `OLLAMA_HOST=0.0.0.0:11434`.
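For reference, a minimal sketch of how this looks on the command line, assuming Ollama is installed and port 11434 is free (`<server-ip>` below is a placeholder for your machine's actual address):

```shell
# Bind the server to all network interfaces instead of the loopback default
# (127.0.0.1). The variable must be set in the environment of the
# `ollama serve` process itself, not just in your interactive shell.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From another machine on the network, you can then check that the server
# is reachable:
#   curl http://<server-ip>:11434/api/tags
```

Note that setting the variable only for a client command (e.g. `ollama run`) has no effect on where the server listens; it has to be present when the server starts.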
> Now it serves on 0.0.0.0 everytime Ollama starts. Would have been great if there was some configuration method at installation or something but considering it is in preview, this...