Michael
Hi all, this should be fixed in recent builds of Ollama (0.1.28+). If you have older models, note that we may have changed their prompt templates to prompt them better...
Thank you for sharing this. We will be working to provide saner defaults on ollama.com; sorry about this.
@SerhiyProtsenko are you still running into this issue with the latest version of Ollama, after updating the model via an `ollama pull`? Just wanted to make sure it's not...
@amnweb thank you so much for this. We do try to clean up the tmp files on exit, so this is definitely a bug. Sorry!
Hey! One of the Ollama users has uploaded this model: https://ollama.com/ifioravanti/lwm Give it a try!
Thanks @icebaker. Would it be possible to say it's Nano Bots for VSCode, Sublime Text, and Obsidian? I just don't want to cause user confusion by implying it's a direct integration from the respective...
Wanted to see if anyone is still running into this issue with Ollama v0.1.22.
Hey @heiheiheibj, we're working on displaying embedding models better within Ollama. For now, you can search: https://ollama.com/search?q=embedding&p=1
Hey @quantumalchemy, this should work with Ollama. What you would need to do is forward requests to `localhost:11434`, which is the port Ollama listens on by default.
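For anyone wiring this up, here is a minimal sketch of building a request against Ollama's default local endpoint using only the Python standard library. The model name `llama2` is just an example; substitute any model you have pulled, and note that actually sending the request assumes a local Ollama server is running.

```python
import json
import urllib.request

# Ollama's default local endpoint (port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

# "llama2" is an example model name; use one you've pulled with `ollama pull`.
payload = json.dumps({
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Uncomment once a local Ollama server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])

print(req.full_url)  # http://localhost:11434/api/generate
```

If you are proxying from another host or container, forwarding traffic to `localhost:11434` on the machine running Ollama is all that's needed; the request shape stays the same.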
Hi @tyseng92, so sorry about this. Could you share more information to help us troubleshoot? What are your system specs (RAM, GPU, amount of VRAM, which video driver,...