spiazzi
Hi all, let me propose a change like this: read the path of an HTML/JS file from the config file, instead of a hardcoded JS #include...
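For illustration only, here is a minimal sketch of the idea in Python; the config file name `config.json` and the key `frontend_path` are assumptions for this sketch, not the project's actual names:

```python
# Hypothetical sketch: load the HTML/JS payload from a path named in a
# config file, instead of embedding it at build time.
import json
from pathlib import Path

def load_frontend(config_path: str = "config.json") -> str:
    """Return the contents of the HTML/JS file listed in the config."""
    config = json.loads(Path(config_path).read_text())
    # "frontend_path" is an assumed key name for this sketch.
    return Path(config["frontend_path"]).read_text()
```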
Hi all, I installed and ran Ollama on a Mac mini M2. It is resource-intensive with llama2 and Mistral, so I can suggest phi as an LLM to test it. Let's check the implementation...
Based on https://github.com/jmorganca/ollama-python, the Mistral client could be replaced with an Ollama one, like:

```python
from ollama import Client

client = Client(host='http://localhost:11434')
response = client.chat(model='llama2', messages=[
    {
        'role': 'user',
        'content': 'Why is the sky blue?',
    },
])
```
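The reply can then be read out, and the same client also supports streaming; a minimal sketch, with the prompt text following the library's README example:

```python
from ollama import Client

client = Client(host='http://localhost:11434')

# Non-streaming: the reply text lives under response['message']['content'].
response = client.chat(model='llama2', messages=[
    {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])

# Streaming: chunks arrive incrementally with the same message shape.
for chunk in client.chat(
    model='llama2',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
):
    print(chunk['message']['content'], end='', flush=True)
```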
On a shell, run `ollama run phi`. Then try with the phi model instead of mistral, at least to test with a little model.
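A quick way to check, sketched with the same client as above (assuming `ollama run phi` or `ollama pull phi` has already fetched the model; the prompt is illustrative):

```python
from ollama import Client

client = Client(host='http://localhost:11434')
# Only the model name changes; phi is much lighter than llama2 or mistral.
response = client.chat(model='phi', messages=[
    {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])
```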