Igor Schlumberger
I loaded llava:7b with version 0.1.32 and I get a good result with this image: ollama run llava:7b >>> can you give me the full text of this image...
@nethriis can you please share the prompt if it's a special one? That would make it easier to reproduce. Did you try with another LLM, like ollama run tinydolphin? It will help...
I think it could be nice to use the Ollama sources to run LLMs, as Ollama is well maintained. I think it would also be good if we could plug in Ollama...
If there is a way to set up ChatGPT as the AI tool, it should also be possible to use Ollama, as its API is compatible with ChatGPT's.
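To illustrate the point above: Ollama exposes an OpenAI-compatible endpoint (by default under http://localhost:11434/v1), so a client written for ChatGPT can usually be redirected to a local Ollama server just by changing the base URL and model name. A minimal sketch; the build_chat_request and send_chat_request helpers are my own illustration, not part of either API, and the last line assumes an Ollama server is actually running:

```python
import json
import urllib.request

# Ollama serves an OpenAI-compatible API under /v1 by default.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a chat-completion payload in the OpenAI format.

    The same payload shape works against api.openai.com and against a
    local Ollama server; only the base URL and the model name change.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send_chat_request(base_url: str, payload: dict) -> bytes:
    """POST the payload to the /chat/completions endpoint of the given server."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_chat_request("llama3.1", "Say hello in French.")
# send_chat_request(OLLAMA_BASE_URL, payload)  # uncomment with Ollama running locally
```

The point is that nothing Ollama-specific appears in the payload itself, which is why tools that only let you configure an OpenAI/ChatGPT endpoint can often be pointed at Ollama unchanged.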
@Maltz42, what behavior were you expecting? To run a 236B model, you would need at least 236GB of VRAM on your system. If you're encountering an out-of-memory error, that's expected...
@Maltz42 You're right about the fallback. How much RAM do you have on your computer? I have a Mac Studio with 192GB and could not run anything larger than Llama3.1:405b-instruct-q2_K.
Hi @chigkim, It sounds like you're facing some challenges with running the Llama3.1:405b_q2 model on your Mac with 64GB of RAM. Based on the requirements for this model, you would...
@chigkim I'm French and my English is not so good, so I use an LLM to help me understand issues and write answers. I have a 192GB Mac Studio, but...
I pulled Llama3.1:405b q2_K and q3_K_S on my Mac Studio. I could run both of them, but not the q3_K_M, where I got an error because there was not enough memory. If...
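Those results line up with a rough back-of-the-envelope size estimate for the weights alone. The bits-per-weight figures below are approximate values I am assuming for the llama.cpp K-quants, not official numbers:

```python
# Approximate bits per weight for some llama.cpp K-quantizations (assumed values).
BITS_PER_WEIGHT = {
    "q2_K": 2.63,
    "q3_K_S": 3.44,
    "q3_K_M": 3.74,
}

def estimated_weight_gb(params_billion: float, quant: str) -> float:
    """Rough size of the model weights alone, ignoring KV cache and runtime overhead."""
    bits = params_billion * 1e9 * BITS_PER_WEIGHT[quant]
    return bits / 8 / 1e9  # bits -> bytes -> GB (decimal)

for quant in BITS_PER_WEIGHT:
    print(f"405B {quant}: ~{estimated_weight_gb(405, quant):.0f} GB")
```

Under these assumptions the weights come out around 133 GB (q2_K), 174 GB (q3_K_S), and 189 GB (q3_K_M). On a 192GB Mac the last one leaves no room for the KV cache and the OS, and macOS by default only lets the GPU use part of the unified memory, which would explain q3_K_M failing while the two smaller quants run.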
@Amazon90 I don't think Ollama supports image-generation models, because its output is text only, unless Ollama were to output base64-encoded images.