Matt Williams
Hi @m4ttgit, thanks for submitting the issue. Let's see... First, I assume test.png is in that directory right now, correct? Also, we are currently on version 0.1.17. There are sometimes...
Ahh, that’s a great thing to point out. We should change that.
Hi @andysingal, are you still having an issue, or did the answers from @jeanjerome and @rgaidot solve it for you?
Great. Thanks so much for that info.
Hi @oliverbob, thanks for submitting this issue. To read files into a prompt, you have a few options. First, you can use the features of your shell to...
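For reference, here is a rough sketch of one way to do this programmatically: read the file yourself and include its contents in the prompt you send to the local API. It assumes the default server at http://localhost:11434, a model such as "llama2" that you have already pulled, and a placeholder file name.

```python
# Minimal sketch (assumptions: default local Ollama server, "llama2" pulled,
# "notes.txt" is a placeholder file name): read a file and pass its contents
# as part of the prompt to the /api/generate endpoint.
import json
import urllib.request

with open("notes.txt", "r", encoding="utf-8") as f:
    file_text = f.read()

payload = {
    "model": "llama2",                                    # assumed model name
    "prompt": f"Summarize the following file:\n\n{file_text}",
    "stream": False,                                      # single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```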
What OS are you on? You seem to be following the instructions for Linux, but the models are being downloaded to the path used on macOS. If you are using...
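If it helps, here is a quick way to check which of the usual model directories exists on your machine. The paths below are the common defaults I'm aware of (per-user installs under ~/.ollama/models, the Linux service install under /usr/share/ollama/.ollama/models), so treat them as assumptions rather than a definitive list.

```python
# Rough sketch: report which of the commonly used model directories exist.
# The paths listed here are assumed defaults, not an authoritative list.
import os
import platform

candidates = [
    os.path.expanduser("~/.ollama/models"),   # macOS and per-user Linux installs
    "/usr/share/ollama/.ollama/models",       # Linux service install
]

print(f"Detected OS: {platform.system()}")
for path in candidates:
    status = "exists" if os.path.isdir(path) else "not found"
    print(f"{path}: {status}")
```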
What do you mean by regular chat? The multimodal models do a pretty good job with OCR, but they aren't going to be as good as a full OCR...
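As a sketch of what that looks like in practice: send the image (base64-encoded) along with the prompt and let a vision model transcribe it. This assumes the default local server, a multimodal model such as "llava" already pulled, and a placeholder image path.

```python
# Minimal sketch (assumptions: default local server, "llava" pulled,
# "receipt.png" is a placeholder): ask a multimodal model to read the
# text in an image by passing it in the "images" field.
import base64
import json
import urllib.request

with open("receipt.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "llava",                                     # assumed vision model
    "prompt": "Transcribe all of the text you can see in this image.",
    "images": [image_b64],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```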
I don't think this works today, but we can leave this open to track it in case an update is made to support it in the future. Thanks for being...
Hi @pramitsawant, it looks like the comment from @easp solves your issue. That is, in fact, the right way to achieve this. I'll go ahead and close this issue, but...
Hi, thanks for submitting the issue. Ollama doesn't require you to provide a number representing the quantity of tokens to the API. That said, each model has a different context...
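To illustrate: you never send a token count with a request, but if you do want a larger context window than the model's default, you can pass the num_ctx option per request. This is a sketch under the usual assumptions (default local server, a model like "llama2" already pulled, 4096 as an example value).

```python
# Minimal sketch (assumptions: default local server, "llama2" pulled):
# no token count is sent with the request; the context window can be
# adjusted via the num_ctx option if needed.
import json
import urllib.request

payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "options": {"num_ctx": 4096},   # context window size in tokens (example value)
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```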