jan
Paste image as alternative to browsing for file
When asking ChatGPT Vision (or another model that supports images) to look at something, it would be great to take a screenshot and just hit Ctrl-V while entering the prompt, rather than saving the file, choosing a name, and then browsing to find it again.
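A minimal sketch of how such a paste handler could work in a TypeScript UI: a pure helper picks image MIME types out of the clipboard items, and a (commented) event listener hands any pasted image to the prompt. The `promptInput` element and `attachImage` function are assumptions for illustration, not part of Jan's actual code.

```typescript
// Pure helper: pick out image MIME types from a list of clipboard item types.
function pickImageTypes(types: string[]): string[] {
  return types.filter((t) => t.startsWith("image/"));
}

// Hypothetical wiring (assumes a browser/Electron renderer context and an
// `attachImage(file)` function in the app's UI; neither is Jan's real API):
//
// promptInput.addEventListener("paste", (e: ClipboardEvent) => {
//   for (const item of Array.from(e.clipboardData?.items ?? [])) {
//     if (pickImageTypes([item.type]).length > 0) {
//       const file = item.getAsFile();
//       if (file) attachImage(file); // hand the pasted screenshot to the vision model
//     }
//   }
// });
```

The helper keeps the MIME-type check separate from the DOM wiring, so the filtering logic can be unit-tested without a browser.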
Ollama is already supported, right?
How do I use Ollama for this?
I selected Mistral and got this error.
https://github.com/stitionai/devika/issues/25
#75 added LlamaCpp, which allows loading local models and eliminates the need for a separate local LLM inference server.