LOCAL model can't read Long Notes
**Describe how to reproduce**
Both Chat and Long Note QA work great with local Ollama in general. However, when I run a local model against a long note, it ignores the note and falls back to the default response about Obsidian and AI.
**Expected behavior**
Answer questions about the active note.
**Screenshots**
(Screenshots attached comparing the two modes: Chat mode vs. Long Note QA.)
**Additional context**
I believe the difference is the note length, since that is the obvious one (the note has 1,000,000 characters and a 1.2 MB file size!), but I am not sure about it.
Consumer-grade graphics cards cannot hold the full context window a local model would need for a note like this.
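For rough scale, here is a back-of-the-envelope sketch in Python, assuming the common ~4 characters per token heuristic for English text (actual ratios vary by tokenizer):

```python
# Back-of-the-envelope estimate of how many tokens the reported note needs.
# Assumption: ~4 characters per token, a rough heuristic; real ratios
# depend on the tokenizer and the language of the note.
NOTE_CHARS = 1_000_000        # note size reported above
CHARS_PER_TOKEN = 4           # heuristic

estimated_tokens = NOTE_CHARS // CHARS_PER_TOKEN
print(f"Estimated tokens: {estimated_tokens:,}")  # -> 250,000

# Ollama's default context window has historically been only 2,048 tokens,
# so a note this size would be truncated long before the model sees most
# of it, which matches the "ignores the note" behavior.
```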
@fabiensc0ville have you checked the doc and set the context window explicitly in Ollama? https://github.com/logancyang/obsidian-copilot/blob/master/local_copilot.md#ollama
Also, you didn't mention which model you are using, or how long its context window is.
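To illustrate the parameter in question, here is a minimal sketch using the official Ollama Python client, which accepts `num_ctx` per request; the model name and window size are placeholders, and the linked doc above covers how to set this for the plugin itself:

```python
# pip install ollama  (official Ollama Python client)
# Sketch only: "llama3" and 8192 are placeholder values; pick a model
# and context window that actually fit in your GPU/CPU memory.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize this note: ..."}],
    options={"num_ctx": 8192},  # raise the context window for this request
)
print(response["message"]["content"])
```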
I have tested it: referencing the note with [[file]] sends the file content to the LLM, but using the button to send the file content does not work.