
Unable to start second brain, console error

Open · Npahlfer opened this issue 10 months ago · 11 comments

What happened?

When starting the second brain I receive the error in the screenshot below and it doesn't load. I have tried clearing out the plugin data, and Ollama is running.

Error Statement

(screenshot of the console error)

Smart Second Brain Version

1.0.0

Debug Info

SYSTEM INFO:
  Obsidian version: v1.5.12
  Installer version: v1.4.13
  Operating system: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000 23.1.0
  Login status: not logged in
  Insider build toggle: off
  Live preview: on
  Base theme: adapt to system
  Community theme: Things v2.1.19
  Snippets enabled: 0
  Restricted mode: off
  Plugins installed: 10
  Plugins enabled: 6
    1: Kanban v1.5.3
    2: Excalidraw v2.1.1
    3: Advanced Tables v0.21.0
    4: Tasks v6.2.0
    5: Templater v2.2.3
    6: Smart Second Brain v1.0.0

RECOMMENDATIONS:
  Custom theme and snippets: for cosmetic issues, please first try updating your theme and disabling your snippets. If still not fixed, please try to make the issue happen in the Sandbox Vault or disable community theme and snippets.
  Community plugins: for bugs, please first try updating all your plugins to latest. If still not fixed, please try to make the issue happen in the Sandbox Vault or disable community plugins.

Npahlfer avatar Apr 04 '24 08:04 Npahlfer

(screenshot attached)

Npahlfer avatar Apr 04 '24 08:04 Npahlfer

After clearing out the plugin data and doing the init process a few more times, it started to run again.

Npahlfer avatar Apr 04 '24 08:04 Npahlfer

Even though it works after clearing the plugin data, this shouldn't happen, so I will reopen this issue. Can you reproduce it?

Leo310 avatar Apr 04 '24 09:04 Leo310

I have the same issue. It happens every time I start the chat after the vault has been closed. Clearing the plugin data works, but only for that session.

I am using an external Ollama instance and have reproduced the problem with different models (I tried llama3:8b and wizardlm2:7b). I am using nomic-embed-text as the embedding model.

As a side note: every time I try to query my vault (after I have cleared the plugin data and rebuilt the index) I get an error message (same as #68):

Failed to run Smart Second Brain (Error: ,Error: User query is too long or a single document was longer than the context length (should not happen as we split documents by length in post processing).,). Please retry.

I am not sure if this has something to do with the other issue, or if it is unrelated.
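
For context, here is a minimal sketch of what length-based splitting usually looks like with LangChain's RecursiveCharacterTextSplitter. This is purely illustrative and not the plugin's actual code; the chunk sizes are assumed values:

```typescript
// Illustrative only, not the plugin's actual code: length-based splitting
// with LangChain's RecursiveCharacterTextSplitter. If a produced chunk
// still exceeds the model's context window, an error like the one above
// can surface at query time.
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // assumed target characters per chunk
  chunkOverlap: 200, // assumed overlap between consecutive chunks
});

// Split a single note into chunks no longer than chunkSize characters.
async function splitNote(noteContent: string): Promise<string[]> {
  return splitter.splitText(noteContent);
}
```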

jkunczik avatar Apr 22 '24 12:04 jkunczik

Sorry for being absent for a while @Leo310. I can't reproduce it consistently, but it does happen every now and then. I have the error active right now.

Npahlfer avatar Apr 23 '24 06:04 Npahlfer

Thanks for the observation! We will look into it, but it may take a bit as we are currently busy with university unfortunately.

Leo310 avatar Apr 23 '24 19:04 Leo310

Don't worry, I know all about lack of time 😄 Thanks for this project!

I switched to Mixtral 8x7B. With this model, the plugin seems to load without problems.

jkunczik avatar Apr 23 '24 20:04 jkunczik

This happens more frequently if you are syncing your vault with iCloud. I did a test with the exact same contents in a non-cloud folder, and the issue almost never occurred.

jymcheong avatar Apr 25 '24 10:04 jymcheong

@jymcheong I can confirm that. I'm also syncing my vault with iCloud.

Npahlfer avatar Apr 25 '24 13:04 Npahlfer

I'm also getting this error when using ollama/phi3 and ollama/nomic-embed-text.

Failed to run Smart Second Brain (Error: ,Error: User query is too long or a single document was longer than the context length (should not happen as we split documents by length in post processing).,). Please retry.

I'm guessing it has something to do with context window limits. I lowered the "Documents to retrieve" setting to 3 and seem to have gotten past this error.
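
To illustrate the reasoning, here is a rough back-of-the-envelope sketch; the chars-per-token ratio, context window, and overhead numbers are assumptions, not measured values:

```typescript
// Rough estimate of whether the retrieved chunks plus the query fit into
// the model's context window. All numbers here are assumptions.
const CHARS_PER_TOKEN = 4;            // crude heuristic, not a real tokenizer
const CONTEXT_WINDOW = 2048;          // Ollama's default num_ctx (assumed)
const PROMPT_OVERHEAD_TOKENS = 600;   // assumed budget for instructions + query

function fitsInContext(retrievedChunks: string[]): boolean {
  const chunkTokens = retrievedChunks
    .map((chunk) => Math.ceil(chunk.length / CHARS_PER_TOKEN))
    .reduce((sum, tokens) => sum + tokens, 0);
  return PROMPT_OVERHEAD_TOKENS + chunkTokens <= CONTEXT_WINDOW;
}

// With ~1000-character chunks, 3 retrieved documents (~750 tokens) fit
// comfortably, while a higher retrieval count can overflow the window.
```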

That said, I'm not getting very good responses. Does anyone know if there's a way to get a deeper view into how the RAG is being performed? For example, it would be nice to see the actual chunks that come back from the vector search and see the final prompt that gets sent to the LLM.

I've recently tried Copilot for Obsidian and Smart Connections, and this project is the most attractive in terms of UX for setup, ease of use, look and feel, etc. I would really love it if I could get this working well on my notes.

jritsema avatar May 30 '24 16:05 jritsema

That said, I'm not getting very good responses. Does anyone know if there's a way to get a deeper view into how the RAG is being performed? For example, it would be nice to see the actual chunks that come back from the vector search and see the final prompt that gets sent to the LLM.

You can get a deeper view of the RAG pipeline by configuring one or both of the following in the plugin settings:

  1. Enable debugging, which outputs debug information in the developer console.
  2. Provide a Langsmith API token, which sends your runs to the Langsmith API, where you can debug your pipeline in a web interface (a rough sketch of what this wires up follows below).
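
As a rough sketch of what the Langsmith token wires up under the hood, assuming the standard LangChain.js tracer; the project name and the way the token is passed are illustrative assumptions, not the plugin's actual implementation:

```typescript
// Illustrative sketch: sending LangChain runs (retrieved chunks, prompts,
// LLM calls) to LangSmith via a tracer callback. Project name and token
// handling here are assumptions.
import { Client } from "langsmith";
import { LangChainTracer } from "langchain/callbacks";

const tracer = new LangChainTracer({
  projectName: "smart-second-brain",                       // assumed name
  client: new Client({ apiKey: "<your Langsmith API token>" }),
});

// The tracer is then attached when the chain runs, e.g.:
// await chain.invoke({ query }, { callbacks: [tracer] });
```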

The error is a duplicate of https://github.com/your-papa/obsidian-Smart2Brain/issues/68. We will look into it.

Leo310 avatar Jun 04 '24 09:06 Leo310