
Langchain fetch error

Open iukea1 opened this issue 1 year ago • 16 comments

Describe the bug: When I try to chat, or use the QA chat / indexing, I keep getting a Langchain fetch error.

My setup

  • Windows with WSL2 running Ollama
  • GPUs: Nvidia A6000 Ada (2)
  • Obsidian is running on the Windows side of my system
  • I can see embedding requests from LangChain being logged by Ollama.

The error only shows in the Obsidian UI, preventing any text generation from happening.

iukea1 avatar Mar 22 '24 23:03 iukea1
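Since Obsidian runs on the Windows side while Ollama runs inside WSL2, a first sanity check is whether Windows can reach the server at all. A minimal sketch, assuming Ollama's default port 11434 and WSL2's usual localhost forwarding:

```powershell
# Run from a Windows PowerShell prompt (not inside WSL). curl.exe avoids the
# PowerShell alias that maps "curl" to Invoke-WebRequest.
# A JSON list of installed models means Windows can reach the WSL2 Ollama
# instance; a timeout points at WSL2 networking rather than the plugin.
curl.exe http://localhost:11434/api/tags
```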

Some extra context

The LangChain Python library is installed on the machine the models are being run from.

iukea1 avatar Mar 22 '24 23:03 iukea1

Getting the same error on Ollama and on LM Studio as well. It looks like the model name defaults to gpt-3.5-turbo in the request and doesn't change even after switching models. I have both mistral and llama2 locally. The request from the LM Studio log is below:

[2024-03-23 14:34:51.801] [INFO] Received POST request to /v1/chat/completions with body:
{
  "model": "gpt-3.5-turbo",
  "temperature": 0.1,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "n": 1,
  "stream": true,
  "messages": [
    { "role": "system", "content": "You are Obsidian Copilot, a helpful assistant that integrates AI to Obsidian note-taking." },
    { "role": "user", "content": "Hello" }
  ]
}
[2024-03-23 14:34:51.802] [ERROR] Model with key 'gpt-3.5-turbo' not loaded.

tanmay-priy avatar Mar 23 '24 09:03 tanmay-priy
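For anyone hitting the same hard-coded model name, the LM Studio server itself can be checked by calling its OpenAI-compatible endpoint directly. A minimal sketch, assuming LM Studio's usual default port 1234 and using `mistral` as a stand-in for whichever model key is actually loaded:

```sh
# If this returns a completion, the local server is fine and the failure is the
# model name the plugin sends ("gpt-3.5-turbo"), not the server itself.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.1,
        "stream": false
      }'
```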

Without a screenshot of the note and console with debug mode on, it's hard to test on my side. Could you provide the screenshot?

LM Studio server mode shouldn't depend on a model name since you load the model first in its UI.

logancyang avatar Mar 26 '24 19:03 logancyang

> Without a screenshot of the note and console with debug mode on, it's hard to test on my side. Could you provide the screenshot?
>
> LM Studio server mode shouldn't depend on a model name since you load the model first in its UI.

Getting the same error on Ollama as well (screenshot attached).

I enabled debug mode in the settings, but I don't know how to open and check the log.

istarwyh avatar Mar 30 '24 08:03 istarwyh

> Without a screenshot of the note and console with debug mode on, it's hard to test on my side. Could you provide the screenshot?
>
> LM Studio server mode shouldn't depend on a model name since you load the model first in its UI.

I should add to my post that I am using Ollama to serve the API, not LM Studio.

iukea1 avatar Apr 01 '24 01:04 iukea1

I encountered the same issue, but it turned out my own mistake was the cause. I'll share my experience here.

I'm using Windows PowerShell to start Ollama. You actually need to run `$env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve` in PowerShell, or `set OLLAMA_ORIGINS=app://obsidian.md*` followed by `ollama serve` in cmd. Note that the Linux-style `OLLAMA_ORIGINS=app://obsidian.md* ollama serve` won't work here, so simply copying that statement into your terminal isn't enough. This is actually mentioned in the repo's local_copilot.md, but the instructions aren't entirely clear when read from inside Obsidian.

LieZiWind avatar Apr 03 '24 16:04 LieZiWind
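Written out for copy-paste, the two Windows variants described above are:

```powershell
# PowerShell: set the variable for the current session, then start the server
$env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve
```

```bat
:: cmd.exe: set the variable, then start the server in the same window
set OLLAMA_ORIGINS=app://obsidian.md*
ollama serve
```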

> I encountered the same issue, but it turned out my own mistake was the cause. I'll share my experience here.
>
> I'm using Windows PowerShell to start Ollama. You actually need to run `$env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve` in PowerShell, or `set OLLAMA_ORIGINS=app://obsidian.md*` followed by `ollama serve` in cmd. Note that the Linux-style `OLLAMA_ORIGINS=app://obsidian.md* ollama serve` won't work here, so simply copying that statement into your terminal isn't enough.

Trying this out tonight. Thank you

iukea1 avatar Apr 12 '24 03:04 iukea1

I ran the commands accordingly, and I have several local models pulled already, but Obsidian Copilot says "do not find llama2, please pull it first".

C:\Users\adam>set OLLAMA_ORIGINS=app://obsidian.md*

C:\Users\adam>ollama serve
time=2024-04-19T04:28:36.174+08:00 level=INFO source=images.go:817 msg="total blobs: 35"
time=2024-04-19T04:28:36.178+08:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-19T04:28:36.179+08:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-19T04:28:36.181+08:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=C:\Users\adam\AppData\Local\Temp\ollama3461672080\runners
time=2024-04-19T04:28:36.459+08:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
[GIN] 2024/04/19 - 04:28:59 | 204 |  20.2µs | 127.0.0.1 | OPTIONS "/api/chat"
[GIN] 2024/04/19 - 04:28:59 | 404 | 771.5µs | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/04/19 - 04:28:59 | 204 |      0s | 127.0.0.1 | OPTIONS "/api/generate"
[GIN] 2024/04/19 - 04:28:59 | 404 | 932.8µs | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/04/19 - 04:31:13 | 404 | 584.7µs | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/04/19 - 04:31:13 | 404 | 931.9µs | 127.0.0.1 | POST "/api/generate"

adamchentianming1 avatar Apr 18 '24 20:04 adamchentianming1
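The 404 responses to POST /api/chat and /api/generate are typically what Ollama returns when the requested model is unknown to it, so it is worth confirming that the model name configured in Copilot matches an installed tag exactly. A quick check, with `llama2` standing in for whatever the plugin is set to:

```sh
# List what the running server actually has installed; the model name in the
# Copilot settings has to correspond to one of these tags.
ollama list
curl http://localhost:11434/api/tags

# Pull the model if it really is missing.
ollama pull llama2
```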

Same

ryoppippi avatar Apr 23 '24 08:04 ryoppippi

same error

Gitreceiver avatar Apr 26 '24 12:04 Gitreceiver

@ryoppippi @Gitreceiver

I got it all working. Are you guys running on Windows WSL?

iukea1 avatar May 11 '24 11:05 iukea1

I'm using macOS Sonoma

ryoppippi avatar May 11 '24 11:05 ryoppippi

In case someone is using fish shell too: I fixed this issue by setting the OLLAMA_ORIGINS variable before running ollama serve:

set -gx OLLAMA_ORIGINS 'app://obsidian.md*'

ihomway avatar May 11 '24 11:05 ihomway
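For completeness, the full fish session that corresponds to the PowerShell command earlier in the thread (a sketch):

```fish
# fish: export the origin whitelist for this session, then start the server
set -gx OLLAMA_ORIGINS 'app://obsidian.md*'
ollama serve
```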

Can someone help me? I'm on macOS Sonoma using iTerm2, and none of the answers above seem to work. I still get the Langchain fetch error.

Heptamelon avatar May 12 '24 03:05 Heptamelon

I get the same error: W10, PowerShell in Windows Terminal. The Ollama server seems to be running via `$env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve`, but I still get the Langchain fetch error from Obsidian Copilot when I try to connect. Ollama does seem to be listening (time=2024-05-14T19:47:01.064+01:00 level=INFO source=routes.go:1052 msg="Listening on [::]:11434 (version 0.1.37)"), and browsing to http://127.0.0.1:11434 gives "Ollama is running". Ollama works normally in a shell, without using the server, so the model itself is working.

deeplearner5 avatar May 14 '24 19:05 deeplearner5
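When the server answers in a browser but the plugin still fails, the usual remaining suspect is CORS. One way to check whether OLLAMA_ORIGINS actually took effect is to send the Obsidian origin by hand; a sketch of a preflight request (the exact headers the plugin sends are an assumption):

```sh
# If the origin whitelist took effect, the response should include an
# Access-Control-Allow-Origin header; if it is absent, the running server
# process never picked up the OLLAMA_ORIGINS value.
curl -i -X OPTIONS http://127.0.0.1:11434/api/chat \
  -H "Origin: app://obsidian.md" \
  -H "Access-Control-Request-Method: POST"
```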

This worked for me on Linux with systemd, set in ollama.service, instead of the app:// value.

Change OLLAMA_ORIGINS="app://obsidian.md*"

To

OLLAMA_ORIGINS="*"

If you're running it as a service and want to run it manually with ollama serve, stop the service first. Does anyone know why app:// is recommended? Is it a Flatpak thing or a Mac thing?

Anyway, try * on its own.

duracell80 avatar Jun 20 '24 22:06 duracell80
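For the systemd case, one way to attach the variable to the service itself is a drop-in override; a minimal sketch, assuming the unit is named `ollama` and using the wide-open `*` origin from the comment above:

```sh
# Opens an editor for a drop-in override; add the two lines shown in the
# comment below, save, then restart the service so the environment is applied.
#
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
#
sudo systemctl edit ollama
sudo systemctl restart ollama
```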