
Error: Error invoking remote method

Open mounta11n opened this issue 11 months ago • 4 comments

Whenever I try to interact with the LLM, I get the following error:

Error: Error invoking remote method 'create-session': Error: ENOTDIR: not a directory, mkdir '/tmp/.mount_Reor_0Fr46QB/resources/app.asar/node_modules/node-llama-cpp/llama/localBuilds'

mounta11n avatar Mar 03 '24 15:03 mounta11n

Thank you for mentioning this!

  • Could you tell me your system: OS, CPU, RAM, GPU, etc.
  • The model you are trying to use.

And if possible, could you run from source and report the error you see in the terminal:

Make sure you have Node.js installed.

Clone repo:

git clone https://github.com/reorproject/reor.git

Install dependencies:

npm install

Run for dev:

npm run dev

samlhuillier avatar Mar 03 '24 16:03 samlhuillier

Hello! I got the same error!

OS: Win11
CPU: Intel i5-7500
RAM: 16 GB
GPU: NVIDIA GeForce RTX 2060 with 6 GB VRAM and 8 GB shared RAM

Embedding model: bge-m3
LLM: mistral-7b-instruct-v0.2.Q5_K_M

I installed the app from the .exe file provided in this GitHub repository's releases.

KingXHJ avatar Mar 12 '24 08:03 KingXHJ

I tested Reor on another PC.

OS: Win11
CPU: Intel i7-12700H
RAM: 32 GB
GPU: NVIDIA GeForce RTX 3070 Ti mobile with 8 GB VRAM and 16 GB shared RAM

Embedding model: bge-m3
LLM: mistral-7b-instruct-v0.2.Q5_K_M

I installed the app from the .exe file provided in this GitHub repository's releases.

With the default settings, Mistral works well (but slowly). If I enable "use GPU" and "CUDA", I get the error: Error: Error invoking remote method 'create-session': Error: ENOTDIR, not a directory

KingXHJ avatar Mar 13 '24 03:03 KingXHJ

I sincerely appreciate that you provided a way to run in dev mode and watch the logs; I finally solved the problem.

Windows users need at least CUDA 12.0 (see the node-llama-cpp documentation).

But I need to highlight the difference between the CUDA runtime API and the driver API!

Open a terminal and run: nvidia-smi. It shows the CUDA driver API version, which is the upper limit for the runtime API version you can use.

Run: nvcc -V. It shows the CUDA runtime API version. For example, the PyTorch version you choose is based on the runtime API version (from nvcc -V).
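The check described above can be sketched as a small script. The version values are placeholders taken from this thread; in practice you would parse them from nvidia-smi and nvcc -V output, and the 12.0 minimum is the node-llama-cpp requirement mentioned here.

```shell
#!/bin/sh
# Sketch (assumed values): check whether the installed CUDA runtime
# (the toolkit version reported by `nvcc -V`) meets the CUDA 12.0
# minimum that node-llama-cpp requires on Windows.
runtime_cuda="11.8"   # placeholder: parse from `nvcc -V` ("release 11.8")
required="12.0"

# sort -V orders version strings numerically; if the lower of the two
# is NOT the required version, the installed runtime is too old.
lowest=$(printf '%s\n%s\n' "$required" "$runtime_cuda" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "CUDA runtime $runtime_cuda is new enough"
else
  echo "CUDA runtime $runtime_cuda is too old; install >= $required"
fi
# prints: CUDA runtime 11.8 is too old; install >= 12.0
```

The same comparison explains the failure below: the driver API version (from nvidia-smi) only sets the ceiling, so a 12.4 driver does not help while the runtime is still 11.8.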

Previously, my driver API was 12.4, but my runtime API was only 11.8.

When I saw "No CUDA toolset" in the error logs, I tried the method described by tiny-cuda-nn. Obviously, it didn't work at all.

Then I opened the URL in the error logs, which led to the official website of node-llama-cpp. I saw it needs at least CUDA 12.0, so I installed the latest CUDA 12.4 from the official NVIDIA CUDA website.

Now, running Reor in dev mode, I can use the GPU and CUDA to communicate with the LLM. Next I will test whether Reor installed from the .exe file works as well.

I really appreciate your great work; as a student, it helps a lot!

KingXHJ avatar Mar 13 '24 12:03 KingXHJ