
Chat errors out

Open AkashCS opened this issue 2 years ago • 18 comments

Installed successfully on Windows Docker Desktop. Chroma Local Install is being used as Vector DB. Docker build was successful.

Workspace created successfully. Document loaded successfully.

However when I try to chat, a message pops up as below...

localhost:3001 says Could not send chat

No other error or notification. Gets no response in chat.

Kindly help.

AkashCS avatar Jun 14 '23 04:06 AkashCS

[Screenshot attached: Anything-LLM-Error]

AkashCS avatar Jun 14 '23 04:06 AkashCS

+1

lifuzu avatar Jun 14 '23 05:06 lifuzu

+1

danglingptr0x0 avatar Jun 14 '23 18:06 danglingptr0x0

+1

rewphus avatar Jun 15 '23 02:06 rewphus

Hi, I'm experiencing the same error. I'm using Windows 11

veramarvin avatar Jun 15 '23 10:06 veramarvin

+1 on Win 10. Setup works fine, but localhost:3002 says "Could not send chat."

dennisnadeau avatar Jun 15 '23 10:06 dennisnadeau

I'm on intel mac with same problem - "Could not send chat."

Discovered I did not have all the API fields in the .env file filled out properly.

That fixed the problem for me. The documentation could be improved to make this process clearer.

sigpoggy avatar Jun 15 '23 21:06 sigpoggy

With all things green, I'm having the same issue with a Mac M2. Everything else supposedly went well.

kolenyo2099 avatar Jun 16 '23 11:06 kolenyo2099

Unfortunately, I get the same thing after fighting through the installation on a Mac M1.

tabgab avatar Jun 16 '23 22:06 tabgab

All things green, I'm having the same issue with a Mac M2. The installation and all worked well.

yaronsivan avatar Jun 18 '23 07:06 yaronsivan

+1 Mac M1

subarudad avatar Jun 18 '23 09:06 subarudad

Same here. Installed successfully in WSL2 using the Docker instructions, all green, uploaded a test doc, and now I'm getting the "cannot send chat" error.

jmmathieu avatar Jun 18 '23 17:06 jmmathieu

Yes, same message for me: "cannot send chat" error.

Looking at the Docker log, this request results in an Internal Server Error:

```shell
curl --location 'http://localhost:3001/api/workspace/new_workspace/chat' \
  --header 'Content-Type: text/plain' \
  --data '{message: "test", mode: "query"}'
```

```
SyntaxError: Unexpected token m in JSON at position 1
    at JSON.parse ()
    at reqBody (/app/server/utils/http/index.js:9:12)
    at /app/server/endpoints/chat.js:11:43
    at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
    at next (/app/server/node_modules/express/lib/router/route.js:144:13)
    at Route.dispatch (/app/server/node_modules/express/lib/router/route.js:114:3)
    at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
    at /app/server/node_modules/express/lib/router/index.js:284:15
    at param (/app/server/node_modules/express/lib/router/index.js:365:14)
    at param (/app/server/node_modules/express/lib/router/index.js:376:14)
```
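For what it's worth, that stack trace points at the request body rather than the server: `{message: "test", mode: "query"}` uses unquoted keys, which is valid JavaScript object-literal syntax but not valid JSON, so `JSON.parse` inside `reqBody` fails on the `m`. A minimal sketch reproducing the failure (not AnythingLLM code):

```javascript
// Unquoted keys are fine in a JS object literal but illegal in JSON,
// so JSON.parse rejects the body the curl command above sends.
const badBody = '{message: "test", mode: "query"}';

let parseFailed = false;
try {
  JSON.parse(badBody);
} catch (err) {
  parseFailed = true; // SyntaxError, as in the log above
}
console.log(parseFailed); // true

// Quoting the keys produces valid JSON that the endpoint can read.
const goodBody = '{"message": "test", "mode": "query"}';
console.log(JSON.parse(goodBody).message); // "test"
```

Sending the quoted-key body (ideally with `Content-Type: application/json`) avoids this particular error.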

rhafiko avatar Jun 19 '23 03:06 rhafiko

I'm getting the same "Could not send chat" on M1 Mac with docker install. All API variables show green in the UI.

I do see the following in the docker log:

```
PineconeClient: Error calling describeIndex: 404: Not Found
[PineconeError: PineconeClient: Error calling describeIndex: 404: Not Found]
```

and

```
SELECT * FROM workspaces WHERE slug = 'jkr-anything-test-1'
SELECT * FROM workspace_documents WHERE workspaceId = 1
Error: Request failed with status code 429
    at createError (/app/server/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/app/server/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/app/server/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (node:events:525:35)
    at endReadableNT (node:internal/streams/readable:1359:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
```

Any suggestions welcome!

ALSO... can someone tell me whether the Pinecone index name in .env should be the short name (e.g. "anything-test") or the long name (e.g. "anything-test-2aa184a.svc.us-west1-gcp-free.pinecone.io") shown in the Pinecone console?

Thanks!

jkrobin avatar Jun 21 '23 16:06 jkrobin


@jkrobin Short name. Also, I think your error comes from picking the wrong name: it's erroring out on a response from Pinecone, not in the UI, if that makes sense.

danglingptr0x0 avatar Jun 21 '23 20:06 danglingptr0x0

@jkrobin I had the same issue today.

I changed the index name from the long form, i.e. 'anything-test-2aa184a.svc.us-west1-gcp-free.pinecone.io', to the short name 'anything-test', and that fixed it.
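For anyone else hitting this: the value the Pinecone console shows is the endpoint host, and the first label of that host is `<index-name>-<project-id>`. A hypothetical helper (not part of AnythingLLM) that recovers the short name, assuming the project id is the final hyphen-separated token:

```javascript
// Hypothetical helper: derive the short Pinecone index name from the
// endpoint host shown in the console. Assumes the project id is the
// last hyphen-separated token of the first DNS label.
function shortIndexName(endpointHost) {
  const firstLabel = endpointHost.split('.')[0]; // "anything-test-2aa184a"
  return firstLabel.split('-').slice(0, -1).join('-');
}

console.log(
  shortIndexName('anything-test-2aa184a.svc.us-west1-gcp-free.pinecone.io')
); // "anything-test"
```

The short name is what belongs in the .env index setting; the full host is only for direct HTTP access.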

ajosegun avatar Jun 22 '23 14:06 ajosegun

Thanks to all for your input. My "Could not send chat" error was resolved after I found that my free OpenAI trial credits had expired. Once I set up a paid account on OpenAI the issue was resolved.

jkrobin avatar Jun 22 '23 15:06 jkrobin

Thanks to @torkati44 for the idea. The problem is that OpenAI is not ready to handle the requests because you do not have enough credits. I switched to a different account, and it started working. On a regular trial account, or even a 20-dollar-a-month ChatGPT account, you are severely rate limited:

| Model | RPM | TPM |
| --- | --- | --- |
| gpt-3.5-turbo | 3 | 40,000 |

So it throws an error that is relatively uninformative, but this is the underlying cause.

Our corporate account, on the other hand, allows:

| Model | RPM | TPM |
| --- | --- | --- |
| gpt-3.5-turbo | 3,500 | 90,000 |

And it works fine with that.
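The 429 responses in the earlier logs are exactly this rate limiting. A sketch (not AnythingLLM's actual code) of how a client can absorb occasional 429s with exponential backoff, assuming an axios-style error object that carries `err.response.status`:

```javascript
// Illustrative retry wrapper: re-attempts a request that fails with
// HTTP 429, doubling the delay each time. `doRequest` is a stand-in
// for whatever call hits the OpenAI API.
async function withBackoff(doRequest, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err) {
      const status = err.response && err.response.status;
      if (status !== 429 || attempt >= maxRetries) throw err; // give up
      const delay = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Backoff only smooths over bursts, of course; at 3 requests per minute a chat app will still feel broken, so upgrading the account is the real fix.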

tabgab avatar Jun 23 '23 10:06 tabgab

Our rate limits seem fine. [Screenshot attached from the OpenAI rate-limits page: https://platform.openai.com/account/rate-limits]

Look what errors we get: [Screenshot attached]

Is ChromaDB simply not going to work with AnythingLLM after all, or is something else the root cause of this failure?

pligor avatar Jun 24 '23 15:06 pligor

```
Error: vectordb: unsupported platform linux_arm64. Please file a bug report at https://github.com/lancedb/lancedb/issues
    at getPlatformLibrary (/com.docker.devenvironments.code/server/node_modules/vectordb/native.js:25:15)
    at Object.<anonymous> (/com.docker.devenvironments.code/server/node_modules/vectordb/native.js:33:21)
    at Module._compile (node:internal/modules/cjs/loader:1256:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
    at Module.load (node:internal/modules/cjs/loader:1119:32)
    at Module._load (node:internal/modules/cjs/loader:960:12)
    at Module.require (node:internal/modules/cjs/loader:1143:19)
    at require (node:internal/modules/cjs/helpers:110:18)
    at Object.<anonymous> (/com.docker.devenvironments.code/server/node_modules/vectordb/dist/index.js:29:124)
    at Module._compile (node:internal/modules/cjs/loader:1256:14)
```

Following this issue. Please help.

shrashansh avatar Jun 24 '23 22:06 shrashansh

The root cause is almost always that the .env is not set up properly or is missing values.

The latest changes allow you to set .env values while the app is running (for those who cannot seem to set them on boot for numerous reasons). We also just improved the error messages during chat, which should tell you whether the service is offline, an API key is invalid, and so on.
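A minimal sketch of the kind of .env validation that would have caught most of the reports in this thread; the key names below are illustrative examples, not the project's exact configuration schema:

```javascript
// Illustrative .env sanity check (key names are examples, not
// AnythingLLM's exact list): report required keys that are unset
// or blank so the failure surfaces at boot instead of at chat time.
function missingEnvKeys(env, required) {
  return required.filter((key) => !env[key] || env[key].trim() === '');
}

const required = ['OPEN_AI_KEY', 'VECTOR_DB', 'PINECONE_API_KEY'];
const missing = missingEnvKeys(
  { OPEN_AI_KEY: 'sk-example', VECTOR_DB: 'pinecone' },
  required
);
console.log(missing); // lists PINECONE_API_KEY as missing
```

Failing fast with a list of missing keys is far more actionable than a generic "Could not send chat" toast after the first request.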

timothycarambat avatar Jun 27 '23 00:06 timothycarambat