anything-llm
Chat errors out
Installed successfully on Windows Docker Desktop. Chroma Local Install is being used as Vector DB. Docker build was successful.
Workspace created successfully. Document loaded successfully.
However when I try to chat, a message pops up as below...
localhost:3001 says Could not send chat
No other error or notification appears, and there is no response in the chat.
Kindly help.
+1
+1
+1
Hi, I'm experiencing the same error. I'm using Windows 11
+1 using Win 10, set up is working fine, localhost:3002 says "Could not send chat."
I'm on intel mac with same problem - "Could not send chat."
Discovered I did not have all the API fields in the .env file filled out properly.
That fixed the problem for me. Documentation could be improved to make this process more clear.
With all things green, I'm having the same issue with a Mac M2. Everything else supposedly went well.
Unfortunately, I get the same thing after fighting through the installation on a Mac M1.
All things green, I'm having the same issue with a Mac M2. The installation and all worked well.
+1 Mac M1
same here, installed successfully in WSL2 using Docker instructions, all green, uploaded a test doc, now getting cannot send chat error.
Yes, same message for me: "cannot send chat" error.
Looking at the Docker log, the result is an Internal Server Error:
```
curl --location 'http://localhost:3001/api/workspace/new_workspace/chat' \
  --header 'Content-Type: text/plain' \
  --data '{message: "test", mode: "query"}'
```
```
SyntaxError: Unexpected token m in JSON at position 1
    at JSON.parse (
```
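The SyntaxError above comes from the request body itself: `{message: "test", mode: "query"}` is not valid JSON because the keys are unquoted, so the server's `JSON.parse` fails on the first `m`. A minimal sketch of the difference, using Python's `json` module as a stand-in for `JSON.parse`:

```python
import json

# The body as sent: a JavaScript-style object literal, not valid JSON
# (JSON requires keys to be double-quoted).
malformed = '{message: "test", mode: "query"}'

# The same payload as strict JSON.
valid = '{"message": "test", "mode": "query"}'

try:
    json.loads(malformed)
except json.JSONDecodeError as e:
    # Fails at the unquoted key, just like the server's
    # "Unexpected token m" error.
    print(f"parse failed: {e}")

payload = json.loads(valid)
print(payload["message"])  # -> test
```

Sending the request with `--header 'Content-Type: application/json'` and double-quoted keys should avoid this particular parse error, though the chat may still fail for the other reasons discussed in this thread.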
I'm getting the same "Could not send chat" on M1 Mac with docker install. All API variables show green in the UI.
I do see the following in the docker log:
```
PineconeClient: Error calling describeIndex: 404: Not Found [PineconeError: PineconeClient: Error calling describeIndex: 404: Not Found]
```
and
```
SELECT * FROM workspaces WHERE slug = 'jkr-anything-test-1'
SELECT * FROM workspace_documents WHERE workspaceId = 1
Request failed with status code 429 Error: Request failed with status code 429
    at createError (/app/server/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/app/server/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/app/server/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (node:events:525:35)
    at endReadableNT (node:internal/streams/readable:1359:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
```
Any suggestions welcome!
ALSO... can someone tell me if the Pinecone index name in .env should be the short name (e.g. "anything-test") or the long name (e.g. "anything-test-2aa184a.svc.us-west1-gcp-free.pinecone.io") shown in the Pinecone console?
Thanks!
@jkrobin Short name. Also, I think your error is due to picking the wrong name, since it seems to be erroring out on a response from Pinecone and not in the UI, if that makes sense.
@jkrobin I had the same issue today.
I changed the index name from the long form, i.e. 'anything-test-2aa184a.svc.us-west1-gcp-free.pinecone.io', to the short name 'anything-test'.
Thanks to all for your input. My "Could not send chat" error was resolved after I found that my free OpenAI trial credits had expired. Once I set up a paid account on OpenAI the issue was resolved.
Thanks to @torkati44 for the idea. The problem is that OpenAI is not ready to handle the requests, as you do not have enough credits. I switched to a different account, and it started working. On a regular trial account, or even a 20-dollar/month chat account, you are severely rate limited.
| MODEL | RPM | TPM |
|---|---|---|
| CHAT | | |
| gpt-3.5-turbo | 3 | 40,000 |
So it will throw an error, one that is relatively uninformative, but this is the underlying cause.
Our corporate account on the other hand can do:
| MODEL | RPM | TPM |
|---|---|---|
| CHAT | | |
| gpt-3.5-turbo | 3,500 | 90,000 |
And it works fine with that.
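Given the limits above, a trial account's 3 requests per minute on gpt-3.5-turbo is easy to exceed, which is exactly when the 429 appears. A common client-side mitigation, sketched here in Python and not part of AnythingLLM itself (`call_api` is a hypothetical stand-in for the OpenAI request), is exponential backoff with jitter on 429 responses:

```python
import random
import time


def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff with jitter: ~1s, ~2s, ~4s, ... capped at `cap`."""
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)


def call_with_retry(call_api, max_attempts=5, base=1.0):
    """Retry call_api while it signals HTTP 429, sleeping between attempts.

    call_api is a hypothetical stand-in returning (status_code, body).
    """
    for attempt in range(max_attempts):
        status, body = call_api()
        if status != 429:
            return status, body
        time.sleep(backoff_delay(attempt, base=base))
    # Out of attempts: return the last 429 so the caller can surface it.
    return status, body
```

This only smooths over bursts; if the account's RPM/TPM quota is fundamentally too low for the workload (or the credits are expired, as above), retries will not help.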
Our rate limits seem fine.
A screenshot from OpenAI platform webpage:
https://platform.openai.com/account/rate-limits
Look what errors we get:
Is ChromaDB not going to work with AnythingLLM after all, or is something else the root cause of this failure?
```
vectordb: unsupported platform linux_arm64. Please file a bug report at https://github.com/lancedb/lancedb/issues
Error: vectordb: unsupported platform linux_arm64. Please file a bug report at https://github.com/lancedb/lancedb/issues
    at getPlatformLibrary (/com.docker.devenvironments.code/server/node_modules/vectordb/native.js:25:15)
    at Object.
```
Following this issue. Please help.
The root cause is almost always that the .env is not setup properly or is missing values.
The latest changes allow you to set the .env while the app is running (for those who cannot seem to set it on boot for numerous reasons). We also just improved error messages during chat, which should let you know if the service is offline, the API key is invalid, and so on.
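Since missing or blank .env values are the most common cause reported here, a quick sanity check before booting can catch them early. This is just a sketch: the key names below are hypothetical examples, and the real required set depends on which LLM and vector database providers you configured:

```python
import os

# Hypothetical example keys; substitute whatever your chosen
# LLM and vector DB providers actually require in .env.
REQUIRED_KEYS = ["OPEN_AI_KEY", "PINECONE_API_KEY", "PINECONE_INDEX"]


def missing_env_keys(environ, required):
    """Return the required keys that are unset or blank."""
    return [k for k in required if not environ.get(k, "").strip()]


missing = missing_env_keys(os.environ, REQUIRED_KEYS)
if missing:
    print("Fill these .env values before chatting:", ", ".join(missing))
```

A blank value (e.g. `PINECONE_INDEX=`) is treated the same as an unset one, since either will break chat in the same unhelpful way.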