
[Bug]: Not getting chat response when running Quivr on WSL local machine?

Open quic-khrahul opened this issue 1 year ago • 6 comments

What happened?

I am using WSL on my Windows machine to run the Quivr app locally.

I am able to successfully launch the project using the instructions below:

docker compose pull
docker compose -f docker-compose-ollama.yml up --build

At http://localhost:3000/chat, when I try to chat, the app does not respond. Am I missing something, or does this application not work on WSL? I am not getting any error as such.

Thanks in Advance.

Relevant log output

No response

Twitter / LinkedIn details

No response

quic-khrahul avatar Jan 02 '24 09:01 quic-khrahul

🤖

Hello @quic-khrahul! I'm Dosu, an AI assistant here to help you with your Quivr issues while we wait for a human maintainer. I can assist you with bug troubleshooting, answer any questions you might have, and even guide you on how to contribute. Nice to meet you!

Based on the information you've provided, there are a couple of potential reasons why you might not be receiving a chat response when running Quivr on your WSL local machine.

One possibility is that the environment variables are not being loaded correctly. Quivr uses the load_dotenv() function to load environment variables from a .env file. If this file is not in the same directory as the script or is not accessible, the environment variables will not be loaded and the bot will not function correctly. You can add some print statements to check if the environment variables are loaded correctly and if the bot is receiving any messages. Here is how you can do it:

# After loading the environment variables, print them to check if they are loaded correctly
print(f"Telegram Bot Token: {telegram_bot_token}")
print(f"Quivr Token: {quivr_token}")
print(f"Quivr Chat ID: {quivr_chat_id}")
print(f"Quivr Brain ID: {quivr_brain_id}")
print(f"Quivr URL: {quivr_url}")

# In the handle_message function, print the user message to check if the bot is receiving any messages
print(f"User Message: {user_message}")
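Before adding those prints, it can help to confirm the .env file itself is being found and parsed. As a minimal stdlib-only sketch of what load_dotenv() roughly does (the path ".env" and the helper name are assumptions for illustration, not Quivr's actual code):

```python
from pathlib import Path


def load_env_file(path: str = ".env") -> dict:
    """Parse KEY=VALUE lines from a .env file into a dict (comments skipped).

    Raises FileNotFoundError if the file is missing, which is exactly the
    silent-failure case described above.
    """
    env_path = Path(path)
    if not env_path.is_file():
        raise FileNotFoundError(f"{path} not found; environment variables will be missing")
    values = {}
    for line in env_path.read_text().splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything that is not KEY=VALUE
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values
```

If this raises, the .env file is not where the process expects it, which would explain the missing tokens.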

Please make sure to remove these print statements or replace them with proper logging once you have finished debugging, as they can expose sensitive information. You can find this in the connectors/telegram_bot/main.py file.
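A hedged sketch of what "proper logging" could look like here, using the stdlib logging module and masking the token so secrets do not end up in log files (the variable name follows the snippet above; it is an assumption about the bot's code, and the value below is a placeholder):

```python
import logging

# Configure root logging once at startup; DEBUG level mirrors the print-based
# debugging above but can be turned down to INFO in production.
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logger = logging.getLogger("quivr.telegram_bot")


def mask(secret: str) -> str:
    """Show only the last 4 characters of a secret, or a marker if unset."""
    return "****" + secret[-4:] if secret else "<unset>"


telegram_bot_token = "example-token-1234"  # placeholder for illustration
logger.debug("Telegram bot token loaded: %s", mask(telegram_bot_token))
```

This keeps the diagnostic value of the print statements without exposing full credentials.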

Another possibility is that the bot is not receiving any messages. This could be because the bot is not properly registered with Telegram, or because the chat ID being used is incorrect.

As for your question about system requirements, Quivr does not have any specific system requirements that might not be met by the WSL environment. The Dockerfile provided in the context shows that Quivr is built on a Node.js environment and uses Python, make, g++, and git, all of which can be installed and run in a WSL environment. The application is containerized using Docker, which is also supported in WSL. You can find more details in the cms/quivr/Dockerfile file.
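If Docker itself is the suspect, a quick sanity check from inside the WSL shell can rule it out. A minimal sketch, assuming Docker Desktop with WSL integration (or a dockerd running inside WSL):

```shell
# Check that the docker CLI exists and the daemon is reachable from WSL.
check_docker() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker CLI not found; install Docker or enable WSL integration in Docker Desktop"
    elif docker info >/dev/null 2>&1; then
        echo "docker daemon reachable"
    else
        echo "docker CLI found but daemon not reachable; start Docker Desktop"
    fi
}
check_docker
```

If the daemon is not reachable, the containers behind http://localhost:3000 were never actually started, which would also produce a silently unresponsive chat.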

I hope this helps! If you have any other questions or need further clarification, please let me know.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] avatar Jan 02 '24 09:01 dosubot[bot]

Add any OpenAI API key even though you're not using it. This worked for me.

sumitsodhi88 avatar Jan 02 '24 20:01 sumitsodhi88

I have the same problem. At first it worked just fine with default brain but when I created one it started to give blank responses despite the brain you asked.

enriqueramosg avatar Jan 04 '24 09:01 enriqueramosg

Add any OpenAI API key even though you're not using it. This worked for me.

I tried adding an OpenAI API key, but it is still not working: "Error: you exceeded your current quota, please check your plan and billing details". It is a new OpenAI account, and the key never worked. Why is it asking for an OpenAI key if I am using llama2?

rahulkhandelwal456 avatar Jan 05 '24 16:01 rahulkhandelwal456

I had to modify the "Maximum Words per Response" value on this page and click Save; then it worked for me: http://localhost:3000/brains-management/BRAIN_ID

mikestaub avatar Jan 06 '24 12:01 mikestaub

I had to modify the "Maximum Words per Response" value on this page and click Save; then it worked for me: http://localhost:3000/brains-management/BRAIN_ID

I tried changing "Maximum Words per Response" as well, but I am still getting a blank response.

backend-core | 2024-01-08 07:31:38,486:INFO - HTTP Request: GET http://kong:8000/rest/v1/customers?select=email&email=eq.rahul.com "HTTP/1.1 500 Internal Server Error"
backend-core | 2024-01-08 07:31:38,486 [INFO] models.databases.supabase.user_usage: None
backend-core | 2024-01-08 07:31:38,486 [ERROR] models.databases.supabase.user_usage: Error while checking if user is a premium user
backend-core | 2024-01-08 07:31:38,486 [ERROR] models.databases.supabase.user_usage: {'code': 'XX000', 'details': None, 'hint': None, 'message': 'called Result::unwrap() on an Err value: InvalidPosition'}

Do I need to register somewhere else as well?

Thanks,

rahulkhandelwal456 avatar Jan 08 '24 07:01 rahulkhandelwal456

Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.

github-actions[bot] avatar Apr 07 '24 08:04 github-actions[bot]