[BUG]: When deploying AnythingLLM using docker-compose.yml and accessing directly on port 3001 without an Nginx proxy, the error 'Could not respond to message. An error occurred while streaming response. network error' appears
How are you running AnythingLLM?
Docker (remote machine)
What happened?
When deploying AnythingLLM using docker-compose.yml and accessing it directly on port 3001 without an Nginx proxy, the error message 'Could not respond to message. An error occurred while streaming response. network error' appears.
docker-compose.yml

```yaml
version: '3.8'
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    container_name: anythingllm
    ports:
      - "3001:3001"
    cap_add:
      - SYS_ADMIN
    user: "${UID}:${GID}"
    environment:
      # Adjust for your environment
      - STORAGE_DIR=/app/server/storage
    env_file:
      - .env
    volumes:
      - ./data:/app/server/storage
      - ./.env:/app/server/.env
    restart: always
    networks:
      - anything-llm
    extra_hosts:
      - "host.docker.internal:host-gateway"
networks:
  anything-llm:
    driver: bridge
```
.env

```
LLM_PROVIDER='ollama'
EMBEDDING_MODEL_PREF='nomic-embed-text:latest'
OLLAMA_BASE_PATH='http://192.168.31.2:11434'
OLLAMA_MODEL_PREF='deepseek-r1:7b'
OLLAMA_MODEL_TOKEN_LIMIT='4096'
EMBEDDING_ENGINE='ollama'
EMBEDDING_BASE_PATH='http://192.168.31.2:11434'
EMBEDDING_MODEL_MAX_CHUNK_LENGTH='8192'
JWT_SECRET='make this a large list of random numbers and letters 20+'
STORAGE_DIR='/app/server/storage'
```
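Since both the LLM and the embedder point at an external Ollama instance, it is worth confirming that the endpoint configured in `.env` is actually reachable from the Docker host. A minimal diagnostic sketch — the IP and port are taken from the `.env` above, and `/api/tags` is Ollama's model-listing endpoint; adjust the address for your network:

```shell
# Probe the Ollama endpoint configured in .env (address copied from the
# issue's .env; change it to match your setup). Prints a status either way.
if curl -fsS --max-time 5 http://192.168.31.2:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama reachable"
else
  echo "Ollama NOT reachable from this host"
fi
```

If this fails from the host, it will also fail from inside the container, and the streaming error would then be a connectivity problem rather than a crash.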
Error screenshot
Are there known steps to reproduce?
No response
All of this looks right at first glance, can you pull the container logs during the error? This should give an indication as to what is going wrong throwing that error.
Step 1: Run `docker logs -f anythingllm`, as shown below.
Step 2: When I type something into the chat box, the log stream stops and the container prints a few errors.
Almost certainly the pinned issue - https://github.com/Mintplex-Labs/anything-llm/issues/1331
TLDR; CPU is too old and does not support AVX2 instruction set. Can you confirm the type of CPU running the machine? This has to do with LanceDB as the vector db
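On Linux, AVX2 support can be confirmed with a quick grep of `/proc/cpuinfo` — a diagnostic sketch; run it on the Docker host, not inside the container:

```shell
# Check whether the host CPU advertises the AVX2 instruction set
# (relevant because LanceDB's native binaries require AVX2).
if grep -q avx2 /proc/cpuinfo; then
  echo "AVX2 supported"
else
  echo "AVX2 NOT supported"
fi
```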
The CPU model is an Intel i5-10400T (QSRL).
If you use another vector database solution does this error go away? Otherwise, this is something else entirely causing the container to exit.
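For reference, switching the vector database is an `.env` change plus a container restart. A sketch assuming the variable names from AnythingLLM's `.env.example` (here pointing at a hypothetical external Qdrant instance — verify the exact keys against your version's example file):

```
# Hypothetical example: replace the default LanceDB with Qdrant.
# VECTOR_DB and QDRANT_ENDPOINT are assumed from .env.example;
# the endpoint address is illustrative.
VECTOR_DB='qdrant'
QDRANT_ENDPOINT='http://192.168.31.2:6333'
```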
Ok, I will try using another vector database, thank you very much.
@showcup any report on this? Does using another vector database work for you?