Javier Martinez
Please update to the latest changes in main. Also, you're running Python 3.10; can you upgrade to 3.11?
Hey @generall, First of all, thank you for your quick reply! Regarding my issue, our workflow is a basic RAG pipeline. We ingest some documents, partition and process them,...
Hey @generall, I'm aware of this; it's our production configuration. The screenshots I've attached show the `on_disk=true` setting in both `vectors` and `hnsw_config`. However, I noticed that the `payload_index` is...
Hey @generall, How can I configure `on_disk` in the payload index? I was checking the Qdrant client documentation and couldn't find it: https://api.qdrant.tech/api-reference/indexes/create-field-index My current configuration is: ``` dense_config =...
I repeated the same experiment; the behavior has changed, but the issue is not fully resolved. After a few minutes of doing the same as...
With 256Mi, it did crash with this configuration, but with 512Mi, it worked fine with 40 concurrent users. I’ve attached the current configuration we have. ``` { "params": { "vectors":...
I read it before opening the issue. We understand that our case is the second one, but I would like to clarify why Qdrant selects RAM by default.
Can you confirm your `ollama` version? I'm using 0.1.48 without any issue.
Have you pulled the latest changes? Have you reinstalled the dependencies and extras?
Could you give us more details: the LLM backend, whether it's running on Docker, etc.? Please check https://github.com/zylon-ai/private-gpt/issues/1955 if you are using LlamaCPP.