Brett England
I ran into this too. I created a larger memory buffer for the chat engine and this solved the problem. Why isn't the default ok? Inside `llama_index` this is automatically...
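To illustrate what a larger memory buffer changes, here is a simplified sketch (not the actual `llama_index` implementation) of how a chat memory buffer trims history to a token budget — with too small a limit, older turns get dropped. The function name and the whitespace-based token count are illustrative assumptions:

```python
# Simplified sketch of token-budget trimming, as a chat memory buffer does.
# Tokens are approximated here by whitespace-split words (illustrative only).
def trim_history(messages, token_limit):
    """Keep the most recent messages whose combined token count fits
    within token_limit, dropping the oldest turns first."""
    kept, used = [], 0
    for msg in reversed(messages):           # walk newest -> oldest
        cost = len(msg.split())
        if used + cost > token_limit:
            break                            # budget exhausted: drop the rest
        kept.append(msg)
        used += cost
    return list(reversed(kept))              # restore chronological order

history = ["hello there", "how are you today", "fine thanks"]
print(trim_history(history, 6))  # the oldest message no longer fits
```

Raising the token limit keeps more of the conversation in context, which is why enlarging the buffer fixed the truncation problem.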
Apart from the code that has to be modified and what to change to increase the token buffer, what more do you want? Those are the detailed instructions. If you are...
@HenrikPedDK This https://github.com/imartinez/privateGPT/pull/1750 will skip bad files and not stop ingesting.
Long index loads are fixed with https://github.com/zylon-ai/private-gpt/pull/1763
https://github.com/imartinez/privateGPT/blob/84ad16af80191597a953248ce66e963180e8ddec/private_gpt/components/vector_store/vector_store_component.py#L133
I tried those images, and all still resulted in the illegal instruction. Thanks for the extra images to test with. If I can find some cycles, I will clone the...
You have switched embedding models, probably from `BAAI/bge-small-en-v1.5` (384 dimensions) to `nomic-embed-text` (768), or vice versa. Either way, when you do this the vector dimensions change. You must wipe the DB and...
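A minimal sketch of why the wipe is necessary: vectors of different lengths cannot be compared, so queries embedded with the new model fail against vectors stored by the old one. The vectors below are fake placeholders; only the dimensions (taken from the models named above) matter:

```python
# Dimensions of the two embedding models mentioned above.
DIMS = {"BAAI/bge-small-en-v1.5": 384, "nomic-embed-text": 768}

def dot(a, b):
    """Plain dot product, as used at the core of cosine/inner-product search."""
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    return sum(x * y for x, y in zip(a, b))

stored = [0.0] * DIMS["BAAI/bge-small-en-v1.5"]  # vector already in the DB
query = [0.0] * DIMS["nomic-embed-text"]         # vector from the new model
try:
    dot(stored, query)
except ValueError as e:
    print(e)  # similarity is undefined across dimensions
```

Since every stored vector has the old dimensionality, there is no way to mix the two sets; wiping the store and re-ingesting with the new model is the only fix.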
Why are you calling serve anyway? I'm on a Mac and I've never had to use this command. If you use the Ollama installer on the Mac, it installed...
Testing using a Postgres 14 docstore with 2,421,062 rows. With PR 1763, elapsed time **12s**:
```
22:58:37.941 [INFO ] llama_index.core.indices.loading - Loading all indices.
22:58:49.454 [INFO ] private_gpt.ui.ui - Mounting the...
```
> Thanks for the suggestion @dbzoo
>
> extra: If the default keep_alive is left unchanged, I don't wrap it, leaving the requests just as they used to be :)

Thanks...