Iván Martínez
The message is fine as the initial message, but it should not just exit after that. This is how it should look: ``` Using embedded DuckDB with persistence: data...
You probably had an existing database from a previous ingestion. We changed the model used to do the ingestion, and that made previous databases incompatible with new ingestions (vector dimensions...
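A minimal sketch of why the old database becomes unusable: similarity scores assume vectors of equal dimension, so an index built with one embedding model cannot be queried with another. The function and the dimension numbers below are illustrative, not the project's actual code.

```python
# Illustrative: a dot-product similarity, as used under the hood by vector stores.
def dot(a, b):
    # Vectors from different embedding models typically differ in length,
    # which makes any similarity computation between them undefined.
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    return sum(x * y for x, y in zip(a, b))

old_vec = [0.1] * 4096   # dimensionality of the previous model's embeddings (illustrative)
new_vec = [0.1] * 384    # dimensionality of the new model's embeddings (illustrative)

try:
    dot(old_vec, new_vec)
except ValueError as e:
    print("incompatible:", e)
```

This is why re-running the ingestion from scratch (rather than appending to the old `db`) is the fix.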
Looks like an issue with mixed Python versions. I'd suggest creating a Python virtual environment for the project and installing the dependencies there. Here are the steps (just create...
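A minimal sketch of the suggested setup (the `.venv` directory name is just a convention, not something the project mandates):

```shell
# Create an isolated environment next to the project checkout.
python3 -m venv .venv
# Activate it (on Windows use: .venv\Scripts\activate).
. .venv/bin/activate
# From here on, "python" and "pip" resolve inside the venv.
python -m pip list
```

With the environment active, `python -m pip install -r requirements.txt` installs the project's dependencies against a single, consistent interpreter, avoiding the mixed-versions problem.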
Awesome review @PulpCattel! @abhiruka I agree this adds a lot of value, please review the comments of the current review and we'll be good to go. Also, very valuable ideas...
We moved away from llama embeddings. Pull the latest changes, install requirements, remove the `db` folder, and run the ingestion again.
@vnk8071 @roy-mootsana @sime2408 thanks a lot for this, and sorry for not prioritizing getting this into main. Please don't think I'm ghosting this idea; on the contrary, I'm trying to build...
Hey @sime2408 I totally agree. Discord will bring more synchronous and rich communication, which is great for discussions and agreements, but it also requires a lot of attention, presence,...
Maybe we should try this instead @maozdemir https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html?highlight=TextLoader#c-auto-detect-encodings
I don't think increasing the chunk size that much would be beneficial as a default value, given it'd make the prompt much larger, increasing the LLM's response times....
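A back-of-the-envelope sketch of the trade-off: the retrieved context grows linearly with chunk size, so doubling the chunks roughly doubles what the LLM has to read on every question. The numbers below (chunks retrieved, characters per chunk) are illustrative, not the project's defaults.

```python
# Retrieved context scales linearly with chunk size: k chunks are stuffed
# into the prompt, so prompt length ~ k * chunk_size (plus the question).
k = 4  # number of chunks retrieved per question (illustrative)
for chunk_size in (500, 1000, 2000):  # characters per chunk (illustrative)
    print(f"chunk_size={chunk_size} -> ~{k * chunk_size} characters of context")
```

Since local LLM latency grows with prompt length, a large default chunk size slows every query down, even when a smaller chunk would have answered just as well.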