byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
This adds the ability to change the cache directory for ColPali models. Closes issue: https://github.com/AnswerDotAI/byaldi/issues/62
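Since byaldi resolves model weights through the Hugging Face hub, the download location can generally be redirected by setting the `HF_HOME` (or `HUGGINGFACE_HUB_CACHE`) environment variable before any model is loaded. A minimal sketch; the path and model name are illustrative:

```python
import os

# Redirect the Hugging Face cache (where ColPali weights land)
# BEFORE importing/loading any model. Path is illustrative.
os.environ["HF_HOME"] = "/data/hf-cache"

# Hypothetical usage, assuming byaldi is installed:
# from byaldi import RAGMultiModalModel
# model = RAGMultiModalModel.from_pretrained("vidore/colpali-v1.2")

print(os.environ["HF_HOME"])
```

Setting the variable in the shell (`export HF_HOME=/data/hf-cache`) before launching Python achieves the same effect.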
Hi, I would like to ask if there is support for processing the pages of a document asynchronously. Additionally, is it possible to process multiple documents concurrently? Since the processing...
I am encountering an issue with the vidore/colqwen2-v1.0 model. Every time I run my script, the model downloads two versions: models--vidore--colqwen2-v1.0 and models--vidore--colqwen2-base. This occurs even though I have the two...
Hello Byaldi Team! ## Description I added BitsAndBytes support for all us GPU-poor people. This enables 4-bit/8-bit quantization to run the models on smaller GPUs or, in my case, leave...
With ByAldi doing indexing/retrieval on the page level by default, I wanted to extract data (e.g: {List[Question: Answer], Title}) from each page and add that to the page metadata and...
How to call remote Ollama: how do I call the Ollama service through a URL?
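Ollama exposes an HTTP API (by default on port 11434, e.g. the `/api/generate` endpoint), so a remote instance can be reached by pointing requests at its URL. A minimal stdlib sketch that builds such a request; the host address and model name are assumptions, and the request is constructed but not sent:

```python
import json
import urllib.request

# Assumed remote Ollama endpoint; host and model name are illustrative.
OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

payload = json.dumps({
    "model": "llama3",
    "prompt": "Describe this page.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.full_url)
```

The official `ollama` Python client also accepts a host argument for the same purpose, if installed.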
### Changes Overview - Added method save_pretrained() to RAGMultiModalModel - Added helpful examples showing saving indexes in a specific directory using "index_root"
VisRAG shows impressive performance compared with ColPali on several benchmarks. Could you integrate it, @bclavie? https://github.com/openbmb/visrag
When indexing a large document corpus, embedding runs slower and slower. The reason is that at each iteration all the embedding vectors are re-stored, instead of only the newly created ones....
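The fix described above can be sketched in plain Python: append only the vectors computed in the current iteration, rather than rebuilding the full collection every time. `embed_page` is a hypothetical stand-in for the real per-page embedding call:

```python
# embed_page is a dummy stand-in for the actual model call.
def embed_page(page: int) -> list[float]:
    return [float(page)] * 4  # placeholder 4-dim vector

all_embeddings: list[list[float]] = []
for page in range(3):
    new_vec = embed_page(page)       # compute ONLY this page's vector
    all_embeddings.append(new_vec)   # O(1) append; no re-copying of
                                     # previously stored embeddings

print(len(all_embeddings))  # 3
```

The slow pattern rebuilds the whole list each iteration (e.g. `all_embeddings = all_embeddings + [new_vec]`), which copies every stored vector again and makes indexing time grow quadratically with corpus size.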
How to Load Locally Downloaded Models (vidore/colqwen2-v1.0-hf or vidore/colqwen2-v1.0) in byaldi?
Thank you for your excellent work on byaldi! 🙏 I've manually downloaded the vidore/colqwen2-v1.0-hf and vidore/colqwen2-v1.0 models from Hugging Face ([link](https://huggingface.co/vidore/models)) to my server. Could you clarify how to properly integrate these...
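Manually downloaded models end up in the Hugging Face cache under a `models--<org>--<name>` directory, and byaldi's loader can typically be pointed at a local directory instead of a hub id (this is an assumption about the loader; verify against the byaldi docs). A sketch of locating such a directory; the cache root is illustrative:

```python
from pathlib import Path

# Hugging Face stores each repo under <cache>/models--<org>--<name>.
# The default cache root below is illustrative; adapt to your server.
def hf_cache_dir(model_id: str,
                 cache_root: str = "~/.cache/huggingface/hub") -> Path:
    return Path(cache_root).expanduser() / (
        "models--" + model_id.replace("/", "--")
    )

local_dir = hf_cache_dir("vidore/colqwen2-v1.0")
print(local_dir.name)  # models--vidore--colqwen2-v1.0

# Hypothetical usage, assuming byaldi accepts a local path:
# from byaldi import RAGMultiModalModel
# model = RAGMultiModalModel.from_pretrained(
#     str(local_dir / "snapshots" / "<revision>"))
```

Setting `HF_HUB_OFFLINE=1` additionally prevents any re-download attempt once the weights are in place.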