Raphael Sourty
Hi @filippo82, I think it could be cool to add a distillation loss, of course. I plan to improve the loss function of the train module in the following weeks, there...
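For context, a distillation loss for retrieval typically pushes the student's query-document scores toward a teacher's scores. The sketch below shows a generic margin-MSE formulation in PyTorch; it is only illustrative and not neural-cherche's implementation.

```python
import torch
import torch.nn.functional as F


def margin_mse_distillation_loss(
    student_pos_scores: torch.Tensor,  # student scores for (query, positive) pairs
    student_neg_scores: torch.Tensor,  # student scores for (query, negative) pairs
    teacher_pos_scores: torch.Tensor,  # teacher scores for the same positives
    teacher_neg_scores: torch.Tensor,  # teacher scores for the same negatives
) -> torch.Tensor:
    """Match the student's positive-negative margin to the teacher's margin."""
    student_margin = student_pos_scores - student_neg_scores
    teacher_margin = teacher_pos_scores - teacher_neg_scores
    return F.mse_loss(student_margin, teacher_margin)
```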
Hi @filippo82, I did release neural-cherche 1.1.0, which improves loss stability and brings better default parameters to the models. I also released [neural-tree](https://github.com/raphaelsty/neural-tree) in order to accelerate ColBERT. Feel free...
Congratulations on this amazing work @bclavie 🤩. Thank you also for the documentation with the DataLoader. I'll run your branch in the following days to make sure everything runs smoothly...
I don't have multiple GPUs (not even one) at home, so I cannot mimic your environment. I propose to add an `accelerate` attribute to all the models. If set to...
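To make the proposal concrete, here is a rough sketch of what such an attribute could do; the `accelerate` parameter and the wrapper class are hypothetical and not part of the released API.

```python
from accelerate import Accelerator


class AcceleratedModel:
    """Hypothetical sketch: if `accelerate=True`, wrap the model with
    Hugging Face Accelerate so device placement is handled automatically,
    while single-GPU / CPU users are unaffected."""

    def __init__(self, model, accelerate: bool = False) -> None:
        self.accelerator = Accelerator() if accelerate else None
        self.model = self.accelerator.prepare(model) if accelerate else model
```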
> Hey, did you submit the comments? I can't see the suggested code anywhere, though it might be me being holiday-tired...

Ahah, missed this, sorry.

> (as for neural-cherche itself,...
Hi @KAGAII, make sure you update neural-cherche using `pip install neural-cherche --upgrade` to get the 1.4.3 version.

```python
from neural_cherche import models, rank, retrieve, utils

device = "cpu"  # or...
```
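For reference, a minimal SparseEmbed retrieval pipeline looks roughly like the sketch below; the checkpoint name and the exact method names (`encode_documents`, `add`, `encode_queries`) follow my reading of the neural-cherche README and should be treated as assumptions rather than the canonical example.

```python
from neural_cherche import models, retrieve

device = "cpu"

# Example checkpoint name; substitute the model you are actually using.
model = models.SparseEmbed(
    model_name_or_path="raphaelsty/neural-cherche-sparse-embed",
    device=device,
)

retriever = retrieve.SparseEmbed(key="id", on=["document"], model=model)

documents = [
    {"id": 0, "document": "Paris is the capital of France."},
    {"id": 1, "document": "Berlin is the capital of Germany."},
]

# Encode and index the documents, then score a query against them.
documents_embeddings = retriever.encode_documents(documents=documents)
retriever = retriever.add(documents_embeddings=documents_embeddings)

queries_embeddings = retriever.encode_queries(queries=["capital of France"])
scores = retriever(queries_embeddings=queries_embeddings, k=2)
```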
@KAGAII There is definitely something wrong with SparseEmbed right now; we recently updated SparseEmbed, but we may need to roll it back to the previous version @arthur-75. I'll make an...
Thank you @tom9358, I'll update the requirements! 😀
This issue is solved; we accelerated the tests :)
@NohTow Previously done in the [cherche faiss index](https://github.com/raphaelsty/cherche/blob/main/cherche/index/faiss_index.py); I think it's the way to go to initialize the tree once you start adding documents. I'll handle this MR unless you want...
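Roughly, the pattern referenced from the cherche faiss index is to defer building the index until the first batch of documents arrives, so the dimensionality can be inferred from the data. The sketch below illustrates that lazy initialization with a plain faiss index; it is not neural-tree's actual code and the class name is made up.

```python
import faiss
import numpy as np


class LazyIndex:
    """Build the underlying index only when the first documents are added."""

    def __init__(self) -> None:
        self.index = None

    def add(self, embeddings: np.ndarray) -> "LazyIndex":
        if self.index is None:
            # Initialize the index from the dimensionality of the first batch.
            self.index = faiss.IndexFlatIP(embeddings.shape[1])
        self.index.add(embeddings.astype(np.float32))
        return self
```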