Shamane Siri
It would be amazing to have a fine-tuning script for E5 models. Do we need hard negatives when fine-tuning for downstream tasks?
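For reference, this is the kind of triplet setup I have in mind — only a rough sketch with sentence-transformers, an illustrative checkpoint (`intfloat/e5-base-v2`), and made-up data; the loss choice is an assumption, not an official recipe:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative only: E5 models expect "query: " / "passage: " prefixes on their inputs.
model = SentenceTransformer("intfloat/e5-base-v2")

# Hypothetical triplets: (query, positive passage, hard-negative passage).
train_examples = [
    InputExample(texts=[
        "query: how do transformers work",
        "passage: Transformers use self-attention to weigh tokens against each other ...",
        "passage: The electrical transformer was invented in the 1880s ...",  # hard negative
    ]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
# MultipleNegativesRankingLoss also treats the other in-batch passages as negatives,
# so explicit hard negatives are optional, but they usually help retrieval quality.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```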
As far as I understand, this [line](https://github.com/cg123/mergekit/blob/mixtral/mergekit/scripts/mixtral_moe.py#L137) subtracts the negative prompt embeddings from the positive prompt embeddings. What is the reason for this? @cg123 @DocShotgun @q5sys
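For context, my rough mental model of what that line computes (this is not the actual mergekit code; the function and tensor names below are invented): averaging the positive-prompt hidden states and subtracting the negative-prompt average leaves a gate direction that reacts to what is specific to the positive prompts rather than to features both prompt sets share.

```python
import torch

def approximate_gate_vector(hidden_pos: torch.Tensor, hidden_neg: torch.Tensor) -> torch.Tensor:
    """Toy illustration of a contrastive gate vector (not mergekit's implementation).

    hidden_pos / hidden_neg: hidden states for the positive / negative prompts,
    shape (num_prompts, hidden_dim). Subtracting the negative-prompt mean from the
    positive-prompt mean cancels directions shared by both sets of prompts.
    """
    gate = hidden_pos.mean(dim=0) - hidden_neg.mean(dim=0)
    return gate / gate.norm()
```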
I tried installing this both with pip and from source, and got the same error.
## 🐛 Bug

This [text transformer](https://github.com/Lightning-AI/tutorials/blob/main/lightning_examples/text-transformers/text-transformers.py) tutorial fails with transformers==4.31.0 **and works fine with transformers==4.27.0 and below**.

### To Reproduce

Run the notebook.

Error: " File "/opt/conda/envs/cb-ml/lib/python3.10/site-packages/torch/autograd/__init__.py", line..."
How can I evaluate summaries with BARTScore?
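For reference, here is a rough sketch of what I understand BARTScore to compute — the average token log-likelihood of the summary given the source under a BART checkpoint. This is plain transformers code, not the scorer shipped in this repo, and the checkpoint is only an example:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

def bart_score(source: str, summary: str) -> float:
    """Average per-token log-likelihood of `summary` given `source` (BARTScore-style)."""
    src = tokenizer(source, return_tensors="pt", truncation=True)
    tgt = tokenizer(summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(input_ids=src.input_ids,
                    attention_mask=src.attention_mask,
                    labels=tgt.input_ids)
    # out.loss is the mean cross-entropy over the summary tokens; negate it to
    # get the mean log-likelihood (higher = better faithfulness/fluency).
    return -out.loss.item()
```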
What is the gamma function you have used to implement the NBL likelihood?
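If this refers to the negative binomial likelihood: the standard log-likelihood only needs the log-gamma function (e.g. `torch.lgamma` or `scipy.special.gammaln`), not the gamma function itself. A generic sketch in the mean/dispersion (`mu`, `theta`) parameterization follows; it may differ from the implementation actually used here:

```python
import torch

def nb_log_likelihood(y: torch.Tensor, mu: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Negative binomial log-likelihood with mean `mu` and dispersion `theta`.

    log NB(y; mu, theta) = lgamma(y + theta) - lgamma(theta) - lgamma(y + 1)
                           + theta * log(theta / (theta + mu))
                           + y * log(mu / (theta + mu))
    """
    return (torch.lgamma(y + theta) - torch.lgamma(theta) - torch.lgamma(y + 1.0)
            + theta * torch.log(theta / (theta + mu))
            + y * torch.log(mu / (theta + mu)))
```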
Could you please let me know the best way to create the **above-mentioned .npy file** given a CSV with passages and their titles?
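To make the question concrete, here is only a guess at the conversion, assuming the .npy file is an object array of `[title, passage]` string pairs and the CSV has `title` and `passage` columns — the actual expected layout is exactly what I am asking about, so please correct it:

```python
import numpy as np
import pandas as pd

# Assumed CSV columns: "title", "passage" (adjust to the real file).
df = pd.read_csv("passages.csv")

# Assumed target layout: one row per passage, [title, passage] as strings.
data = df[["title", "passage"]].to_numpy(dtype=object)

np.save("passages.npy", data, allow_pickle=True)
```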