Results 117 comments of chaofan

The reranker does not compute embeddings via `encode`; instead it uses the score-computation function: pass in a query and a passage, and it returns a relevance score.
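For illustration only, a minimal pure-Python stand-in for this pair-scoring interface (the lexical-overlap scorer below is a hypothetical toy; real scores come from the cross-encoder model):

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def compute_score(pairs):
    """Toy stand-in for a reranker's compute_score: each (query, passage)
    pair is scored jointly, with no embeddings involved. The lexical-overlap
    scoring here is a placeholder, not the model's actual behavior."""
    scores = []
    for query, passage in pairs:
        q, p = tokens(query), tokens(passage)
        scores.append(len(q & p) / max(len(q), 1))
    return scores

pairs = [
    ["what is panda?", "The giant panda is a bear native to China."],
    ["what is panda?", "Paris is the capital of France."],
]
scores = compute_score(pairs)  # one score per pair; higher = more relevant
```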

1. During fine-tuning, the number of hard negatives for each query is `train_group_size - 1`, so a larger `train_group_size` is better.
2. During fine-tuning, all in-batch passages across all GPUs...
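To make the arithmetic concrete, a small sketch of how many negatives each query sees under these two points (assuming in-batch negatives are shared across all devices; the function name is ours, not part of FlagEmbedding):

```python
def negatives_per_query(train_group_size, batch_size_per_gpu, num_gpus):
    """Count the negatives contrasted against one query per training step,
    assuming cross-device in-batch negative sharing is enabled."""
    # Hard negatives mined for this query's own group.
    hard = train_group_size - 1
    # With in-batch negatives, every passage from every other group in the
    # global batch also serves as a negative for this query.
    total_passages = batch_size_per_gpu * num_gpus * train_group_size
    in_batch = total_passages - train_group_size
    return hard + in_batch

# e.g. group size 8, per-GPU batch 4, 2 GPUs:
n = negatives_per_query(train_group_size=8, batch_size_per_gpu=4, num_gpus=2)
```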

Hello, BGE does not currently have a reranker for image-text retrieval.

The dataset above is sourced from the reference paper in [BEIR](https://arxiv.org/abs/2104.08663). The nli dataset is sourced from [sentence-transformers/nli-for-simcse](https://huggingface.co/datasets/sentence-transformers/nli-for-simcse).

You can use `scores = compute_score(pairs, cutoff_layers=[28])` to obtain the teacher scores. `normalize` is not needed, and the choice of `cutoff_layers` depends on your setup.
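A sketch of attaching such teacher scores to a distillation training record (the score values are placeholders for real `compute_score` output, and the `pos_scores`/`neg_scores` field names are assumptions based on the FlagEmbedding fine-tuning examples):

```python
import json

query = "what is panda?"
pos = ["The giant panda is a bear native to China."]
neg = ["Paris is the capital of France."]

# Pairs in the order the teacher would score them: positives first.
pairs = [[query, p] for p in pos + neg]

# Placeholder teacher scores; in practice these come from
# compute_score(pairs, cutoff_layers=[28]) on a layerwise reranker.
teacher_scores = [7.8, -4.2]

record = {
    "query": query,
    "pos": pos,
    "neg": neg,
    "pos_scores": teacher_scores[: len(pos)],  # assumed field name
    "neg_scores": teacher_scores[len(pos):],   # assumed field name
}
line = json.dumps(record)  # one JSONL line of distillation training data
```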

You should set the corresponding parameters to `bge-en-icl`, including the [model parameters](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/evaluation#2-modelargs) and [MTEB parameters](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/evaluation#1-mteb). If you encounter any bugs or errors, please feel free to provide feedback.

You can try to clone the repository and install:

```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
# If you do not need to finetune the models, you can install the package...
```