FlagEmbedding

Retrieval and Retrieval-augmented LLMs

Results 622 FlagEmbedding issues

Hey, does it make sense to deploy [the reranking model](https://huggingface.co/BAAI/bge-reranker-v2-m3) in Triton Inference Server for efficiency? Or are there other recommendations for optimizing reranking inference? Has anybody elaborated on...
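For context, serving a cross-encoder reranker behind Triton usually means exporting it (e.g. to ONNX) and writing a model configuration. Below is a minimal sketch of what such a `config.pbtxt` might look like; the model name, tensor names, shapes, and batching parameters are assumptions for illustration, not taken from the FlagEmbedding repo or an official deployment guide:

```
# config.pbtxt -- hypothetical layout for an ONNX export of bge-reranker-v2-m3
name: "bge_reranker_v2_m3"
backend: "onnxruntime"
max_batch_size: 32

input [
  { name: "input_ids",      data_type: TYPE_INT64, dims: [ -1 ] },
  { name: "attention_mask", data_type: TYPE_INT64, dims: [ -1 ] }
]
output [
  { name: "logits", data_type: TYPE_FP32, dims: [ 1 ] }
]

# Dynamic batching is typically the main efficiency win for rerankers,
# since requests often arrive one query-passage pair at a time.
dynamic_batching {
  max_queue_delay_microseconds: 500
}
```

Whether this beats simply calling `FlagReranker.compute_score` in-process depends on request concurrency and batch sizes in your setup.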

Hi team, I'm trying to fine-tune on my Mac with an M4 chip. The script is:

```shell
torchrun --nproc_per_node 1 \
    -m FlagEmbedding.finetune.embedder.encoder_only.m3 \
    --model_name_or_path /Users/nc/python/FlagNew/models/bge-m3 \
    --train_data /Users/nc/python/FlagNew/examples/finetune/embedder/example_data/sts/sts.jsonl...
```

After [fine-tuning](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune/embedder#2-bge-m3) the bge-m3 model, I found that some configuration files are missing, including: 1. modules.json 2. config_sentence_transformers.json 3. 1_Pooling/config.json. This causes the following problems: 1. No sentence-transformers model found with name xxx. Creating a new one with mean pooling. (c.f. [issues#1238](https://github.com/FlagOpen/FlagEmbedding/issues/1238)) 2. When using...

I want to install FlagEmbedding[finetune] on a CPU-only machine with the command: pip install -U FlagEmbedding[finetune]. The installation automatically pulls in flash_attn, but flash_attn does not seem to support CPU, so it fails with:

```
Collecting flash-attn (from FlagEmbedding[finetune])
  Using cached https://mirrors.aliyun.com/pypi/packages/11/34/9bf60e736ed7bbe15055ac2dab48ec67d9dbd088d2b4ae318fd77190ab4e/flash_attn-2.7.4.post1.tar.gz (6.0 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully....
```

Hello, I would like to know whether the Visualized BGE supports queries and candidates that are both composed of text and images.

Has anyone run into this problem? When running the BAAI/bge-reranker-v2.5-gemma2-lightweight example code:

```python
from FlagEmbedding import LightWeightFlagLLMReranker

reranker = LightWeightFlagLLMReranker('BAAI/bge-reranker-v2.5-gemma2-lightweight', use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'], cutoff_layers=[28], compress_ratio=2, compress_layer=[24, 40])...
```

python -m FlagEmbedding.evaluation.custom --embedder_name_or_path --reranker_name_or_path — I only want to evaluate the reranker, but embedder_name_or_path is a required argument and omitting it raises an error. How can I work around this?

## Original bge-m3

![Image](https://github.com/user-attachments/assets/8627a447-8370-4e89-ba09-f54ed23b2f9f)

## Fine-tuned

![Image](https://github.com/user-attachments/assets/33d4b403-c8c7-4ef4-b6f1-b516f0ff23ea)

Fix typo: normlized -> normalized