FlagEmbedding
Retrieval and Retrieval-augmented LLMs
When I use the demo of BGE-VL, without any change:

```python
import torch
from transformers import AutoModel
from PIL import Image

MODEL_NAME = "./BGE-VL-MLLM-S2"

model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
model.eval()
model.cuda()

with torch.no_grad():
    ...
```
Is there currently an implementation for converting bge-vl-base and bge-vl-large to ONNX, along with the corresponding inference code?
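No official ONNX path is mentioned in the thread. For reference, a minimal export sketch for the text branch might look like the following; the `BAAI/BGE-VL-base` checkpoint id and the `encode_text` method are assumptions about the model's remote code, so verify both against the actual model before relying on this. The image branch would be exported the same way with pixel-value inputs.

```python
# Minimal ONNX export sketch, assuming the checkpoint loads via
# transformers with trust_remote_code. `encode_text` is a HYPOTHETICAL
# method name standing in for whatever the remote code actually exposes.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "BAAI/BGE-VL-base"  # assumed checkpoint id
model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
dummy = tokenizer(["a photo of a cat"], return_tensors="pt")

class TextBranch(torch.nn.Module):
    """Wraps the text side so the export has a plain tensor signature."""
    def __init__(self, m):
        super().__init__()
        self.m = m

    def forward(self, input_ids, attention_mask):
        return self.m.encode_text(input_ids, attention_mask)  # hypothetical method

torch.onnx.export(
    TextBranch(model),
    (dummy["input_ids"], dummy["attention_mask"]),
    "bge_vl_text.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["text_embedding"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=17,
)
```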
Are there plans to support training for BGE-VL?
Hello: how can the response include a `model` field, the way OpenAI's API does (as shown in the figure below)? The current interface does not return this field.
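The referenced screenshot isn't included here; for context, the OpenAI embeddings endpoint returns a top-level `model` field in its response, roughly like this (values are illustrative):

```json
{
  "object": "list",
  "data": [
    { "object": "embedding", "index": 0, "embedding": [0.0123, -0.0456] }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 8, "total_tokens": 8 }
}
```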
Hi, when I evaluate the public bge-reranker-large model with eval_cross_encoder.py, I get the error below. I'm using the latest code from https://github.com/FlagOpen/FlagEmbedding/blob/master/research/C_MTEB/eval_cross_encoder.py, with MTEB version 1.15.0. How can I resolve this?

```
ERROR:mteb.evaluation.MTEB:Error while evaluating MIRACLReranking: 'BaseReranker' object has no attribute 'encode'
Traceback (most recent call last):
  File ".../FlagEmbedding/research/C_MTEB/eval_cross_encoder.py", line 27, in <module>
    evaluation.run(model, output_folder=f"reranker_results/{save_name}")
  ...
  File ".../miniforge3/envs/myenv/lib/python3.10/site-packages/mteb/evaluation/evaluators/model_encode.py", line ...
```
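The error comes from mteb calling `.encode()` on a cross-encoder that doesn't define it. One possible workaround, sketched below, is to wrap FlagEmbedding's documented `FlagReranker.compute_score` behind the `predict` interface that sentence-transformers-style cross-encoders expose; whether mteb 1.15.0 actually dispatches reranking to `predict` is an assumption to verify against your installed version.

```python
# Cross-encoder adapter sketch, assuming the installed mteb version
# dispatches reranking to a sentence-transformers-style .predict(pairs)
# when present (an ASSUMPTION to verify for mteb 1.15.0).
from FlagEmbedding import FlagReranker

class RerankerAdapter:
    def __init__(self, model_name: str):
        # compute_score is the documented FlagEmbedding reranker entry point
        self.reranker = FlagReranker(model_name, use_fp16=True)

    def predict(self, sentence_pairs, **kwargs):
        # sentence_pairs: list of [query, passage] pairs -> list of scores
        return self.reranker.compute_score(sentence_pairs)

model = RerankerAdapter("BAAI/bge-reranker-large")
```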
fix broken link
My device is incompatible with some torch.nn operators. Can an ONNX model support encoding both text and images at the same time?
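Once a model is exported, inference can run through onnxruntime without the unsupported torch.nn operators. A single ONNX graph has a fixed input signature, so text and image encoding typically live in two exported graphs (or one graph carrying both input sets). A sketch of the text side, reusing the hypothetical `bge_vl_text.onnx` from the export example earlier in this thread:

```python
# Text-side inference sketch with onnxruntime; "bge_vl_text.onnx" and its
# input names come from the hypothetical export sketch above.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("bge_vl_text.onnx", providers=["CPUExecutionProvider"])

input_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)  # toy token ids
attention_mask = np.ones_like(input_ids)

(text_emb,) = sess.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})
print(text_emb.shape)
```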
I'm using an A800 GPU. With passage_max_len=4096, per_device_train_batch_size can only be set to 2; using ds_stage0.json from examples, GPU memory usage is 62 GB. With my own ds_stage3.json and per_device_train_batch_size=2, usage is 56 GB. The memory footprint drops somewhat, but it's still not enough to raise passage_max_len to 8192. My ds_stage3.json is configured as follows:

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    "offload_param": {
      "device": "cpu",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    ...
```
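For comparison, a commonly used fuller ZeRO-3 layout is sketched below. This is an illustration built from standard DeepSpeed options ("auto" values defer to the HF Trainer), not a reconstruction of the truncated file above. At passage_max_len=8192, activations usually dominate memory, so enabling `gradient_checkpointing` in the training arguments tends to matter more than further offloading.

```json
{
  "bf16": { "enabled": "auto" },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```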