FlagEmbedding
Retrieval and Retrieval-augmented LLMs
https://github.com/FlagOpen/FlagEmbedding/issues/703
PS D:\BGE> pip install -U FlagEmbedding[finetune]
Requirement already satisfied: FlagEmbedding[finetune] in d:\anaconda\envs\instructor\lib\site-packages (1.3.4)
Requirement already satisfied: torch>=1.6.0 in d:\anaconda\envs\instructor\lib\site-packages (from FlagEmbedding[finetune]) (2.5.0)
Requirement already satisfied: transformers>=4.44.2 in d:\anaconda\envs\instructor\lib\site-packages (from FlagEmbedding[finetune])...
Hi! Could you provide the llama2-7b and mistral-7b pretrained models from the ICLR'25 Activation Beacon work? At the moment only the qwen2-7b pretrained model can be found on Peitian's homepage. Many thanks!!
Hi, I was looking at the BAAI/bge-multilingual-gemma2 model. When I run inference on a GPU via transformers, I find it very slow: it takes several seconds to encode a single sentence. Is...
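One common cause of this is encoding sentences one at a time in full precision. Below is a minimal sketch of a faster setup using the FlagLLMModel wrapper that FlagEmbedding 1.3.x provides for decoder-based embedders; the parameter names are taken from the project README and should be double-checked against your installed version:

```python
# Sketch, not a definitive fix: batch the inputs and load the model in fp16.
from FlagEmbedding import FlagLLMModel

model = FlagLLMModel(
    "BAAI/bge-multilingual-gemma2",
    use_fp16=True,  # half precision cuts memory use and speeds up GPU inference
)

# Encoding a batch amortizes per-call overhead versus one sentence at a time.
sentences = ["first example sentence", "second example sentence"]
embeddings = model.encode(sentences, batch_size=32)
print(embeddings.shape)
```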
Hello, I have a question; could you help me? In https://github.com/FlagOpen/FlagEmbedding/blob/master/research/visual_bge/visual_bge/modeling.py#L270, this line generates the attention mask by concatenating the image mask with the text mask. In https://github.com/FlagOpen/FlagEmbedding/blob/master/research/visual_bge/visual_bge/modeling.py#L267, this line generates the embedding by concatenating cls_token, ...
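For readers following along, here is a small self-contained illustration of the concatenation pattern the question refers to; the shapes and variable names are hypothetical, not the actual visual_bge code:

```python
# Hypothetical sketch of the pattern in modeling.py (NOT the real code):
# the embedding concatenates [cls, image tokens, text tokens], while the
# attention mask concatenates an (all-ones) image-side mask with the text mask.
import torch

B, N_img, N_txt, D = 2, 4, 6, 8                        # made-up sizes
cls_token = torch.randn(B, 1, D)
img_emb = torch.randn(B, N_img, D)
txt_emb = torch.randn(B, N_txt, D)
txt_mask = torch.ones(B, N_txt, dtype=torch.long)      # zeros where text is padded

emb = torch.cat([cls_token, img_emb, txt_emb], dim=1)  # [B, 1 + N_img + N_txt, D]

# Image tokens (and the cls token) are never padded, so their mask is all ones;
# the +1 accounts for the prepended cls token so the two lengths line up.
img_side_mask = torch.ones(B, N_img + 1, dtype=torch.long)
mask = torch.cat([img_side_mask, txt_mask], dim=1)     # [B, 1 + N_img + N_txt]
assert mask.shape[1] == emb.shape[1]
```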
Hello authors,

torchrun --nproc_per_node 4 \
  -m FlagEmbedding.finetune.reranker.encoder_only.base \
  --model_name_or_path /home/gpu1/SAIT_HQ_XIAN/share/SOLVE/CodeLLM/bge-reranker-large \
  --cache_dir $base_path/$version/cache/model \
  --train_data /home/gpu1/SAIT_HQ_XIAN/share/SOLVE/yupeng_RAG/SCODE-R/BGE_M3/datas/mined_Neg_20_200.jsonl \
  --cache_path $base_path/$version/cache/data \
  --train_group_size 8 \
  --query_max_len 512 \
  --passage_max_len 512 \
  --pad_to_multiple_of...
Excellent work! When will the code and models be open-sourced?
Pair scores across checkpoints (word pairs: 煤炉 "coal stove" / 春游 "spring outing", 红领巾 "red scarf" / 大肚子 "big belly", 红领巾 "red scarf" / 少先队 "Young Pioneers"):

                                  煤炉/春游   红领巾/大肚子   红领巾/少先队
/ssd2/bge-m3 (base)               0.3848     0.417          0.501
epoch 1: /ssd2/checkpoint-6133    0.503      0.537          0.6226
epoch 2: /ssd2/checkpoint-12264   0.4958     0.5376         0.617...
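For context, a hedged sketch of how such pair scores are typically produced with BGE-M3's dense embeddings, using the local checkpoint paths above; the BGEM3FlagModel usage follows the FlagEmbedding README, where dense vectors are normalized so a dot product gives cosine similarity:

```python
from FlagEmbedding import BGEM3FlagModel

# Swap in /ssd2/checkpoint-6133 or /ssd2/checkpoint-12264 to score a fine-tuned epoch.
model = BGEM3FlagModel("/ssd2/bge-m3", use_fp16=True)

pairs = [("煤炉", "春游"), ("红领巾", "大肚子"), ("红领巾", "少先队")]
texts = [t for pair in pairs for t in pair]
dense = model.encode(texts)["dense_vecs"]    # normalized dense vectors

for i, (a, b) in enumerate(pairs):
    score = dense[2 * i] @ dense[2 * i + 1]  # cosine similarity via dot product
    print(a, b, round(float(score), 4))
```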
Hi, what are the model inference options for using the BGE-M3 model in a production scenario? I am using the model for hybrid retrieval (both dense and sparse embeddings) as mentioned in...
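A minimal sketch of the hybrid encoding path described in the FlagEmbedding README; serving infrastructure is a separate decision, and this only shows how the dense and sparse outputs are obtained:

```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)  # fp16 for faster GPU inference

out = model.encode(
    ["What are the inference options for BGE-M3 in production?"],
    return_dense=True,   # dense vectors, e.g. for an ANN / vector index
    return_sparse=True,  # per-token lexical weights for a sparse / inverted index
)
dense_vec = out["dense_vecs"][0]        # numpy array to store in a vector database
sparse_vec = out["lexical_weights"][0]  # dict mapping token id -> weight
```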