Visualized_BGE No module named 'eva_clip'
I'm using BAAI/bge-m3 and BAAI/bge-visualized/Visualized_m3.pth. I downloaded the models manually and pass local paths, but I get the error below:
File "bge_visualized.py", line 3, in <module>
from FlagEmbedding.visual.modeling import Visualized_BGE
File "flag_embed/FlagEmbedding/visual/__init__.py", line 1, in <module>
from .modeling import Visualized_BGE
File "flag_embed/FlagEmbedding/visual/modeling.py", line 12, in <module>
from eva_clip import create_eva_vision_and_transforms
ModuleNotFoundError: No module named 'eva_clip'
Furthermore, when I pass a local model path, the assertion in flag_embed\FlagEmbedding\visual\modeling.py line 37 also fails:
assert model_name_bge in ["BAAI/bge-base-en-v1.5", "BAAI/bge-m3"]
How to solve this?
(1) For the first question, may I ask what your working directory is? After cloning the FlagEmbedding repository and navigating into it with cd ./FlagEmbedding/FlagEmbedding/visual, you should be able to import eva_clip normally. Alternatively, you can explicitly add the visual directory to the system path with:
import sys
sys.path.append("/your/path/to/visual_dir")
(2) For the second question, the path to the local model weight file should be passed via the model_weight argument.
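For illustration, a minimal sketch of such a call, assuming the weight file was saved locally as Visualized_m3.pth (the path below is a placeholder):
import torch
from FlagEmbedding.visual.modeling import Visualized_BGE

# model_name_bge stays one of the two supported names; model_weight points at the local .pth weight file.
model = Visualized_BGE(model_name_bge="BAAI/bge-m3",
                       model_weight="/your/path/to/Visualized_m3.pth")
model.eval()

with torch.no_grad():
    emb = model.encode(text="a short test sentence")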
I used "pip install -e ." and I got these problems.
2nd problem will be definitely happen if I give local path.
Actually, I've tried to modify source code to solve this, but I found that there are more than one path issues. I've modified several points, and I keep getting errors. It's not convenient to synchronize the code if I modify these. Finally, I got error about clip model error which is not path issue.
I hope this could be solved by just using "pip install -e ." to install everything.
Regarding the first issue:
Currently, we recommend using and loading this model from the FlagEmbedding/visual directory of the repository. eva_clip should then import normally, and typically there is no need to add any extra directory to the path.
We will soon update the code to make it more convenient, so that you can use the model anywhere with ease.
As for the second issue, please note that there are two arguments. model_name_bge is the model's name and must be one of ["BAAI/bge-base-en-v1.5", "BAAI/bge-m3"]; you cannot alter it. The other argument, model_weight, is used to provide your local model weight file.
If you have any more questions or suggestions, please feel free to tell us.
Thanks for your reply. I still have some problems, as below:
- According to your suggestions, I put my code under FlagEmbedding/visual, and I got the error below:
Traceback (most recent call last):
File "/home/flag_embed/FlagEmbedding/visual/bge_visualized.py", line 13, in <module>
candi_emb_1 = model.encode(text="The Mid-Hudson Bridge, spanning the Hudson River between Poughkeepsie and Highland.", image="./imgs/wiki_candi_1.jpg")
File "/home/flag_embed/FlagEmbedding/visual/modeling.py", line 101, in encode
return self.encode_mm(image, text)
File "/home/flag_embed/FlagEmbedding/visual/modeling.py", line 205, in encode_mm
img_token_emb = self.img_token_embedding(images) #[B, Patch_num, C]
File "/home/flag_embed/FlagEmbedding/visual/modeling.py", line 284, in img_token_embedding
img_token_emb = self.model_visual.encode_image(images, normalize=False) # return_all_features=True, [B, Patch_num, C]
File "/home/flag_embed/FlagEmbedding/visual/eva_clip/model.py", line 309, in encode_image
features = self.visual(image)
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/flag_embed/FlagEmbedding/visual/eva_clip/eva_vit_model.py", line 529, in forward
return self.forward_features(x, return_all_features)
File "/home/flag_embed/FlagEmbedding/visual/eva_clip/eva_vit_model.py", line 517, in forward_features
x = blk(x, rel_pos_bias=rel_pos_bias)
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/flag_embed/FlagEmbedding/visual/eva_clip/eva_vit_model.py", line 293, in forward
x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias, attn_mask=attn_mask))
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/flag_embed/FlagEmbedding/visual/eva_clip/eva_vit_model.py", line 208, in forward
x = xops.memory_efficient_attention(
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
return _memory_efficient_attention(
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
return _run_priority_list(
File "/home/miniconda3/envs/train_py310/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 257, 16, 64) (torch.float32)
key : shape=(1, 257, 16, 64) (torch.float32)
value : shape=(1, 257, 16, 64) (torch.float32)
attn_bias : <class 'NoneType'>
p : 0.0
`decoderF` is not supported because:
device=cpu (supported: {'cuda'})
attn_bias type is <class 'NoneType'>
`[email protected]` is not supported because:
device=cpu (supported: {'cuda'})
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
`tritonflashattF` is not supported because:
device=cpu (supported: {'cuda'})
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
operator wasn't built - see `python -m xformers.info` for more info
triton is not available
Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
device=cpu (supported: {'cuda'})
has custom scale
unsupported embed per head: 64
- I followed the README.md and did the steps below.
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
After the install, I intended to ignore the source code entirely; however, it still gives the "eva_clip" could not be found error. How can I solve this? I just want to install the package directly for convenience, rather than having to run inference from inside the source tree (see the diagnostic sketch after this list).
- FlagEmbedding/visual/modeling.py does have a bug in the assert on the name. When I run the code below, I get an assertion error.
####### Use Visualized BGE doing multi-modal knowledge retrieval
import torch
from FlagEmbedding.visual.modeling import Visualized_BGE

bge_m3_mdl_path = '/data/nvme0/BAAI/bge-m3'
visual_mdl_path = '/data/nvme0/BAAI/bge-visualized'
model = Visualized_BGE(model_name_bge = bge_m3_mdl_path, model_weight=visual_mdl_path)

with torch.no_grad():
    query_emb = model.encode(text="Are there sidewalks on both sides of the Mid-Hudson Bridge?")
    candi_emb_1 = model.encode(text="The Mid-Hudson Bridge, spanning the Hudson River between Poughkeepsie and Highland.", image="./imgs/wiki_candi_1.jpg")
    candi_emb_2 = model.encode(text="Golden_Gate_Bridge", image="./imgs/wiki_candi_2.jpg")
    candi_emb_3 = model.encode(text="The Mid-Hudson Bridge was designated as a New York State Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1983. The bridge was renamed the \"Franklin Delano Roosevelt Mid-Hudson Bridge\" in 1994.")

sim_1 = query_emb @ candi_emb_1.T
sim_2 = query_emb @ candi_emb_2.T
sim_3 = query_emb @ candi_emb_3.T
print(sim_1, sim_2, sim_3) # tensor([[0.6932]]) tensor([[0.4441]]) tensor([[0.6415]])
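For the second point above, a purely diagnostic sketch (not code from the repository) to check which copy of FlagEmbedding the import actually resolves to, and whether its visual/eva_clip sources are present:
import os
import FlagEmbedding

# Show where the imported package lives and whether the eva_clip sources are visible next to it.
pkg_dir = os.path.dirname(FlagEmbedding.__file__)
print("FlagEmbedding imported from:", pkg_dir)
print("visual/eva_clip present:", os.path.isdir(os.path.join(pkg_dir, "visual", "eva_clip")))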
Hi,
(1) Please uninstall xformers and apex. They are not essential for inference and can often cause issues.
(2) I have updated the code. After installing FlagEmbedding, you can now use from FlagEmbedding.visual.modeling import Visualized_BGE anywhere in your code.
(3) model_name_bge is not a file path; it is simply a name used to indicate which model to load, and it must be one of ["BAAI/bge-base-en-v1.5", "BAAI/bge-m3"]. In your code, bge_m3_mdl_path should be set to "BAAI/bge-m3". I hope this clarification helps to resolve your issue.
If you have any more questions, please feel free to tell me.
Thanks for your reply.
- For question 1, I uninstalled xformers and this issue was solved.
- For question 3, I commented out the assertion and the if branch in the source code to work around it. If I set it to "BAAI/bge-m3", it will automatically download the model from Hugging Face; however, I have trouble connecting to it. If I use a local path, I hit the assertion problem. Thus, there should be one more variable for setting bge_m3_mdl_path. The problem is that I can't set a local path for bge_m3_mdl_path; I suggest fixing this bug.
For question 3, I modified the code under FlagEmbedding/visual/modeling.py as below, and it works.
class Visualized_BGE(nn.Module):
    def __init__(self,
                 model_name_bge: str = None,
                 model_weight = None,  # "/path/to/your/weight/file/"
                 normlized: bool = True,
                 sentence_pooling_method: str = 'cls',
                 negatives_cross_device: bool = False,
                 temperature: float = 0.02,  # 1.0
                 ):
        super().__init__()
        # assert model_name_bge in ["BAAI/bge-base-en-v1.5", "BAAI/bge-m3"]
        assert model_weight is not None

        # Accept any model_name_bge (e.g. a local path) that contains one of the supported model names.
        name_flag = False
        for name in ["bge-base-en-v1.5", "bge-m3"]:
            if name in model_name_bge:
                name_flag = True
        if not name_flag:
            raise ValueError(f"model_name_bge should contain 'bge-base-en-v1.5' or 'bge-m3', got {model_name_bge}")
        self.model_name_bge = model_name_bge

        # if model_name_bge == 'BAAI/bge-base-en-v1.5':
        #     model_name_eva = "EVA02-CLIP-B-16"
        #     self.hidden_dim = 768
        #     self.depth = 12
        # elif model_name_bge == 'BAAI/bge-m3':
        #     model_name_eva = "EVA02-CLIP-L-14"
        #     self.hidden_dim = 1024
        #     self.depth = 24
        if 'bge-base-en-v1.5' in model_name_bge:
            model_name_eva = "EVA02-CLIP-B-16"
            self.hidden_dim = 768
            self.depth = 12
        elif 'bge-m3' in model_name_bge:
            model_name_eva = "EVA02-CLIP-L-14"
            self.hidden_dim = 1024
            self.depth = 24
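With this change, a call along these lines should pass the relaxed name check with local paths; per the maintainer's note, model_weight should point at the .pth weight file itself rather than the directory (the exact filename below is an assumption):
from FlagEmbedding.visual.modeling import Visualized_BGE

# The local bge-m3 directory name still contains "bge-m3", so it passes the relaxed check above.
model = Visualized_BGE(model_name_bge='/data/nvme0/BAAI/bge-m3',
                       model_weight='/data/nvme0/BAAI/bge-visualized/Visualized_m3.pth')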
I had the same problem, and I found that /opt/anaconda3/envs/llm/lib/python3.10/site-packages/FlagEmbedding/visual/eva_clip/ did not contain the /FlagEmbedding/visual/eva_clip/model_configs directory.
I copied it into the site-packages directory, and it runs successfully!
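For reference, a minimal sketch of that copy step, assuming the repository was cloned to ./FlagEmbedding and the package is importable from the active environment (adjust both paths to your setup):
import os
import shutil
import FlagEmbedding

# Copy the model_configs directory from the cloned repo into the installed package location.
src = "./FlagEmbedding/FlagEmbedding/visual/eva_clip/model_configs"
dst = os.path.join(os.path.dirname(FlagEmbedding.__file__), "visual", "eva_clip", "model_configs")
shutil.copytree(src, dst, dirs_exist_ok=True)
print("copied model_configs to:", dst)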