Conda Environment: ModuleNotFoundError: No module named 'flash_attn'
When I run a test, there is an error, but I can't find the reason. Test command:
torchpack dist-run -np 1 python tools/test.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml pretrained/swint-nuimages-pretrained.pth --eval bbox --out box.pkl
This is the error I got:
Traceback (most recent call last):
  File "tools/test.py", line 16, in <module>
    from mmdet3d.models import build_model
  File "/home/lwx/bev/bevfusion/mmdet3d/models/__init__.py", line 1, in <module>
    from .backbones import *
  File "/home/lwx/bev/bevfusion/mmdet3d/models/backbones/__init__.py", line 9, in <module>
    from .radar_encoder import *
  File "/home/lwx/bev/bevfusion/mmdet3d/models/backbones/radar_encoder.py", line 18, in <module>
    from flash_attn.flash_attention import FlashMHA
ModuleNotFoundError: No module named 'flash_attn'
Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.
mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[64587,1],0] Exit code: 1
I tried to install the flash_attn module, but it failed, because the CUDA and PyTorch versions that flash-attn requires are newer than the versions bevfusion requires.
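For reference, a minimal diagnostic sketch to print the PyTorch and CUDA versions the environment is actually running, which is what a flash-attn release has to match:

```python
# Minimal diagnostic sketch: show the versions that matter when
# picking a compatible flash-attn release.
import torch

print(torch.__version__)           # e.g. "1.10.0+cu111" in this issue
print(torch.version.cuda)          # CUDA toolkit PyTorch was built against, e.g. "11.1"
print(torch.cuda.is_available())   # confirms the GPU runtime is usable at all
```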
Easy fix: pip install flash-attn==0.2.0
Thank you for your reply, but I have tried it before and it doesn't work. It may still be due to the version mismatch I mentioned above:
ERROR: Command errored out with exit status 1:
  command: /home/lwx/anaconda3/envs/mitbevfusion/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-2fqtkd28/flash-attn_be277a22af024b49bf1faa696bb97f10/setup.py'"'"'; __file__='"'"'/tmp/pip-install-2fqtkd28/flash-attn_be277a22af024b49bf1faa696bb97f10/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-qmma7sl8
  cwd: /tmp/pip-install-2fqtkd28/flash-attn_be277a22af024b49bf1faa696bb97f10/
  Complete output (20 lines):
  Traceback (most recent call last):
    File "
  torch.__version__ = 1.10.0+cu111
Install flash-attn and then re-compile bevfusion:
python setup.py develop
Just comment out the line from flash_attn.flash_attention import FlashMHA in radar_encoder.py, and the line from .radar_encoder import * in backbones/__init__.py, because they are only used for radar. Hope this can help you.
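As a concrete sketch of that suggestion (the file paths come from the traceback above; the exact line numbers may differ in your checkout):

```python
# mmdet3d/models/backbones/__init__.py
# comment out the radar re-export so the flash_attn import is never reached:
# from .radar_encoder import *

# mmdet3d/models/backbones/radar_encoder.py
# and comment out the flash_attn import itself:
# from flash_attn.flash_attention import FlashMHA
```

This should be safe for camera-only configs like the one in the original question, since nothing on that path constructs the radar encoder.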
I ran into this problem too, and your reply (pip install flash-attn==0.2.0) helped me. Thanks!
Try this. In /bevfusion/mmdet3d/models/backbones/radar_encoder.py, change
from flash_attn.flash_attention import FlashMHA
to
from flash_attn.modules.mha import MHA
I think this error is caused by the API difference between flash-attention 1.0.9 and 2.0.0.
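A hedged sketch of a version-tolerant import (the alias RadarMHA is made up for illustration; note that MHA in flash-attn 2.x is not a drop-in replacement for the 1.x FlashMHA, so call sites may still need adjusting):

```python
# Version-tolerant import sketch for radar_encoder.py; RadarMHA is a
# hypothetical alias, not a name from the bevfusion source.
try:
    # flash-attn 1.x API (e.g. 0.2.0 / 1.0.9)
    from flash_attn.flash_attention import FlashMHA as RadarMHA
except ImportError:
    # flash-attn 2.x moved multi-head attention here; its constructor
    # arguments differ from FlashMHA's, so the code that builds the
    # attention layer must be adapted to the new signature as well.
    from flash_attn.modules.mha import MHA as RadarMHA
```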