mamba
selective_scan_cuda
Hi, author, where can I find the module named 'selective_scan_cuda'?
Did you follow the installation instruction in the README?
It's here and should be installed if you do pip install mamba-ssm: https://github.com/state-spaces/mamba/tree/main/csrc/selective_scan
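As a quick sanity check (a minimal sketch; the exact .so filename varies with your Python version and platform), you can verify that the compiled extension actually got installed:
python -c "import selective_scan_cuda; print(selective_scan_cuda.__file__)"   # prints the path of the installed extension if the wheel install or build succeeded
pip show mamba-ssm   # confirms the package itself is installed and which version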
Sorry, but when I followed the installation instructions, I failed to install mamba-ssm and ran into the following issue:
I tried both pip install and conda install, and I also tried pip install . from this repository; none of them worked.
Can you try downloading that wheel URL manually to check if the networking works?
I ran into the same problem.
This is because the CUDA version is too low; mamba-ssm requires CUDA 11.6 or above.
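A quick way to check which CUDA toolkit your setup would use (a generic check, not specific to this repo):
nvcc --version   # the toolkit used to compile the extension; it should report 11.6 or newer
python -c "import torch; print(torch.version.cuda)"   # the CUDA version your PyTorch build was compiled against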
I solved the issue with the following steps:
- I downloaded the mamba_ssm .whl file from https://github.com/state-spaces/mamba/releases, then pip installed that whl file in my working environment (see the sketch after this list).
- I updated my torch version to 2.0.0 and my torchaudio version to 2.0.1, and I used cudatoolkit 11.8.
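A minimal sketch of those two steps; the wheel filename below is a placeholder, since the right wheel depends on your Python, torch, and CUDA versions:
# versions from the comment above; pick the torch build that matches cudatoolkit 11.8
pip install torch==2.0.0 torchaudio==2.0.1
# placeholder filename -- use the wheel from https://github.com/state-spaces/mamba/releases that matches your Python / torch / CUDA
pip install ./mamba_ssm-<version+cuda+torch>-cp310-cp310-linux_x86_64.whl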
I have done pip install mamba-ssm successfully, but I still have this problem.
Hello author, how should I solve this error? I downloaded the 1.1.4 version of the whl package and installed it.
I have the same problem. Is there any solution?
I fixed the problem. You can't use the latest whl file; just install a previous whl file instead, such as mamba_ssm-1.1.3.post1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl.
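For example, with the wheel filename from that comment (assuming Python 3.10, torch 2.1, and CUDA 11.8; download the wheel from the GitHub releases page first):
pip install mamba_ssm-1.1.3.post1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl   # the previous wheel named above, not the latest one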
FWIW, I also had the same problem and was able to solve it with pip install mamba-ssm. Earlier, I had tried pip install . instead.
Hi, I met the same issue. Have you solved it yet? Mine is here:
Have you solved this problem? I have the same question; can you tell me how to solve it, please?
I have done pip install mamba-ssm successfully, but the selective_scan download is not successful.
I have the same question. Have you managed to solve this?
I have the same question. I have run pip install mamba_ssm but still hit this:
ImportError: /home/yida/miniconda3/envs/mambair/lib/python3.9/site-packages/selective_scan_cuda.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
I have the same question. Have you fixed this problem?
I have fixed the environment again and now it's okay. You can take a look at the following article. I'm using Vision Mamba. You can try installing it using a whl file; you may not need to reinstall the environment. Pay attention to matching the versions of PyTorch, Python, etc.
wget https://github.com/Dao-AILab/causal-conv1d/releases/download/v1.1.3.post1/causal_conv1d-1.1.3.post1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
wget https://github.com/state-spaces/mamba/releases/download/v1.1.1/mamba_ssm-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install causal_conv1d-1.1.3.post1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install mamba_ssm-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
https://blog.csdn.net/weixin_45667052/article/details/136311600
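Before installing those wheels, it may also help to confirm that your environment matches the cu118 / torch2.1 / cp310 tags in the filenames (a minimal check, nothing specific to this repo):
python -c "import torch; print(torch.__version__, torch.version.cuda)"   # expect something like 2.1.x and 11.8 for these wheels
python --version   # expect Python 3.10 for the cp310 wheels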
Thanks very much, I solved this problem thanks to your answer. It works when following that link. The key step is 'cp -rf mamba-1p1p1/mamba_ssm /home/yida/miniconda3/envs/mamba/lib/python3.10/site-packages'.
Hi,
I figured out the problem with import mamba_ssm raising an ImportError:
It was caused by using the newest torch version, torch.__version__ == 2.3.
I tested with CUDA versions 12.2 and 11.8; Python 3.9 and 3.10; and torch 2.1, 2.2, and 2.3. Only torch 2.3 fails to import mamba_ssm; torch 2.1 and 2.2 worked well.
Pictures are shown below.
Hope you can find this useful! :)
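To reproduce that check (assuming mamba_ssm exposes __version__, which recent releases do), something like this shows which torch you are on before importing:
python -c "import torch; print(torch.__version__)"   # per the comment above, 2.1.x and 2.2.x imported fine; 2.3 did not at the time
python -c "import mamba_ssm; print(mamba_ssm.__version__)"   # succeeds once the installed wheel matches your torch version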
v1.2.1 now includes wheels for pytorch 2.3 so it should hopefully fix this issue
Thank you.
@tridao the problem exists with v2.0.3 for pytorch 2.3.
Hello, I have successfully installed the versions of causal-conv1d and mamba-ssm above, but I still get an ImportError: xxxx causal_conv1d_cuda.cpython-xxx-linux-gnu.so: undefined symbol. After copying the corresponding version's source code into site-packages, I still get the same error. I'm not sure whether my steps are wrong; for example, I downloaded the source mamba-1.1.1 and ran cp -rf mamba-1.1.1/mamba_ssm ./***/site-packages. Could you take a moment to clarify? Thanks very much.
I did the same operation for both causal-conv1d and mamba-ssm.