Error installing with "pip install mamba-ssm"
Hi All,
I am trying to install Mamba using "pip install mamba-ssm" and am getting the following error. Could you please let me know how to resolve the issue?
Collecting mamba-ssm
Using cached mamba_ssm-2.2.2.tar.gz (85 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [13 lines of output]
/tmp/pip-install-dgoiwa87/mamba-ssm_8a9b8515a5cc47a080d41658836a2fc6/setup.py:119: UserWarning: mamba_ssm was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
warnings.warn(
Traceback (most recent call last):
File "
Best, Achintya
Hi Achintya, I am facing the same issue. Did you get any resolutions?
I resolved this by installing mamba-ssm while having access to a GPU.
Hi, I also encountered the same issue, any idea for how to resolve this?
I resolved this by installing mamba-ssm while having access to a GPU.
Hi Vivien. Could you please elaborate on "having access to a GPU"? I'm trying to install this on a 4090 and an A6000, but it didn't work.
Well, I was on a machine with a GPU, but I had not "leased" it, so torch was not seeing any GPU (torch.cuda.is_available() was returning False). Once I leased a GPU (torch.cuda.is_available() became True), the issue went away. NB: in my case the machine was a shared server, hence my need to request a GPU.
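If it helps, here is a small sketch (just standard torch plus the shell's PATH, nothing specific to mamba-ssm) to check both conditions before attempting the install:

import shutil

import torch

# Per the discussion above, both of these were True on machines where
# "pip install mamba-ssm" succeeded: torch could see a CUDA device and
# the nvcc compiler was reachable.
print("torch sees a GPU:", torch.cuda.is_available())
print("nvcc on PATH:", shutil.which("nvcc") is not None)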
I also ran into exactly the same issue, so I had a look at the source code of setup.py.
The error is caused by the variable bare_metal_version not being defined by the time line 188 of setup.py is reached.
The variable is assigned on line 168: _, bare_metal_version = get_cuda_bare_metal_version(CUDA_HOME). But that line is only executed if CUDA_HOME is not None (line 167), and CUDA_HOME is defined and non-null only if PyTorch was installed with a GPU present on the machine, as sketched below.
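A minimal, simplified sketch of that control flow (the names and line references follow setup.py, but the code below is illustrative, not the real file):

# CUDA_HOME is None when PyTorch cannot find a CUDA toolkit on the machine.
CUDA_HOME = None

if CUDA_HOME is not None:  # the check around line 167
    # bare_metal_version is only ever assigned inside this branch (line 168);
    # get_cuda_bare_metal_version is the helper defined earlier in setup.py.
    _, bare_metal_version = get_cuda_bare_metal_version(CUDA_HOME)

# Around line 188 the variable is read unconditionally, so on a CUDA-less
# machine this raises: NameError: name 'bare_metal_version' is not defined
print(bare_metal_version)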
Is it possible to make this package available also for machines without a GPU? Does it make sense to run this model on a CPU? Of course using the model with a GPU would be much faster, but maybe for playing around with small datasets, a CPU would be sufficient.
I'm also experiencing a similar issue.
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-018cntcf/mamba-ssm_a84010880cdd441e899b800a85d549f5/setup.py", line 20, in <module>
from torch.utils.cpp_extension import (
ImportError: cannot import name 'HIP_HOME' from 'torch.utils.cpp_extension' (/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py)
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
I am trying to build a Docker image on a server, i.e., I do not have the permissions to give the build GPU access. But I can see that torch.cuda.is_available() is also False.
Has someone found a workaround?
@rozariwang See if you can switch your Docker image to one with nvcc installed by default, like one from https://hub.docker.com/r/pytorch/pytorch. I just ran into a similar issue and noticed that this is suggested in the pip warning:
UserWarning: mamba_ssm was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
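For example, a minimal Dockerfile sketch (the exact tag below is my assumption; check the Docker Hub tag list for a devel image that matches your torch and CUDA versions):

# Tags containing "devel" ship nvcc, which the mamba-ssm build needs
FROM pytorch/pytorch:2.2.2-cuda12.1-cudnn8-devel
RUN pip install mamba-ssm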
@OFSkean Thanks a lot! Yes, you're right! I switched the base image, and it is working now :D
Hi, for those who are still interested: I installed the CUDA toolkit with conda install cuda-toolkit==12.1 -c nvidia, and after that pip install mamba-ssm works just fine. Regards
I have a GPU on my device (a local machine with a 4060), but I still get the same error - does anybody know why? My torch version is 2.6.0+cu124, and nvcc is available:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:28:36_Pacific_Standard_Time_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
Have you tried python -m pip install mamba-ssm --no-build-isolation?
Have you tried python -m pip install mamba-ssm --no-build-isolation?
Trying this raised a FileNotFoundError in get_cuda_bare_metal_version, even though I have nvcc installed:
PS C:\Users\malice> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Apr__9_19:29:17_Pacific_Daylight_Time_2025
Cuda compilation tools, release 12.9, V12.9.41
Build cuda_12.9.r12.9/compiler.35813241_0
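In case it helps with debugging, a small diagnostic sketch (my assumption is that the build looks for nvcc under the directory torch reports as CUDA_HOME rather than only on the PATH):

import os

from torch.utils.cpp_extension import CUDA_HOME

# The directory torch believes the CUDA toolkit lives in; None means not found.
print("CUDA_HOME:", CUDA_HOME)

if CUDA_HOME is not None:
    # On Windows the compiler binary is bin\nvcc.exe, on Linux it is bin/nvcc.
    nvcc = os.path.join(CUDA_HOME, "bin", "nvcc.exe" if os.name == "nt" else "nvcc")
    print("nvcc found under CUDA_HOME:", os.path.exists(nvcc))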