
error installing "pip install mamba-ssm"

Open achintyasarkar opened this issue 1 year ago • 13 comments

Hi All,

I am trying to install mamba with "pip install mamba-ssm" and I am getting the following error. Could you please let me know how to resolve the issue?

Collecting mamba-ssm
  Using cached mamba_ssm-2.2.2.tar.gz (85 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [13 lines of output]
      /tmp/pip-install-dgoiwa87/mamba-ssm_8a9b8515a5cc47a080d41658836a2fc6/setup.py:119: UserWarning: mamba_ssm was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
        warnings.warn(
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-dgoiwa87/mamba-ssm_8a9b8515a5cc47a080d41658836a2fc6/setup.py", line 189, in <module>
          if bare_metal_version >= Version("11.8"):
      NameError: name 'bare_metal_version' is not defined

Best, Achintya

achintyasarkar avatar Jul 28 '24 12:07 achintyasarkar

Hi Achintya, I am facing the same issue. Did you get any resolutions?

sidharthsahoodev avatar Jul 29 '24 05:07 sidharthsahoodev

I resolved this by installing mamba-ssm while having access to a GPU.

VivienCabannes avatar Jul 29 '24 13:07 VivienCabannes

Hi, I also encountered the same issue. Any idea how to resolve this?

William-HYWu avatar Aug 13 '24 03:08 William-HYWu

I resolved this by installing mamba-ssm while having access to a GPU.

Hi Vivien. Could you please elaborate on "having access to a GPU"? I'm trying to install this on a 4090 and an A6000, but it didn't work.

William-HYWu avatar Aug 13 '24 03:08 William-HYWu

Well, I was on a machine with a GPU, but I had not "leased" it, so torch was not seeing any GPU (torch.cuda.is_available() was returning False). When I leased a GPU (torch.cuda.is_available() became True), the issue went away. NB: in my case I was working on a server, hence my need to request a GPU.
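
If you want to check this up front before running pip install mamba-ssm, a quick diagnostic along these lines (just a sketch using the standard torch helpers, nothing from mamba-ssm itself) tells you whether the build is likely to find CUDA:

import torch
from torch.utils.cpp_extension import CUDA_HOME  # where torch thinks the CUDA toolkit lives

# If is_available() is False or CUDA_HOME is None, the mamba-ssm build will
# most likely fail in the same way as shown above.
print("torch.cuda.is_available():", torch.cuda.is_available())
print("CUDA_HOME:", CUDA_HOME)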

VivienCabannes avatar Aug 14 '24 15:08 VivienCabannes

I'm also running into the exact same issue. I had a look at the source code of setup.py.

The issue is caused by the variable bare_metal_version not being defined by the time line 188 of setup.py is reached. The variable is assigned on line 168: _, bare_metal_version = get_cuda_bare_metal_version(CUDA_HOME). But that line only runs if CUDA_HOME is not None (line 167), and CUDA_HOME is only set when PyTorch can locate a CUDA toolkit on the machine.
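
To make the failure concrete, the control flow is roughly the following (a paraphrased sketch, not the verbatim setup.py; the helper is stubbed here purely for illustration):

# Paraphrased sketch of the setup.py logic described above
# (not the verbatim mamba-ssm source).
import subprocess
from packaging.version import Version
from torch.utils.cpp_extension import CUDA_HOME

def get_cuda_bare_metal_version(cuda_dir):
    # Roughly: run the toolkit's nvcc and parse the "release X.Y" field.
    raw = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
    release = raw.split("release")[1].split(",")[0].strip()
    return raw, Version(release)

if CUDA_HOME is not None:  # the variable is only bound inside this branch
    _, bare_metal_version = get_cuda_bare_metal_version(CUDA_HOME)

# ...later the version is compared unconditionally, so with CUDA_HOME=None
# this raises: NameError: name 'bare_metal_version' is not defined
if bare_metal_version >= Version("11.8"):
    pass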

Would it be possible to make this package available for machines without a GPU as well? Does it even make sense to run this model on a CPU? Using the model with a GPU would of course be much faster, but for playing around with small datasets a CPU might be sufficient.

enricogandini avatar Aug 20 '24 11:08 enricogandini

I'm also experiencing a similar issue:

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [7 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-018cntcf/mamba-ssm_a84010880cdd441e899b800a85d549f5/setup.py", line 20, in <module>
          from torch.utils.cpp_extension import (
      ImportError: cannot import name 'HIP_HOME' from 'torch.utils.cpp_extension' (/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py)
      No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

I am trying to build a Docker image on a server, i.e., I do not have permission to give the build GPU access, and I can see that torch.cuda.is_available() is indeed False.

Has someone found a workaround?

rozariwang avatar Aug 30 '24 16:08 rozariwang

@rozariwang See if you can switch your Docker image to one that has nvcc installed by default, like one of the 'devel' images from https://hub.docker.com/r/pytorch/pytorch. I just ran into a similar issue and noticed that this is suggested in the pip warning:

UserWarning: mamba_ssm was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.

OFSkean avatar Sep 19 '24 15:09 OFSkean

@OFSkean Thanks a lot! Yes, you're right! I switched the base image, and it is working now :D

rozariwang avatar Sep 22 '24 08:09 rozariwang

Hi, for those who are still interested: I installed the CUDA toolkit with conda install cuda-toolkit==12.1 -c nvidia, and after that pip install mamba-ssm works just fine. Regards

emenand avatar May 21 '25 10:05 emenand

I have a GPU on my local machine (a 4060), but I still get the same error. Does anybody know why? My torch version is 2.6.0+cu124, and nvcc is available:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:28:36_Pacific_Standard_Time_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0

GodRishUniverse avatar Jun 24 '25 01:06 GodRishUniverse

Have you tried python -m pip install mamba-ssm --no-build-isolation?

emenand avatar Jun 27 '25 09:06 emenand

Have you tried python -m pip install mamba-ssm --no-build-isolation?

Trying this raised a FileNotFoundError in get_cuda_bare_metal_version, even though I have nvcc installed:

PS C:\Users\malice> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Apr__9_19:29:17_Pacific_Daylight_Time_2025
Cuda compilation tools, release 12.9, V12.9.41
Build cuda_12.9.r12.9/compiler.35813241_0
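
For anyone debugging the same thing, a small diagnostic like this (a sketch that assumes, as the earlier traceback suggests, that the build looks for nvcc under torch's CUDA_HOME rather than on the PATH) shows whether that nvcc actually exists:

import os
from torch.utils.cpp_extension import CUDA_HOME

print("CUDA_HOME:", CUDA_HOME)
if CUDA_HOME is not None:
    nvcc = os.path.join(CUDA_HOME, "bin", "nvcc")
    # On Windows the binary is nvcc.exe, so check both spellings.
    print("nvcc found under CUDA_HOME:", os.path.exists(nvcc) or os.path.exists(nvcc + ".exe"))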

alicemabille avatar Aug 27 '25 10:08 alicemabille