
Building wheel for mamba-ssm (pyproject.toml) did not run successfully.

Open adib3552 opened this issue 7 months ago • 3 comments

I am getting this error:

```
error: subprocess-exited-with-error

× Building wheel for mamba-ssm (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
  Building wheel for mamba-ssm (pyproject.toml) ... error
  ERROR: Failed building wheel for mamba-ssm
Successfully built causal-conv1d
Failed to build mamba-ssm
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (mamba-ssm)
```

I have tried to manually download the wheel from the releases page, but then I get:

```
ImportError                               Traceback (most recent call last)
/tmp/ipykernel_35/3881790.py in <cell line: 0>()
      3 import torch.nn as nn
      4 import torch.nn.functional as F
----> 5 from mamba_ssm.modules.mamba_simple import Mamba
      6 from torch.optim import Adam
      7 import torch.optim as optim

/usr/local/lib/python3.11/dist-packages/mamba_ssm/__init__.py in <module>
      1 __version__ = "2.2.4"
      2
----> 3 from mamba_ssm.ops.selective_scan_interface import selective_scan_fn, mamba_inner_fn
      4 from mamba_ssm.modules.mamba_simple import Mamba
      5 from mamba_ssm.modules.mamba2 import Mamba2

/usr/local/lib/python3.11/dist-packages/mamba_ssm/ops/selective_scan_interface.py in <module>
     16 from mamba_ssm.ops.triton.layer_norm import _layer_norm_fwd
     17
---> 18 import selective_scan_cuda

ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory
```

I am using Kaggle, and I get the same error on Colab. How can I fix it?
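A missing `libcudart.so.11.0` usually means the prebuilt wheel was compiled against CUDA 11, while current Kaggle/Colab images ship the CUDA 12 runtime, so the dynamic linker cannot resolve the library the extension asks for. As a quick diagnostic (a stdlib-only sketch of my own, not part of mamba-ssm), you can check which CUDA runtime sonames the linker can actually load:

```python
import ctypes

def cuda_runtime_available(soname: str) -> bool:
    """True if the dynamic linker can resolve and load the given soname."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# The traceback above failed on exactly this lookup; comparing CUDA 11
# vs CUDA 12 shows which runtime the environment actually provides.
for name in ("libcudart.so.11.0", "libcudart.so.12"):
    print(name, "->", cuda_runtime_available(name))
```

If only `libcudart.so.12` resolves, the wheel and the environment disagree on the CUDA major version, and you need a wheel (or a source build) matching the installed runtime.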

adib3552 avatar May 22 '25 10:05 adib3552

I was able to fix this issue by cloning the Mamba git repo, merging PR #729 into my local copy, and building from source. Commands are:

```
pip install "causal-conv1d>=1.4.0"
git clone https://github.com/state-spaces/mamba.git
cd mamba
CAUSAL_CONV1D_FORCE_BUILD=TRUE CAUSAL_CONV1D_SKIP_CUDA_BUILD=TRUE CAUSAL_CONV1D_FORCE_CXX11_ABI=TRUE pip install --no-build-isolation .
```

I first tried this with CUDA 12.8 and PyTorch 2.7.0, which did not work; switching to CUDA 12.4 and PyTorch 2.4.1 worked.

```
(********) root@*********:~# pip show torch
Name: torch
Version: 2.4.1+cu124
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /usr/local/lib/python3.11/dist-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-nccl-cu12, nvidia-nvjitlink-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: causal-conv1d, lm_eval, mamba-ssm, peft, torchaudio, torchvision

(**********) root@**********:~# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
```


I referenced these two threads while solving this:

#719 #634

YVSemlani avatar May 23 '25 20:05 YVSemlani


I have tried it, but I get an error when trying to import Mamba from mamba_ssm:

```
ImportError                               Traceback (most recent call last)
/tmp/ipykernel_35/299548030.py in <cell line: 0>()
      3 import torch.nn as nn
      4 import torch.nn.functional as F
----> 5 from mamba_ssm import Mamba
      6 from torch.optim import Adam
      7 import torch.optim as optim

/kaggle/working/mamba/mamba_ssm/__init__.py in <module>
      1 __version__ = "2.2.4"
      2
----> 3 from mamba_ssm.ops.selective_scan_interface import selective_scan_fn, mamba_inner_fn
      4 from mamba_ssm.modules.mamba_simple import Mamba
      5 from mamba_ssm.modules.mamba2 import Mamba2

/kaggle/working/mamba/mamba_ssm/ops/selective_scan_interface.py in <module>
     16 from mamba_ssm.ops.triton.layer_norm import _layer_norm_fwd
     17
---> 18 import selective_scan_cuda

ImportError: /usr/local/lib/python3.11/dist-packages/selective_scan_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN3c107WarningC1ESt7variantIJNS0_11UserWarningENS0_18DeprecationWarningEEERKNS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEb
```
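An undefined-symbol error like this typically means `selective_scan_cuda` was compiled against a different torch build than the one installed (a torch version mismatch or a C++ ABI mismatch, which is what the `CAUSAL_CONV1D_FORCE_CXX11_ABI` flag above relates to). One rough heuristic, sketched below as my own helper (not part of mamba-ssm): GCC mangles the post-CXX11 `std::string` inside a `__cxx11` inline namespace, so its presence in the unresolved symbol indicates the extension was built expecting the new C++ ABI:

```python
def uses_cxx11_abi(mangled_symbol: str) -> bool:
    """Heuristic: with _GLIBCXX_USE_CXX11_ABI=1, std::string mangles with a
    '__cxx11' inline-namespace component; old-ABI symbols lack it."""
    return "__cxx11" in mangled_symbol

# The unresolved symbol from the traceback above:
sym = ("_ZN3c107WarningC1ESt7variantIJNS0_11UserWarningENS0_18DeprecationWarningEEE"
       "RKNS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEb")
print(uses_cxx11_abi(sym))  # -> True
```

If the extension and the installed torch disagree on this (or were built for different torch versions), the loader cannot resolve the symbol; rebuilding the extension against the exact torch wheel in the environment is the usual fix.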

adib3552 avatar May 24 '25 04:05 adib3552

Hey @tridao, any chance you could build torch 2.7 wheels for causal_conv1d and mamba_ssm? And if you have access to an AMD GPU, ROCm wheels would also be welcome :)

Thanks!

maximilienleclei avatar May 25 '25 01:05 maximilienleclei