mamba
No grad accumulator for a saved leaf
I get an error: `INTERNAL ASSERT FAILED at "../torch/csrc/autograd/saved_variable.cpp":216, please report a bug to PyTorch. No grad accumulator for a saved leaf.`
It is thrown in `backward(ctx, dout)` of `mamba_ssm\ops\selective_scan_interface.py`, at this line:
(xz, conv1d_weight, conv1d_bias, x_dbl, x_proj_weight, delta_proj_weight,
conv1d_out, delta, A, B, C, D, delta_bias, scan_intermediates, out) = ctx.saved_tensors
Could you please give me some hints? Thank you.
RuntimeError: !grad_accumulator_.expired() INTERNAL ASSERT FAILED at "../torch/csrc/autograd/saved_variable.cpp":216, please report a bug to PyTorch. No grad accumulator for a saved leaf
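For context, here is a minimal sketch (not the actual mamba_ssm code; names are illustrative) of the save/unpack pattern the traceback points at: a custom `torch.autograd.Function` saves leaf tensors such as layer weights in `forward` and reads them back in `backward`, and the assertion fires when PyTorch can no longer find the grad accumulator of one of those saved leaves at backward time.

```python
import torch

class ToyScan(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight):
        out = x * weight
        # weight is typically a leaf Parameter; it is saved here, like
        # conv1d_weight / x_proj_weight etc. in selective_scan_interface.py
        ctx.save_for_backward(x, weight)
        return out

    @staticmethod
    def backward(ctx, dout):
        # the RuntimeError above is raised on this unpacking line
        x, weight = ctx.saved_tensors
        return dout * weight, dout * x

x = torch.randn(4, requires_grad=True)
w = torch.nn.Parameter(torch.randn(4))
ToyScan.apply(x, w).sum().backward()
print(x.grad, w.grad)
```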
Same question here. Did you solve it?
I got the same error with CUDA 11.7 and torch 2.0.0.
I have solved my problem.
First, I cleaned up the useless files in the directory where pip lives, to free up enough disk space.
Then, I copied the `./build/` directory from another server where compilation had succeeded to the server where it kept failing.
Last, I compiled with `CAUSAL_CONV1D_FORCE_BUILD=TRUE pip install --user -e .`. Maybe `--user` is necessary?
Although GCC prints a warning about an unsupported `gnu_property_type`, the mamba code can run.
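In case it helps others, a quick way to check whether the compiled causal-conv1d extension was actually built and is importable. The module name `causal_conv1d_cuda` is an assumption based on recent versions of that package, so adjust it for your install:

```python
# Sanity check (not from this thread): see whether the compiled CUDA
# extension for causal-conv1d imports; if it does not, forcing a rebuild
# as described above may help.
try:
    import causal_conv1d_cuda  # assumed extension module name
    print("compiled causal_conv1d extension is available")
except ImportError as err:
    print("compiled extension missing; consider rebuilding:", err)
```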